Next-Gen Data Scientist: Thinking Like an Economist

Generative AI (GenAI) products like OpenAI ChatGPT, Microsoft Bing, and Google Bard are disrupting the roles of data engineers and data scientists. According to a recent report by McKinsey, these GenAI products could potentially automate up to 40% of the tasks performed by data science teams by 2025. And Emad Mostaque, founder and CEO of Stability AI, recently stated that he believes that AI will transform the software development world to the point that “There will be no programmers in five years.”

These GenAI products can generate data pipelines, recommend machine learning algorithms, and optimize hyperparameter tuning with comparable or higher accuracy than human experts. In contrast to the Harvard Business Review’s 2012 declaration of “Data Scientist: The Sexiest Job of the 21st Century,” some data professionals may now feel a wee bit concerned about their future prospects in the face of generative AI products.

But there’s no need to fear, Underdog is here! GenAI products can not only make the Data Science team more effective and productive today, but also open opportunities to acquire new skill sets that make the Data Science team more valuable tomorrow. Let’s explore this evolution.

Data Science Today: GenAI to Become More Productive

Instead of seeking to compete with these GenAI technologies, maybe think about how these GenAI products can become Your Own Data Assistant (YODA), helping you become a more effective and productive data scientist or data engineer. By mastering these products, you can leverage their extensive knowledge base to enhance your own skills, knowledge, effectiveness, and productivity. These GenAI products are already proving to be effective in many data science tasks, including:

  • Data Preprocessing: provide guidance on data preprocessing techniques, such as handling missing values, feature scaling, and outlier detection. For example, a GenAI product can suggest different methods to impute missing values based on the type and distribution of the data, such as mean, median, mode, or k-nearest neighbors.
  • Generating Code: for data pipelines, workflows, or products, such as software code, SQL queries, or product sketches. For example, a GenAI product can generate code snippets based on natural language descriptions or pseudocode.
  • Exploratory Analysis: explore datasets, suggest visualizations, create statistical summaries, and uncover patterns and correlations in the data. For example, a GenAI product can create interactive dashboards or charts based on the type and purpose of the data.
  • Generating Synthetic Data: improve the training of AI / ML models when the original data is scarce, sensitive, or biased. For example, a GenAI product can create realistic but fake data that preserves the statistical properties and relationships of the original data.
  • Algorithm Selection: assist in selecting appropriate algorithms based on the nature of the problem, data characteristics, and desired outcomes. For example, a GenAI product can recommend different types of algorithms for classification, regression, clustering, or dimensionality reduction problems.
  • Hyperparameter Tuning: provide recommendations for tuning hyperparameters of machine learning models. For example, a GenAI product can use techniques such as grid search, random search, or Bayesian optimization to find the optimal values for hyperparameters such as learning rate, regularization parameter, or number of hidden layers.
  • Debugging and Troubleshooting: diagnosing and resolving data pipelines, model training, or feature engineering issues. For example, a GenAI product can identify and fix errors in code syntax, logic, or performance.
  • Documentation (Yay!!): assist in writing documentation and generating code snippets. For example, a GenAI product can create comments or annotations for code blocks that explain what the code blocks do and how they work.
  • Data Science Tutoring: teach data science concepts, statistical techniques, programming languages, and tools. For example, a GenAI product can provide interactive lessons or exercises that cover topics such as probability theory, linear algebra, Python programming, or the TensorFlow framework.
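To make the data preprocessing bullet concrete, here is a minimal sketch of the kind of imputation rule a GenAI assistant might suggest. The toy data and the mean-versus-median skew heuristic are invented for illustration, not drawn from any specific product:

```python
from statistics import mean, median

def impute_missing(values):
    """Fill None entries with the mean for roughly symmetric data,
    or the median when the data look skewed (a common rule of thumb)."""
    observed = [v for v in values if v is not None]
    m, md = mean(observed), median(observed)
    # A large gap between mean and median suggests skew: prefer the median.
    fill = md if abs(m - md) > 0.1 * abs(md or 1) else m
    return [fill if v is None else v for v in values]

# Toy column: the outlier (100) skews the mean, so the median (2) is used.
result = impute_missing([1, 2, 2, 3, None, 100])
print(result)
```

In practice an assistant would more likely point you at library implementations such as scikit-learn's imputers; the point here is only the shape of the decision it can help you make.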

Figure 1: GenAI: Creating Your Own Data Assistant (YODA)

It’s important to remember that the more these GenAI products are used, the more effective they will become at performing these data science tasks. Attempting to out-learn them is not the right strategy; personally, I’ve already abdicated my ability to calculate square roots to my $1.99 calculator. While it’s uncertain whether GenAI products will ever fully solve the “80% of data science time is spent on data preparation” challenge, it’s worth considering how you can use the extra time for personal and professional development, making you a more valuable asset.

Data Science Tomorrow: GenAI to Become More Valuable

Becoming a more productive data scientist or data engineer is great. But it’s like paving the cowpath. To be successful and reach your full potential, don’t become the best cowpath paver. Instead, develop the skills and competencies to reinvent the cowpath.

If you really want to transform yourself, invest in areas that enhance the value and applicability of your data science skills and competencies. Here are a few areas where you can reinvest that time freed up by the GenAI products to increase your personal and professional value (Figure 2):

  • Domain-specific Knowledge: Explore and uncover domain-specific insights (predicted behavioral and performance propensities) to address industry-specific challenges. For example, a data professional in the healthcare industry can use data and analytics to identify the factors that influence patient outcomes, satisfaction, or loyalty.
  • Economic Literacy: Leverage economic techniques and concepts to ensure AI models deliver relevant, meaningful, responsible, and ethical outcomes. For example, a data professional in the banking industry can use cost-benefit analysis, risk assessment, or social welfare functions to evaluate the trade-offs and impacts of different AI models on customers, stakeholders, or society.
  • Design Thinking: Empathize and empower key constituents in identifying desired outcomes, key decisions, benefits, and impediments on their journeys. For example, a data professional in the education industry can use design thinking methods such as personas, empathy maps, or journey maps to understand the needs, goals, pain points, or emotions of students, teachers, or administrators.
  • Value Engineering: Identify how organizations define and measure value effectiveness and the role of data and analytics to power their value-creation processes and use cases. For example, a data professional in the retail industry can use value engineering techniques such as value stream mapping to identify the sources of value creation or waste reduction for customers or suppliers.
  • User Experience Design: Design products or services that provide meaningful, relevant, easy-to-use experiences to users (solve the last mile usage and adoption challenges). For example, a data professional in the entertainment industry can use user experience design principles such as usability testing, feedback loops, or gamification to enhance the engagement, enjoyment, or retention of users.
  • Storytelling: Combine data analysis and storytelling to effectively communicate insights to your target audience. The goal is to create engaging narratives that help the audience understand key insights, motivating action or decision-making based on the data. By bridging the gap between data analysis and communication, storytelling with data analysis enhances accessibility and impact, making data-driven stories more memorable and actionable.
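To ground the Economic Literacy bullet above, here is a hedged sketch of a cost-benefit calculation for choosing a model's operating threshold. All the dollar figures and rates below are hypothetical:

```python
def expected_value(tp_rate, fp_rate, benefit_tp=100.0, cost_fp=25.0):
    """Expected value per decision: benefit of a correct intervention
    minus the cost of a false alarm (hypothetical dollar amounts)."""
    return tp_rate * benefit_tp - fp_rate * cost_fp

# Hypothetical operating points: (threshold, true-positive rate, false-positive rate).
operating_points = [(0.3, 0.90, 0.40), (0.5, 0.75, 0.15), (0.7, 0.50, 0.05)]
best = max(operating_points, key=lambda p: expected_value(p[1], p[2]))
print(f"Best threshold by expected value: {best[0]}")
```

The point an economist would make: the most statistically accurate threshold is not necessarily the most valuable one; the economics of the errors decide.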

Figure 2: Upskilling Your Data Science Skills

Data professionals should embrace this opportunity to update their skill sets. Shifting from algorithmic models to economic models will increase their value to the organization and society overall.

Next-Gen Data Scientist: Thinking Like an Economist

The similarities between AI (and the AI Utility Function) and economics are astounding. Both AI and economic models are based on data and algorithms as well as operational assumptions and “value” projections, where the definition of value, and the dimensions against which value is defined, are critical for driving model effectiveness.

AI models, like economic models, should account for the trade-offs, externalities, or ethical issues that arise from their actions or outcomes. Unfortunately, many AI models are developed without considering the broader economic implications or consequences of their AI-driven decisions and recommendations. This is where learning to “think like an economist” can have a material impact on the effectiveness of your data science teams and their AI models.

Economic models are explicitly concerned with the assumptions and values that underlie their theories and policies. They try to account for the costs and benefits, incentives and constraints, and moral and social implications of actions and outcomes. An economic mindset can enhance the AI Utility Function by incorporating more human and social factors. The result: AI models that deliver more meaningful, relevant, responsible, and ethical outcomes.
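As a sketch of what incorporating human and social factors into a utility function could look like, the snippet below scores candidate models on more than accuracy. The metrics, numbers, and weights are invented for illustration:

```python
def utility(accuracy, fairness_gap, social_cost, w_acc=1.0, w_fair=0.5, w_cost=0.2):
    """Utility of a candidate model: reward accuracy, penalize the gap in
    error rates between groups and any externalized social cost.
    The weights are hypothetical and would be set with stakeholders."""
    return w_acc * accuracy - w_fair * fairness_gap - w_cost * social_cost

models = {
    "accurate_but_unfair": utility(0.95, 0.30, 0.40),
    "balanced": utility(0.90, 0.05, 0.10),
}
winner = max(models, key=models.get)
print(winner)
```

Under these weights the slightly less accurate model wins, which is exactly the trade-off an economic mindset is meant to surface.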

Yea, to advance your data science career, begin to “Think Like an Economist.”

Real-time deepfake detection: How Intel Labs uses AI to fight misinformation

A few years ago, deepfakes seemed like a novel technology whose makers relied on serious computing power. Today, deepfakes are ubiquitous and have the potential to be misused for misinformation, hacking, and other nefarious purposes.

Intel Labs has developed real-time deepfake detection technology to counteract this growing problem. Ilke Demir, a senior research scientist at Intel, explains the technology behind deepfakes, Intel's detection methods, and the ethical considerations involved in developing and implementing such tools.

Deepfakes are videos, speech, or images where the actor or action is not real but created by artificial intelligence (AI). Deepfakes use complex deep-learning architectures, such as generative adversarial networks, variational auto-encoders, and other AI models, to create highly realistic and believable content. These models can generate synthetic personalities, lip-sync videos, and even text-to-image conversions, making it challenging to distinguish between real and fake content.

The term deepfake is sometimes applied to authentic content that has been altered, such as the 2019 video of former House Speaker Nancy Pelosi, which was doctored to make her appear inebriated.

Demir's team examines computational deepfakes, which are synthetic forms of content generated by machines. "The reason that it is called deepfake is that there is this complicated deep-learning architecture in generative AI creating all that content," says Demir.

Cybercriminals and other bad actors often misuse deepfake technology. Some use cases include political misinformation, adult content featuring celebrities or non-consenting individuals, market manipulation, and impersonation for monetary gain. These negative impacts underscore the need for effective deepfake detection methods.

Intel Labs has developed one of the world's first real-time deepfake detection platforms. Instead of looking for artifacts of fakery, the technology focuses on detecting what's real, such as heart rate. Using a technique called photoplethysmography (PPG), the detection system analyzes color changes in the veins caused by blood oxygen content, changes that are computationally visible, and so can detect whether a personality is a real human or synthetic.

"We are trying to look at what is real and authentic. Heart rate is one of [the signals]," said Demir. "So when your heart pumps blood, it goes to your veins, and the veins change color because of the oxygen content that color changes. It is not visible to our eye; I cannot just look at this video and see your heart rate. But that color change is computationally visible."
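The signal Demir describes can be caricatured in a few lines: average a color channel per frame, then look for a dominant periodicity in the heart-rate band. The synthetic "video" below and the zero-crossing estimator are purely illustrative assumptions, not Intel's method:

```python
import math

FPS = 30  # frame rate of the hypothetical video

def estimate_bpm(green_means):
    """Estimate a pulse rate from per-frame mean green-channel values
    by counting zero crossings of the de-meaned signal."""
    avg = sum(green_means) / len(green_means)
    signal = [g - avg for g in green_means]
    # A sign change between consecutive frames is one zero crossing.
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if (a < 0) != (b < 0))
    seconds = len(green_means) / FPS
    return (crossings / 2) / seconds * 60  # two crossings per heartbeat cycle

# Synthetic 10-second clip: a subtle 1.2 Hz (72 bpm) pulse on a constant baseline.
frames = [0.5 + 0.01 * math.sin(2 * math.pi * 1.2 * t / FPS + 0.7)
          for t in range(10 * FPS)]
bpm = estimate_bpm(frames)
print(round(bpm))
```

A real detector works on detected face regions and uses far more robust spectral estimation, but the core idea is the same: the pulse leaves a periodic, computationally visible trace in skin color.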

Intel's deepfake detection technology is being implemented across various sectors and platforms, including social media tools, news agencies, broadcasters, content creation tools, startups, and nonprofits. By integrating the technology into their workflows, these organizations can better identify and mitigate the spread of deepfakes and misinformation.

Despite the potential for misuse, deepfake technology has legitimate applications. One of the early uses was the creation of avatars to better represent individuals in digital environments. Demir refers to a specific use case called "MyFace, MyChoice," which leverages deepfakes to enhance privacy on online platforms.

In simple terms, this approach allows individuals to control their appearance in online photos, replacing their face with a "quantifiably dissimilar deepfake" if they want to avoid being recognized. These controls offer increased privacy and control over one's identity, helping to counteract automatic face-recognition algorithms.

Ensuring ethical development and implementation of AI technologies is crucial. Intel's Trusted Media team collaborates with anthropologists, social scientists, and user researchers to evaluate and refine the technology. The company also has a Responsible AI Council, which reviews AI systems for responsible and ethical principles, including potential biases, limitations, and possible harmful use cases. This multidisciplinary approach helps ensure that AI technologies, like deepfake detection, serve to benefit humans rather than cause harm.

"We have legal people, we have social scientists, we have psychologists, and all of them are coming together to pinpoint the limitations to find if there's bias — algorithmic bias, systematic bias, data bias, any type of bias," says Demir. The team scans the code to find "any possible use cases of a technology that can harm people."

As deepfakes become more prevalent and sophisticated, developing and implementing detection technologies to combat misinformation and other harmful consequences is increasingly important. Intel Labs' real-time deepfake detection technology offers a scalable and effective solution to this growing problem.

By incorporating ethical considerations and collaborating with experts across various disciplines, Intel is working towards a future where AI technologies are used responsibly and for the betterment of society.

It’s Time OpenAI Launched GPT-5

Days after the release of GPT-4 in March, AI experts rallied against OpenAI to pause giant AI experiments, which basically meant not training anything beyond the company’s latest model. The experts were probably scared of AI, or just wanted to stop the Microsoft-backed AI company midway as it was becoming hard for them to catch up. Interestingly, the tables may now be turning against OpenAI.

There is a lot of competition building against OpenAI. Even though the founders said during their visit to Israel that they believe no one can replicate what the company has built, plenty of other startups and companies may be capable of stealing OpenAI’s customers. One of the most recent examples of this is Anthropic introducing Claude-2.

If we compare the capabilities of GPT-4 against Claude-2, Anthropic’s offering outperforms OpenAI’s in a lot of cases. Twitter is filled with people using the LLM’s abilities to write code and to make sense of uploaded documents. Claude-2’s large context length of 100K tokens against GPT-4’s 32K, at a much lower price, also makes it suitable for a lot of companies and their use cases.

OpenAI wasted enough time already

OpenAI, with GPT, had given other companies a run for their money. Everyone was trying to replicate its success. It seems OpenAI decided to let them play catch-up.

After the pause letter signed by the likes of Elon Musk and Steve Wozniak, Altman decided not to train the successor to GPT-4 “for some time,” saying that the company in any case had a lot of work to do before starting to build the model. This was in April. Cut to June, when Altman said that OpenAI had not yet started training GPT-5 and was only going to focus on building new ideas.

“When we finished GPT-4, it took us more than six months until we were ready to release it,” Altman told Economic Times when he visited India. This means the company had already finished training GPT-4 when it was about to release ChatGPT, based on GPT-3.5.

After releasing ChatGPT, based on GPT-3.5, in November 2022, it took OpenAI around four months to release GPT-4. It has been five months now since the release of GPT-4, and ever since, OpenAI has been introducing incremental updates to its technology; nothing substantial, one might say.

Interestingly, incrementally updating its software is an approach shared by founders Altman and Greg Brockman. On the Lex Fridman podcast, Altman said that even though many people might not agree with the company’s approach, he believes that releasing AI models incrementally makes sense to keep AI under control.

We'll stare at the empirical data as it's coming in:
1. We can measure progress locally on various parts of our research roadmap (e.g. for scalable oversight)
2. We can see how well alignment of GPT-5 will go
3. We'll monitor closely how quickly the tech develops

— Jan Leike (@janleike) July 5, 2023

Similarly, in April, Brockman tweeted his views on the pace of AI progress, saying, “it’s easy to create a continuum of incrementally-better AIs (such as by deploying subsequent checkpoints of a given training run), which presents a safety opportunity very unlike our historical approach of infrequent major model upgrades.”

GPT-4.5 is already here!

There have been claims that the next version of GPT will be superintelligent. In a recent blog post about the alignment of AI models, OpenAI said that superintelligence might be achieved within four years, so the timelines do not match. If the company plans to compete with the rising capabilities of other companies’ AI models, it might ditch the halt on training GPT-5 and actually build it right away.

Moreover, the only thing close to the jump from GPT-3.5 to GPT-4 in terms of capabilities has been the introduction of Code Interpreter.

In a podcast on Latent Space, Simon Willison, Alex Volkov, Aravind Srinivas, and Alex Graveley argued that Code Interpreter is actually GPT-4.5. It might just be that the company does not want to use that terminology for now, given all the backlash over the pause letter.

It is possible that OpenAI is currently training GPT-4 with different modalities, similar to what the company did in going from GPT-3 to GPT-3.5. The introduction of plugins, the API, and multimodality all hint toward the training of GPT-5.

The post It’s Time OpenAI Launched GPT-5 appeared first on Analytics India Magazine.

This is how generative AI will change the gig economy for the better

Artificial intelligence will augment work and could add opportunities to the job market rather than tank it, according to tech executive Gali Arnon. While some fear that AI will erase huge numbers of roles, Arnon argues that AI will accelerate the pace of job creation, augment work, and expand startup opportunities.

In an interview with ZDNET, Arnon, CMO of Fiverr, a platform that connects freelancers with work opportunities, says generative artificial intelligence is smart, but it can't dominate the economy because its capabilities are narrow and limited to specific tasks.

Arnon says Fiverr data shows that freelancers are using AI as a "tool" that augments creative work, but doesn't replace humans. Instead, she says AI is creating "new jobs, new opportunities" because it speeds up manual and analog work, allowing freelancers to spend more time on creative and interpersonal tasks.

When it comes to integrating AI into business services, there are several examples that demonstrate the technology's potential for augmenting human work. For instance, generative AI can help writers and journalists by quickly extracting key points and quotes from a transcript, saving time and improving efficiency.

AI can also be used to create artwork, optimize customer support processes, and even aid in code-writing processes. The key to success is finding the right balance between using AI and maintaining the human touch.

Arnon says creative professionals are learning to master prompts for generative AI systems. Basic prompts produce low-quality results, but experts can chain prompts to multiple AI systems to produce unique and high-quality images, audio, and text.

She says some of the best creative professionals edit AI-generated outputs in other applications, such as Adobe's Creative Cloud. The end results can be high in quality and unique in style. Arnon says professionals are augmenting their skills with AI, "to use it in a way that will just set the bar higher, set a new standard" of quality.
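Prompt chaining of the kind Arnon describes can be sketched as a simple pipeline. The `call_model` function below is a hypothetical stand-in for whatever image or text model each stage would actually call:

```python
def call_model(system, prompt):
    # Hypothetical stand-in for an API call to a specialized generative model.
    return f"[{system}] {prompt}"

def chain(brief):
    """Feed each model's output into the next prompt: draft, refine, lay out."""
    draft = call_model("ideation", f"Brainstorm concepts for: {brief}")
    refined = call_model("refinement", f"Pick the strongest concept in: {draft}")
    return call_model("layout", f"Produce final copy from: {refined}")

result = chain("a poster for a jazz festival")
print(result)
```

The craft Arnon points to lives in the wording of each stage's prompt and in the manual editing between stages, not in the plumbing itself.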

However, the ethical considerations around using generative AI in creative work are nuanced and challenging. One question employers must answer for their organizations is whether using AI-generated content, such as artwork or text, is considered cheating.

Arnon believes that as long as freelancers are transparent about their use of AI tools — and do not claim the work as their own — there is no ethical issue. The real challenge lies in ensuring that AI is used responsibly and ethically without undermining businesses or society at large.

In the coming months, Arnon believes that generative AI will continue to play a significant role in the future of freelancing and work. She says Fiverr is a microcosm of the broader workforce and reflects emerging trends in the job market. By embracing AI and leveraging its capabilities, businesses and freelancers can create new opportunities and jobs, ultimately benefiting the gig economy.

However, ensuring the ethical and responsible use of AI is crucial for its successful integration into the workforce. Through collaboration between regulators, businesses, and AI developers, it is possible to strike the right balance between innovation and ethical considerations, paving the way for a more efficient and dynamic workplace.

"We need to find the right checks and balances," Arnon says, "but eventually, I really believe humanity will know how to use AI, and it will make us only better."

The First Half of 2023: Data Science and AI Developments

A lot has happened in the first half of 2023, with significant advancements in data science and artificial intelligence. So much, in fact, that it has been hard to keep up with it all. We can definitely say that the first half of 2023 has shown rapid progress that we did not expect.

So rather than dwelling on how we’re all wowed by these innovations, let’s talk about them.

Natural Language Processing

I’m going to start off with the most obvious: Natural Language Processing (NLP). It is a field that had been building in the dark and in 2023 came to light.

These advancements were proven by OpenAI’s ChatGPT, which took the world by storm. Since GPT-4’s official release earlier in the year, expectations have already moved to GPT-5. OpenAI has released plugins that improve people’s day-to-day lives and the workflows of data scientists and machine learning engineers.

And we all know that after ChatGPT’s release, Google released Bard AI, which has proven popular with individuals, businesses, and more. Bard AI has been competing with ChatGPT for the best-chatbot position, providing similar services such as streamlining tasks for machine learning engineers.

In the midst of these chatbot releases, we have seen large language models (LLMs) appear out of thin air. The Large Model Systems Organization (LMSYS Org), an open research organization founded by students and faculty from UC Berkeley, created Chatbot Arena, an LLM benchmark that makes models more accessible to everyone through co-development of open datasets, models, systems, and evaluation tools.

AutoML

So now that people are getting used to chatbots that answer questions for them and make their work and personal lives much easier, what about data analysts and machine learning specialists?

Well, they’ve been using AutoML, a powerful tool that lets data professionals such as data scientists and machine learning engineers automate data preprocessing, tune hyperparameters, and perform complex tasks such as feature engineering. With the advancements in data science and AI, we have naturally seen high demand for data and AI specialists. However, because progress is moving at such a rapid rate, we are also seeing a shortage of these professionals. Being able to explore, analyze, and predict data in an automated process will therefore improve the success of a lot of companies.

Not only will it free up time for data specialists, but organizations will also have more time to expand and innovate on other tasks.
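The AutoML loop described above, in its most stripped-down form, fits several candidate models and keeps the one with the lowest validation error. This sketch uses two toy candidates and invented data; real AutoML systems search far larger spaces of models and hyperparameters:

```python
def fit_mean(train):
    # Baseline candidate: always predict the mean of the training targets.
    m = sum(y for _, y in train) / len(train)
    return lambda x: m

def fit_linear(train):
    # Simple linear regression via the closed-form least-squares solution.
    n = len(train)
    sx = sum(x for x, _ in train)
    sy = sum(y for _, y in train)
    sxx = sum(x * x for x, _ in train)
    sxy = sum(x * y for x, y in train)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return lambda x: slope * x + intercept

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

def auto_select(train, valid, candidates):
    """Tiny AutoML loop: fit every candidate, keep the lowest validation error."""
    fitted = {name: fit(train) for name, fit in candidates.items()}
    return min(fitted, key=lambda name: mse(fitted[name], valid))

# Toy data following y = 2x + 1.
train = [(x, 2 * x + 1) for x in range(8)]
valid = [(x, 2 * x + 1) for x in range(8, 12)]
best = auto_select(train, valid, {"mean": fit_mean, "linear": fit_linear})
print(best)
```

The same loop, scaled up with cross-validation and hyperparameter search, is the essence of what AutoML automates for data professionals.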

Generative AI

If you were around for the outburst of chatbots, you would have seen the words ‘Generative AI’ being thrown around. Generative AI is capable of generating text, images, or other forms of media based on user prompts. Just like the above advancements, generative AI is helping different industries with tasks to make their lives easier.

It has the ability to produce new content, replace repetitive tasks, work on customized data, and generate pretty much anything you want. If generative AI is new to you, you will want to learn about Stable Diffusion, one of the foundational models behind the generative AI boom. If you are a data scientist or data analyst, you may have heard of PandasAI, the generative AI Python library; if not, it is an open-source toolkit that integrates generative AI capabilities into pandas for simpler data analysis.

But with these generative AI tools and software being released, Are Data Scientists Still Needed in the Age of Generative AI?

Deep Learning

Deep learning is continuing to thrive. With the recent advancements in data science and AI, more time and energy is being pumped into research in the field. As a subset of machine learning concerned with algorithms and artificial neural networks, it is widely used in tasks such as image classification, object detection, and face recognition.

As we experience the fourth industrial revolution, deep learning algorithms are allowing machines to learn from data much the way humans do. We are seeing more self-driving cars on the roads, fraud detection tools, virtual assistants, healthcare predictive modeling, and more.

2023 has shown deep learning at work in automated processes, robotics, blockchain, and various other technologies.

Edge Computing

With all of this happening, you must think these computers are pretty tired, right? To meet the advancements of AI and data science, companies require computers and systems that can support them. Edge computing brings computation and data storage closer to the sources of data. When working with these advanced models, edge computing provides real-time data processing and allows for smooth communication between devices.

For example, with LLMs being released seemingly every two seconds, it was obvious that organizations would require effective systems such as edge computing to be successful. Google released TPU v4 this year: computing resources that can handle the high computational needs of machine learning and artificial intelligence.

Due to these advancements, we are seeing more organizations move from the cloud to the edge to fit their current and future requirements.

Ethical AI and Data Science

A lot has been happening, and in a short period of time, making it very difficult for governments to keep up. Governments around the world are raising the question of how these AI applications affect the economy and society, and what the implications are.

People are concerned about the bias and discrimination, privacy, transparency, and security of these AI and data science applications. So what are the ethical aspects of AI and data science, and what should we expect in the future?

We already have the European AI Act pushing a framework that groups AI systems into four risk areas. OpenAI CEO Sam Altman testified about the concerns and possible pitfalls of the new technology at a US Senate committee hearing on Tuesday the 16th. Although a lot of advancements are happening in a short period of time, many people are concerned. Over the next six months we can expect more laws to be passed and regulations and frameworks to be put in place.

Wrapping it up

If you haven’t been keeping up with AI and data science in the last 6 months, I hope this article has provided you with a quick breakdown of what’s been going on. It will be interesting to see over the next 6 months how these advancements get embraced whilst being able to ensure responsible and ethical use of these technologies.
Nisha Arya is a data scientist, freelance technical writer, and community manager at KDnuggets. She is particularly interested in providing data science career advice, tutorials, and theory-based knowledge around data science. She also wishes to explore the different ways artificial intelligence can benefit the longevity of human life. A keen learner, she seeks to broaden her tech knowledge and writing skills while helping guide others.

Data Science Hiring Process at Lowe’s India

American retail giant Lowe’s operates a chain of over 2,100 stores in North America and is the second-largest hardware chain in the US and the world, trailing behind Home Depot.

Headquartered in Bengaluru, Lowe’s India boasts a robust team of over 3,600 members who deliver exceptional service to 19 million customers. To make this possible, data plays an important role, particularly in the retail industry, where it is vital for decision-making processes such as search, product recommendations, inventory management, supply chain operations, and demand forecasting.

And the data science team at Lowe’s India is at the forefront of this.

The Data, Analytics, & Computational Intelligence (DACI) team at Lowe’s supports the company’s global business endeavours by delivering prompt, relevant, and profoundly actionable data, analytics, and cutting-edge analytic services and solutions.

“Some of the key areas of innovation by the DACI team include computer vision applications, homegrown machine learning platform and many operational and enterprise analytics insights,” said Amit Kapur, Vice President, Data & ML Platforms and Data Governance in an exclusive interaction with AIM.

The DACI team is divided into ‘products’ and ‘platforms’ teams, spread across the US and India. The DACI platforms team is led largely from India and comprises over 300 members, including data engineers, software engineers, data analysts, and data scientists at all levels.

For instance, the merchandising teams have harnessed data insights provided by the DACI team to elevate and optimise their sourcing strategies. Moreover, the Stores teams at Lowe’s are leveraging the computer vision platform developed by the data science team, bolstering customer service and boosting productivity. Additionally, the marketing teams have devised customer lifetime value models, enabling them to gain deeper insights into customers and target them more effectively. The finance team employs machine learning models to forecast tax codes, thereby enhancing accuracy and efficiency in tax-related operations.

Currently, DACI has over ten open positions for data scientists in Bangalore. These roles require a range of experience levels, from four to fifteen years.

Interview Process

The data science hiring process involves resume screening to assess qualifications, followed by a technical assessment to evaluate proficiency and problem-solving skills. A deep-tech interview focuses on algorithmic problem-solving, communication, and analytical thinking.

The final round determines a candidate’s fit by assessing their ability to communicate clearly, solve problems efficiently, and demonstrate the business impact of their work. Aligning contributions with strategic goals provides an advantage, especially at senior levels.

Some common key result areas include technical proficiency, machine learning algorithms, data visualisation, statistical and mathematical aptitude, and relevant experience in the retail industry.

“Given that Lowe’s is a home improvement retailer, candidates with expertise in data analysis and exploration, particularly in the retail sector, have a competitive edge,” added Kapur.

Expectations

Meanwhile, the data science team at Lowe’s requires a range of skills including proficiency in machine learning algorithms, statistical analysis, problem-solving approaches, strong Python coding skills, and data visualisation.

In terms of technology capabilities, the team utilises ML/AI, deep learning, clustering, time series forecasting, Python, SQL, linear regression, and statistical modeling. They employ various tools, applications, and frameworks such as Python, ETL (Extract, Transform, Load), Scikit-learn for machine learning algorithms, MLflow, Kubeflow, Feast, and Explainable AI to support their work.

Additionally, Lowe’s values an innovation mindset, encouraging creative thinking and continuous improvement for the benefit of its customers and associates.

Kapur points out that candidates often make the mistake of overly emphasising theoretical knowledge during interviews instead of demonstrating the practical application of their skills to real-world problems. Candidates should highlight past projects, explaining their approach, the techniques used, and the results achieved.

“Especially at senior levels, it is important to quantify the business impact of their work. Candidates who can articulate how their work will contribute to the company’s strategic goals and can communicate their findings in a clear and concise manner will have a significant advantage during the interview process,” Kapur commented.

Work Culture

Lowe’s takes pride in its unique work culture, where associates are driven by core values like customer focus, courage, action, results, and continuous learning. Trust, respect, empathy, and agility form the foundation of the organisation.

“Our people policies and benefits are inclusive and are designed to ensure we have a diverse talent pool. Currently, our gender ratio is at 33% – much ahead of the industry,” said Kapur.

Special perks at Lowe’s include ESOP, insurance, workcations, wellness programs, parental support, and skill development training, among others.

“If you’re driven, eager to learn, and can solve complex problems at scale, Lowe’s offers the perfect opportunity to shape the future of omnichannel retail,” concluded Kapur.

The post Data Science Hiring Process at Lowe’s India appeared first on Analytics India Magazine.

OpenAI’s Malevolent Plan Backfires

Soon after the launch of GPT-4, OpenAI CEO Sam Altman appeared in Congress to ‘educate’ regulators on the potential harms of AI. From taking away jobs to stating that safety is ‘vital’ to OpenAI’s work, Altman perfectly played the role of a measured AI doomer to whip regulators into a frenzy to curtail the quickly-growing AI market. While many criticised this move as a way to increase OpenAI’s lead in the ecosystem, it now seems like this has backfired.

According to reports, the FTC (Federal Trade Commission) has opened an expansive investigation into OpenAI’s activities, mainly over concerns about personal reputations and the risk of leaking personal data. Earlier this week, the regulator sent OpenAI a document detailing its concerns over the company’s products. This not only underscores the AI company’s hypocrisy, but also represents a strong regulatory threat against it, possibly putting an end to the free rein it has been enjoying in the emerging AI market.

FTC’s opening salvo

Taking a look at the document released by the Washington Post, the FTC has requested a variety of information from the company, including the whole database of third parties using its APIs. What’s more, the regulator has even asked OpenAI to pull back the curtain on its top models, asking the company to describe in detail the research behind its products.

The FTC has also requested the training data OpenAI used, as well as information on the reinforcement learning from human feedback process. It has also requested OpenAI to shed some light on the ChatGPT security issue that occurred in March of this year, which allowed some personal information to be leaked.

Other requested information includes details on the process of retraining and refining LLMs, risk and safety assessments, and personal information protection. This is the meat of the request, as the FTC’s concerns are made clear in Section 24 of the interrogatories section.

The regulator also wants to know the capacity of OpenAI’s LLMs to generate statements about individuals, especially statements containing personal information. While the regulator has also expressed concerns over the LLMs’ capacity to make ‘misleading, disparaging, or harmful statements’, the crux of the matter resides within how OpenAI is handling personal information.

This also goes in line with the FTC’s commitment to stick to the current civil rights laws on discrimination. FTC Chair Lina Khan has specifically stated that “There is no AI exemption to the laws on the books”, suggesting that the FTC will stick to the current regulatory framework until the Biden administration creates a new one.

The FTC’s moves have put Altman on the backfoot, as evidenced by his tweet thread on the matter. While decrying the fact that the FTC’s request was leaked to the press, he stated,

“It’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law. Of course we will work with the FTC.”

This comment stands in line with the other messages Altman has been sending to regulators, speaking about how AI is risky while OpenAI’s products are built ‘on top of years of security research’. This is a narrative we’ve seen before, backed up in this instance by a reiteration that OpenAI is not ‘incentivised to make unlimited returns’ due to its capped-profits structure.

Tricking regulators no more?

Sam Altman has been on a global charm offensive to convince regulators of the potential impact of AI algorithms. Calling it a ‘diplomatic mission’, the CEO has taken it upon himself to be the champion of AI to the world’s regulators. This strategy seems to be a leaf out of lobbyists’ books, curtailing regulation for one company while constraining the market with heavy-handed laws.

Hidden behind his meetings with global regulators is a sinister agenda to expand OpenAI’s products all over the world with as little regulatory oversight as possible. Reports have emerged that Altman has lobbied the EU to water down its stringent AI Act to allow OpenAI a freer hand in the data privacy-centric EEA. What’s worse, the strategy actually worked, as the latest draft of the Act does not classify GPT as a high-risk system, in line with OpenAI’s requests for the same.

Under the new act, providers of foundational models need only comply with a small handful of requirements, not the stringent regulation they would have faced as high-risk systems. Sarah Chander, a senior policy advisor at European Digital Rights, said of the move,

“They got what they asked for…OpenAI, like many Big Tech companies, have used the argument of utility and public benefit of AI to mask their financial interest in watering down the regulation.”

While Altman outwardly has asked for the AI field to be regulated as a whole, it seems that he is ensuring exceptions can be made for OpenAI’s financial gain. This means that OpenAI will be allowed to ‘self-regulate’ where other companies bow down to the needs of regulators. Now, it seems that the FTC has caught on to this game, going after the biggest fish in the sea for its first catch.

With the inquiry into OpenAI, the FTC has indirectly revealed that they have seen through Altman’s guise, as they are striking directly into the heart of the matter. The company is currently under fire for multiple copyright violations, which the FTC has used as an inroad to raise concerns over OpenAI’s handling of personal information. All in all, it seems as though a storm is brewing on the horizon for OpenAI, and Altman is in the centre of it all.

The post OpenAI’s Malevolent Plan Backfires appeared first on Analytics India Magazine.

The Real Reason Why OpenAI Partnered with AP 

When Should Newsrooms Use AI?

It is widely accepted that one of the key limitations of ChatGPT is its lack of information beyond September 2021. Users found it frustrating that software with immense potential for creating various types of content struggles to provide up-to-date facts. Consequently, there were instances where ChatGPT would provide users with inaccurate information, leading to an unpleasant experience.

It looks like OpenAI has acknowledged this and recently took an unexpected step to partner with the Associated Press. The Associated Press and OpenAI reached an agreement on Thursday to share access to select news content and technology as they examine potential use cases for generative AI in news products and services.

This collaboration between a major news organization and an artificial intelligence company marks one of the initial significant partnerships of its kind. The emergence of this partnership also raises questions regarding the motivation behind this sudden alliance and the selection of the Associated Press (AP) as the chosen news organization.

To Stay Relevant

In a joint statement, OpenAI and the Associated Press (AP) announced an arrangement where OpenAI will license a portion of AP’s text archive, while AP will leverage OpenAI’s technology and product expertise. This partnership involves a mutually beneficial exchange, with both organizations capitalizing on each other’s strengths and resources.

The objective of this collaboration is to utilize the valuable data obtained to enhance the effectiveness of future iterations of ChatGPT and other related tools developed by OpenAI. This partnership does not revolve around AI chatbots generating content, but rather focuses on OpenAI leveraging access to a curated selection of news content and technological resources from the AP archives, spanning back to 1985. To stay relevant, OpenAI needs access to news and current affairs from around the world. With AP’s data coming in, ChatGPT would be able to provide users with more current news.

It is worth noting that AP currently does not use any generative AI technology in curating news. This alliance is more of an experiment for AP in how it can leverage this new technology in the newsroom, which could eventually shape how the news industry as a whole accepts it.

The collaboration represents a logical evolution for AP, as the news organization has been integrating AI into its journalistic practices for almost ten years. This journey began in 2014 when AP started utilizing AI to automate corporate earnings reports, and over time, it has expanded to encompass various areas. This includes generating stories that provide previews and recaps of sporting events and leveraging AI technology to facilitate transcription tasks for live events like press conferences, which involve audio and video content.

To Skip Legal Trouble

As the race to advance increasingly powerful tools intensifies, there has been a corresponding increase in regulatory scrutiny surrounding the technology. The Washington Post reported on Thursday that the U.S. Federal Trade Commission (FTC) has initiated an investigation into OpenAI, the creator of ChatGPT, based on allegations that the company has violated consumer protection laws by jeopardizing personal reputations and data. This investigation highlights the growing concerns and legal implications surrounding the responsible use and protection of personal information in the development and deployment of AI technologies.

OpenAI till now has been scraping data from all over the internet, which could potentially land it in legal trouble. This partnership serves as a precautionary measure against potential access limitations to valuable material due to legal disputes that could jeopardize OpenAI’s ability to obtain necessary content. Earlier, ChatGPT came under heavy criticism for being trained on web content where the original source is neither informed nor attributed. That data, on the other hand, is openly used by ChatGPT to answer queries.

“We are pleased that OpenAI acknowledges the importance of fact-based, nonpartisan news content in this transformative technology and recognizes the value of our intellectual property,” said a written statement from Kristin Heitmann, AP senior vice president and chief revenue officer.

This statement highlights the mutual understanding between AP and OpenAI regarding the essential role of reliable news content and the respect for intellectual property as they navigate the advancements of generative AI. With the esteemed support of the Associated Press, OpenAI is strategically positioning itself to steer clear of legal complications, benefiting from the reputation and expertise of the renowned news organization.

The post The Real Reason Why OpenAI Partnered with AP appeared first on Analytics India Magazine.

Artificial Intelligence: Cheat Sheet

Artificial intelligence comes in many forms, from simple tools that respond to customers via a chat to complex machine learning algorithms that predict the trajectory of an entire organization. Despite years of overpromising, AI doesn’t comprise sentient machines that reason like humans. Rather, AI encompasses more narrowly focused pattern matching at scale to complement human reasoning.

In order to help business leaders understand what AI capabilities are, how to use artificial intelligence and where to begin an AI journey, it’s essential to first dispel the myths surrounding this huge leap in AI technology.

What is artificial intelligence?

AI is largely a pattern-recognition tool that can run at a scale that’s dramatically beyond any human, yet it never quite replaces humans. Even at its best, AI delivers acceptable, though not perfect, results, giving people the ability to step in, observe the data and reason from there.

Note that while we use AI throughout this cheat sheet, most enterprises actually engage with a subset of AI called machine learning or deep learning. We’ll use AI here as a shorthand that includes machine learning and deep learning.

The truth is that current AI technology is limited, but it’s still incredibly powerful. However complicated its processes may seem in practice, at the core of AI-driven applications is the simple ability to identify patterns and make inferences based on those patterns.

AI isn’t truly intelligent, and it’s often as biased as the data we choose to feed into our ML models. That doesn’t mean AI isn’t useful for businesses and consumers trying to solve real-world problems; it means that we’re nowhere close to machines that can actually make independent decisions or arrive at conclusions without being given the proper data first. It’s also true that AI can tend to confirm our biases, rather than eliminate them.

How does artificial intelligence work?

AI is a complex system designed to model human behavior and intelligence. It combines large amounts of data with intelligent algorithms to analyze, understand, and make decisions or predictions about future states. To make accurate predictions, AI systems require large amounts of data to learn from; this data is gathered from various sources, then processed, analyzed and organized in a format suitable for the AI algorithms.

AI algorithms are the core of AI systems and are designed to analyze and interpret data, identify patterns, and make predictions or decisions based on the input. By continuously collecting new data and retraining the models, AI systems can adapt to changing conditions and improve their performance.
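
In miniature, that gather-train-predict-retrain loop can be sketched as follows. This is a deliberately trivial "model" (a running mean, purely illustrative) rather than a real ML algorithm, but the cycle of continuously collecting data and updating the model is the same one described above:

```python
import random

random.seed(1)

# A trivial "model": predicts the mean of everything it has seen so far.
class RunningMeanModel:
    def __init__(self):
        self.mean, self.n = 0.0, 0

    def predict(self):
        return self.mean

    def train(self, observation):       # retrain incrementally on new data
        self.n += 1
        self.mean += (observation - self.mean) / self.n

# The gather -> predict -> retrain loop: new data keeps arriving,
# the model keeps updating, and its predictions improve over time.
model = RunningMeanModel()
for _ in range(1000):
    observation = random.gauss(10.0, 2.0)   # data from some hypothetical source
    _ = model.predict()                     # act on the current model
    model.train(observation)                # then learn from the new data

print(abs(model.predict() - 10.0) < 0.5)   # estimate has converged toward 10
```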

The core process of how AI works involves the following subdomains:

  • Machine learning: A branch of AI that focuses on the development of algorithms and statistical models that allow computer systems to learn and improve from data without being explicitly programmed.
  • Deep learning: A subfield of machine learning that mimics the workings of the human brain’s neural networks, using multiple layers of artificial neural networks to learn and understand complex patterns and features in data.
  • Neural networks: A computational model, inspired by the structure and function of the human brain, that can process and analyze large amounts of data to recognize patterns, make predictions or classify information.
  • Natural language processing: A branch of AI that focuses on the interaction between computers and human language, enabling machines to understand, interpret and generate human language.
  • Computer vision: A branch of AI that enables machines to interpret and understand visual information from images or videos.
  • Cognitive computing: A model that aims to create AI systems that can simulate human-like intelligence and interact with humans in a more natural and intuitive way.
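
Underlying several of these subdomains is the same basic mechanism: adjust numeric parameters until the system's outputs match the data. A minimal sketch (illustrative only) is a single artificial neuron, a perceptron, learning the logical AND function:

```python
def step(x):                 # activation: fire (1) or don't (0)
    return 1 if x > 0 else 0

# Training data: the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1 = w2 = bias = 0.0         # the neuron's learnable parameters
lr = 0.1                     # learning rate
for _ in range(20):          # a few passes over the data
    for (x1, x2), target in data:
        out = step(w1 * x1 + w2 * x2 + bias)
        err = target - out   # perceptron update rule: nudge weights by error
        w1 += lr * err * x1
        w2 += lr * err * x2
        bias += lr * err

# After training, the neuron reproduces AND.
print([step(w1 * a + w2 * b + bias) for (a, b), _ in data])  # [0, 0, 0, 1]
```

Deep learning stacks many such units into layers, which is what lets it capture far more complex patterns than this single neuron can.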

What can artificial intelligence do?

Artificial intelligence is essentially pattern matching at scale. With its pattern recognition capabilities, modern AI can perform image recognition, understand the natural language and writing patterns of humans, make connections between different types of data, identify abnormalities in patterns, strategize, predict and more.

While humans cannot comb through data as easily or at the scale that machines can to uncover patterns, machines struggle when presented with an outlier that might be easy for a human to spot but contradicts the training data. Therefore, the best AI applications are highly focused and combine human reasoning with the brute power of ML.

Since the onset of the COVID-19 pandemic in 2020, AI and ML have seen massive market growth. The global pandemic also shifted AI priorities and applications: Instead of solely focusing on financial analysis and consumer insight, post-pandemic AI projects have trended toward customer experience and cost optimization.

AI bots can perform many basic customer service tasks, freeing employees up to only address cases that need human intervention. AI like this has been particularly widespread since the start of the pandemic, when workers forced out of call centers put stress on customer service.

What are the business applications of AI?

In the business world, there are plenty of AI applications, but perhaps none is gaining traction as much as business and predictive analytics and its end goal: prescriptive analytics.

Business analytics is a complicated set of processes that aim to model the present state of a business, predict where it will go if kept on its current trajectory and model potential futures with a given set of changes. Predicting the future with an established model of the past can be easy enough, but prescriptive analysis, which aims to find the best possible outcome by tweaking an organization’s current course, can be downright impossible without AI help.
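
The predictive and prescriptive steps can be sketched with toy numbers (all data and the response model below are hypothetical, purely to show the two steps): first extrapolate the current trajectory with a least-squares fit, then search over candidate changes for the best outcome.

```python
# Predictive step: fit a linear trend to quarterly revenue and extrapolate.
history = [100, 104, 107, 112, 115, 121]          # hypothetical revenue figures
n = len(history)
xs = list(range(n))
x_mean, y_mean = sum(xs) / n, sum(history) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history)) \
        / sum((x - x_mean) ** 2 for x in xs)
intercept = y_mean - slope * x_mean
forecast = intercept + slope * n                  # next quarter, current course

# Prescriptive step: search candidate changes for the best possible outcome.
# Hypothetical response model: extra marketing spend lifts revenue with
# diminishing returns (8 * sqrt(spend)) but costs its face value.
def outcome(spend):
    return forecast + 8 * (spend ** 0.5) - spend

best_spend = max(range(0, 51), key=outcome)
print(round(forecast, 1), best_spend)             # ~124.1 forecast, spend 16
```

Real prescriptive analytics searches far larger spaces of possible interventions, which is why it is downright impossible without AI help.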

Analytics may be the rising star of business AI, but it’s hardly the only application of artificial intelligence in the commercial and industrial worlds. Other AI use cases for businesses include the following:

  • Recruiting and employment: AI can streamline recruiting by filtering through larger numbers of candidates more quickly than a human and by noticing qualified people who may be overlooked.
  • Fraud detection: AI is great at picking up on subtle differences and irregular behavior, such as subtle indicators of financial fraud that humans may miss.
  • Cybersecurity: AI is great at detecting indicators of hacking and other cybersecurity issues.
  • Data management: Using AI, you can categorize raw data and find relationships between items that were previously unknown.
  • Customer relations: Modern AI-powered chatbots are incredibly good at carrying on conversations thanks to natural language processing, making them a great first line of customer service.
  • Healthcare: Not only are some AI applications able to detect cancer and other health concerns before doctors, they can also provide feedback on patient care based on long-term records and trends.
  • Predicting market trends: Much like prescriptive analysis in the business analytics world, AI systems can be trained to predict trends in larger markets, which can lead to businesses getting a jump on emerging trends.
  • Reducing energy use: AI can streamline energy use in buildings and even across cities as well as make better predictions for construction planning, oil and gas drilling, and other energy-centric projects.
  • Marketing: AI systems can be trained to increase the value of marketing both toward individuals and larger markets, helping organizations save money and get better marketing results.

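The fraud-detection bullet above, for instance, often starts from nothing fancier than flagging statistical outliers. A minimal sketch (hypothetical transaction amounts, a simple z-score rule standing in for a real fraud model):

```python
from statistics import mean, stdev

# Hypothetical transaction amounts; one is wildly out of pattern.
amounts = [42.0, 55.5, 38.2, 61.0, 47.9, 52.3, 940.0, 44.8, 58.1, 50.6]

mu, sigma = mean(amounts), stdev(amounts)
# Flag anything more than 2 standard deviations from the mean.
flagged = [a for a in amounts if abs(a - mu) / sigma > 2]
print(flagged)   # [940.0]
```

Production systems replace the threshold with learned models over many features, but the principle of picking up on irregular behavior is the same.
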
What are the different types of AI?

Narrow AI

Also known as weak AI, narrow AI helps you perform specific functions. It’s focused on a single domain and operates within predefined limits. Narrow AI systems cannot do anything beyond what they are programmed to do; they have a very limited or narrow range of competencies. Examples include voice assistants like Siri or Alexa. These lack general intelligence and cannot perform tasks outside their designated domain.

General AI

General AI, also known as strong AI or artificial general intelligence, refers to AI systems that possess human-level intelligence and can understand, learn and perform any intellectual task that a human being can do. They can adapt to various scenarios and solve problems creatively. While general AI remains a long-term objective, current advancements primarily focus on developing narrow AI systems that excel in specific areas, such as image recognition or natural language processing.

Superintelligent AI

This type of AI surpasses human intelligence in nearly all aspects. It not only outperforms humans in cognitive tasks but also possesses the ability to improve itself, leading to an exponential increase in intelligence. While superintelligent AI remains largely theoretical at present, it’s a topic of interest and concern within the field of AI.

Reactive machines

Reactive AI systems automatically respond to a limited set or combination of inputs and operate based on the current input without any memory or past experiences. In fact, reactive AI systems don’t have the ability to form memories or learn from previous interactions. They simply react to the current situation or stimulus. Examples include chess-playing AI systems or recommendation algorithms.
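
In its simplest form, a reactive system is just a fixed mapping from current input to action; a sketch (hypothetical robot-vacuum rules, illustrative only):

```python
# A reactive "agent": a fixed input -> action mapping with no memory at all.
# The same input always produces the same response, regardless of history.
RULES = {
    "obstacle_ahead": "turn_left",
    "clear_path": "move_forward",
    "low_battery": "return_to_dock",
}

def react(sensor_input):
    return RULES.get(sensor_input, "idle")

print(react("obstacle_ahead"))   # turn_left
print(react("obstacle_ahead"))   # turn_left -- no learning, no adaptation
```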

Limited memory

AI systems with limited memory can store and retrieve information from previous experiences to make better decisions. They have the ability to learn from past data and use it to improve their future actions. Self-driving cars that use historical data to make driving decisions are an example of limited memory AI.
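
The contrast with a purely reactive system can be sketched in a few lines: here a hypothetical driving assistant keeps a short, bounded history of observations and decides based on the trend, something no memoryless rule could do.

```python
from collections import deque

# A limited-memory agent: keeps only the last few observations and uses
# that short history, not just the current input, to decide.
class BrakingAssistant:
    def __init__(self, window=3):
        self.history = deque(maxlen=window)   # bounded memory of past speeds

    def observe(self, lead_car_speed):
        self.history.append(lead_car_speed)

    def decide(self):
        if len(self.history) < 2:
            return "maintain"
        # The decision depends on the trend across recent observations.
        return "brake" if self.history[-1] < self.history[0] else "maintain"

agent = BrakingAssistant()
for speed in [60, 55, 48]:        # the car ahead is slowing down
    agent.observe(speed)
print(agent.decide())             # brake
```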

Theory of mind

This type of AI is still largely theoretical. Theory of mind AI will have the ability to understand and model the mental states, beliefs, and intentions of other agents. They’ll be able to attribute thoughts, emotions, and intentions to other entities and predict their behavior based on this understanding.

Self-aware

While still theoretical and not fully realized, this type of AI would possess human-like awareness and understanding of its own existence. Self-aware AI systems would have a sense of their own existence, consciousness and internal state. They would possess self-reflective abilities and be aware of their own thoughts, actions and impact on their environment. True self-aware AI remains largely speculative and is currently beyond the capabilities of existing technology.

Generative AI

Generative AI systems are capable of creating content, such as images, videos, music or text, that is nearly indistinguishable from human-generated content. They can autonomously generate new outputs based on their learned patterns and styles.

Generative adversarial networks are an example of generative AI, where one network generates content, and another network evaluates and provides feedback to improve the quality of the generated output.
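
That adversarial dynamic can be sketched without any neural networks at all. The toy below is illustrative only (real GANs train two networks with gradients): a one-parameter "generator" hill-climbs to fool a "discriminator" that scores how close a value looks to the real data.

```python
import random

random.seed(0)

REAL_MEAN = 5.0
def real_sample():                      # "real" data: numbers around 5
    return random.gauss(REAL_MEAN, 0.5)

# Discriminator: scores how "real" a value looks, based on a running
# estimate of the real data's mean (learned from real samples).
class Discriminator:
    def __init__(self):
        self.mean_est, self.n = 0.0, 0
    def train_on_real(self, x):
        self.n += 1
        self.mean_est += (x - self.mean_est) / self.n
    def score(self, x):                 # closer to the real mean -> higher
        return 1.0 / (1.0 + abs(x - self.mean_est))

# Generator: a single parameter, nudged toward whatever fools the discriminator.
class Generator:
    def __init__(self):
        self.g = 0.0
    def train(self, disc, step=0.1):    # hill-climb on the discriminator score
        up, down = disc.score(self.g + step), disc.score(self.g - step)
        self.g += step if up >= down else -step

disc, gen = Discriminator(), Generator()
for _ in range(500):
    disc.train_on_real(real_sample())   # discriminator learns the real data
    gen.train(disc)                     # generator learns to imitate it

print(abs(gen.g - REAL_MEAN) < 0.5)     # generator output drifts toward 5
```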

Other popular examples of generative AI include:

  • DeepArt.io: This tool uses neural networks to transform photos into artistic styles from famous artists.
  • Runway: This platform offers a range of generative AI tools for creating images and videos.
  • DeepDream: DeepDream is a tool developed by Google that uses generative AI to modify images. It enhances patterns and structures in an image to create dream-like visuals.
  • OpenAI’s ChatGPT: ChatGPT is built on the Generative Pre-trained Transformer (GPT), a language model developed by OpenAI to generate human-like text based on a given prompt. The latest model, GPT-4, was trained on Microsoft Azure AI supercomputers and is available through ChatGPT Plus.

What AI platforms are available?

When adopting an AI strategy, it’s important to know what software is available for business-focused AI. There are a wide variety of platforms available from the usual cloud computing suspects like Google, AWS, Microsoft and IBM, and choosing the right one can mean the difference between success and failure.

AWS Machine Learning

AWS Machine Learning offers a wide variety of services that run in the AWS cloud. AI services, prebuilt frameworks, analytics tools and more are all available, with many designed to take the legwork out of getting started and others like SageMaker for Business Analysts designed to enable corporations to get AI insight without writing code. AWS offers prebuilt AI algorithms, one-click ML training and training tools for developers getting started in or expanding their knowledge of AI development.

Google Cloud

Google Cloud offers similar AI solutions to AWS, as well as having several prebuilt total AI solutions that organizations can ideally plug into their organizations with minimal effort. Google also distinguishes itself by innovating some of the industry standards for AI like TensorFlow, an open-source ML library.

Microsoft AI

Microsoft’s AI platform comes with pre-generated services, ready-to-deploy cloud computing infrastructure and a variety of additional AI tools that can be plugged into existing models. Its AI Lab also offers a wide range of AI apps that developers can tinker with and learn from what others have done. Microsoft also offers an AI school with educational tracks specifically for business applications.

Watson

Watson is IBM’s version of cloud-hosted ML and business AI, but it goes a bit further with more AI options. IBM offers on-site servers custom built for AI tasks for businesses that don’t want to rely on cloud hosting, and it also has IBM AI OpenScale, an AI platform that can be integrated into other cloud hosting services, which could help to avoid vendor lock-in. In 2021, IBM Watson suffered a media backlash after years of overpromising on what its AI could deliver in healthcare, but many enterprises still turn to it for narrower tasks.

What AI skills will businesses need to invest in?

Perhaps the most critical skill needed to use AI is knowing when to skip AI altogether. The reality of AI is that many problems could be solved by applying simple regression analysis or if/then statements. Most AI, in other words, isn’t AI at all: It’s only basic math and common sense.
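
As a sketch of that point, both kinds of "AI" task fit in a few lines of ordinary Python (all customer and sales data here is hypothetical):

```python
from statistics import mean

# An if/then business rule: which customers get a retention offer?
customers = [
    {"name": "A", "logins_last_month": 1, "support_tickets": 4},
    {"name": "B", "logins_last_month": 22, "support_tickets": 0},
    {"name": "C", "logins_last_month": 3, "support_tickets": 2},
]
at_risk = [c["name"] for c in customers
           if c["logins_last_month"] < 5 and c["support_tickets"] > 1]
print(at_risk)                      # ['A', 'C'] -- no model required

# Simple regression: the least-squares slope, computed from its definition.
ads = [10, 20, 30, 40]              # hypothetical ad spend
sales = [120, 190, 260, 330]        # observed sales
slope = (mean(a * s for a, s in zip(ads, sales)) - mean(ads) * mean(sales)) \
        / (mean(a * a for a in ads) - mean(ads) ** 2)
print(slope)                        # 7.0 extra sales per unit of ad spend
```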

For more complicated, AI-oriented tasks, the associated data science breaks down into two categories: that which is intended for human consumption and that which is intended for machine consumption.

In the latter case, AI involves complex digital models that apply ML models and AI algorithms to large amounts of data. These systems then act autonomously to generate a particular ad or customer experience or make real-time stock trades. Hence, machine-oriented AI professions require “exceptionally strong mathematical, statistical and computational fluency to build models that can quickly make good predictions,” as former Google and Foursquare data scientist Michael Li has noted.

By contrast, the skills needed for more human-oriented data science and AI skew toward storytelling. Given that no data is unbiased, the role of human intelligence is to help the data tell clear stories. Such AI storytellers use data visualization to facilitate exploration and insights into that data.

For many in AI, the most sophisticated math they’ll do is power analyses and significance tests. They might write SQL queries to get data, do basic math on that data, graph results and then explain the results. Not gee whiz data science, but it’s incredibly helpful for breaking down complex data into actionable insights, to use the data science lingo.
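
A significance test of that kind needs nothing more than means and standard deviations. A sketch with hypothetical A/B conversion data, using Welch's two-sample t-statistic computed from its definition:

```python
from math import sqrt
from statistics import mean, stdev

# Did the new landing page (B) really convert better than the old one (A)?
a = [12.1, 11.8, 12.6, 12.0, 11.5, 12.3, 11.9, 12.2]  # conversion %, variant A
b = [13.0, 12.7, 13.4, 12.9, 13.1, 12.6, 13.3, 12.8]  # conversion %, variant B

# Welch standard error of the difference in means.
se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
t = (mean(b) - mean(a)) / se
print(t > 2)   # True: a t-statistic this large is unlikely to be noise
```

Graphing the two groups and explaining what a t-statistic near 6 means to stakeholders is exactly the storytelling half of the job.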

With all that in mind, it’s still true that skills needed for an AI project differ based on business needs and the platform being used, though most of the biggest platforms support most, if not all, of the most commonly used AI programming languages and skills needed.

Many business AI platforms offer training courses in the specifics of running their architecture and the programming languages needed to develop more AI tools. Businesses that are serious about AI should plan to either hire new employees or give existing ones the time and resources necessary to train in the skills needed to make AI projects succeed.

How can businesses start using AI?

Getting started with business AI isn’t as easy as simply spending money on an AI platform provider and spinning up some prebuilt models and algorithms. There’s a lot that goes into successfully adding AI to an organization.

At the heart of it all is good project planning. Adding artificial intelligence to a business, no matter how it will be used, is like any business transformation initiative. Here’s an outline of just one way to approach getting started with business AI.

Determine your AI objective

To begin, figure out how AI can be used in your organization and to what end. By focusing on a narrower implementation with a specific goal, you can better allocate resources.

Identify what needs to happen to get there

Once you know where you want to be, you can figure out where you are and how to make the journey. This could include starting to sort existing data, gathering new data, hiring talent and other preproject steps.

Build a team

With an end goal in sight and a plan to get there, it’s time to assemble the best team to make it happen. This can include current employees, but don’t be afraid to go outside the organization to find the most qualified people. Be sure to allow existing staff to train so they have the opportunity to contribute to the project.

Choose an AI platform

Some AI platforms may be better suited to particular projects, but by and large they all offer similar products in order to compete with each other. Let your team give recommendations on which AI platform to choose — they’re the experts who will be in the trenches.

Begin implementation

With a goal, team and platform, you're ready to start working in earnest. This won't be quick: models need to be trained, testing on subsets of data has to be performed and lots of tweaks will need to be made before a business AI is ready to hit the real world. In fact, expect to spend the vast majority of your time not crafting sexy algorithms but preparing data.
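To make that data-preparation point concrete, here is a minimal sketch, with made-up records and field names, of two chores that routinely dominate such projects: dropping duplicate rows and imputing missing values with a median.

```python
import statistics

# Toy records standing in for raw training data; None marks a missing value.
raw = [
    {"age": 34, "income": 62000},
    {"age": None, "income": 48000},
    {"age": 29, "income": None},
    {"age": 34, "income": 62000},   # exact duplicate of the first record
    {"age": 51, "income": 83000},
]

def prepare(records):
    # 1. Drop exact duplicates while preserving order.
    seen, deduped = set(), []
    for r in records:
        key = tuple(sorted(r.items()))
        if key not in seen:
            seen.add(key)
            deduped.append(dict(r))
    # 2. Impute each missing field with the median of the observed values.
    for field in ("age", "income"):
        observed = [r[field] for r in deduped if r[field] is not None]
        med = statistics.median(observed)
        for r in deduped:
            if r[field] is None:
                r[field] = med
    return deduped

prepared = prepare(raw)
print(prepared)
```

Real pipelines add many more steps (type coercion, outlier handling, feature scaling), but even this toy version shows why cleanup, not modeling, consumes the calendar.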


Could movie studios use AI to replicate an actor’s image and use it forever?


AI has made its way into nearly all of our favorite art forms, including music, visual art and now possibly even film. According to SAG-AFTRA, AI makes it possible for background performers' images and likenesses to be used forever.

The Screen Actors Guild — American Federation of Television and Radio Artists (SAG-AFTRA) has been making headlines because of its national board's unanimous decision to go on strike after failing to agree on a new contract with the Alliance of Motion Picture and Television Producers (AMPTP).


In a press conference on Friday announcing the strike, SAG-AFTRA chief negotiator Duncan Crabtree-Ireland described an AMPTP proposal under which background performers would be scanned by AI, paid for a single day's work, and have their images and likenesses owned by the studios and used in perpetuity.

"They proposed that our background performers should be able to be scanned, get paid for one day's pay, and their companies should own that scan, their image, their likeness, and for people to use it for the rest of eternity in any project they want with no consent and no compensation," said Crabtree-Ireland.

The alleged proposal has major ethical implications, as workers would lose out on pay for future uses of their likenesses, and it raises questions about the future and authenticity of acting.

In a statement to ZDNET, AMPTP denied the claims, stating that the organization would pay actors for using their digital replica and would ask for actors' consent.


"The claim made by SAG-AFTRA leadership that the digital replicas of background actors may be used in perpetuity with no consent or compensation is false. In fact, the current AMPTP proposal only permits a company to use the digital replica of a background actor in the motion picture for which the background actor is employed. Any other use requires the background actor's consent and bargaining for the use, subject to a minimum payment," an AMPTP spokesperson told ZDNET.

While the contradictory statements leave the true nature of the proposal unclear, there's no denying that generative AI will continue to permeate the ways in which our favorite arts are produced.
