Google, SAP Unveil Data Cloud, the Next Big Thing in Business Intelligence?

The enterprise application software vendor SAP recently announced an extensive partnership with Google Cloud to introduce an end-to-end data cloud that brings data from across the enterprise landscape using the SAP® Datasphere solution. This offering, which complements the RISE with SAP solution, enables organisations to access business-critical data in real time.

The solution addresses a significant challenge faced by organisations that need to invest substantial resources in building complex data integrations, custom analytics engines, and generative AI and natural language processing (NLP) models to derive value from their data investments.

By combining SAP software data on supply chains, financial forecasting, human resources records, omnichannel retail, and more with non-SAP data on Google Cloud from virtually any other data source, organisations can significantly accelerate their digital transformation. This approach provides a fully defined data foundation that retains complete business context.

“SAP and Google Cloud share a commitment to open data and our extended partnership will help break down barriers between data stored in disparate systems, databases, and environments,” said Christian Klein, CEO and member of the Executive Board of SAP SE.

According to the companies, businesses will be able to utilise these analytics capabilities, as well as advanced AI tools and large language models, to find new insights from their data.

At SAP Now India conference held in Mumbai, Paul Marriott, President, SAP Asia Pacific Japan, told AIM, “Whether it’s ChatGPT, generative AI, or what we’re doing in the automotive sector with Catena-X to create a global exchange that drives a more efficient supply chain for the industry, or initiatives around energy transition to create a greener future, all this innovation is being delivered into the RISE and SAP cloud platforms.”

SAP and Google Cloud also plan to partner on joint go-to-market initiatives for enterprises’ largest data projects. The SAP Sapphire® conference, which will take place May 16-17 in Orlando, Florida, will host demos of joint AI and data solutions, including how enterprises can apply generative AI to common workflows and applications, such as using a chatbot to search, create, and edit purchase requests.

Alongside Google Cloud, SAP also announced a collaboration with IBM to embed IBM’s Watson technology into applications such as SAP Start, a digital assistant designed to work with SAP’s cloud solutions, including those integrated with SAP S/4HANA Cloud.

The post Google, SAP Unveil Data Cloud, the Next Big Thing in Business Intelligence? appeared first on Analytics India Magazine.

Did Google meet the ChatGPT challenge at I/O 2023? ZDNET editors debate

At Google's annual developer conference, Google I/O, the tech giant unveiled its latest software and product releases. However, for the first time in the conference's history, AI dominated Google's flagship event of the year.

To analyze the announcements, Contributing Editor Dan Patterson discussed them with ZDNET's Editor in Chief Jason Hiner and editors Kerry Wan, for his expertise in hardware, and Sabrina Ortiz, for her extensive AI coverage.

Google's latest announcements and focus on AI were a response to the huge success of AI chatbots like ChatGPT and Bing Chat in the last few months, and the fact that Google hasn't had much luck with Bard.

Also: Every major AI feature announced at Google I/O 2023

For that reason, in addition to the hardware announcements such as the Pixel Fold, Pixel 7a and the Pixel Tablet, Google made many AI related announcements including the release of PaLM 2 (Google's new large language model to rival GPT-4), an updated version of Bard, and a new AI-powered search engine.

Although the announcements seem innovative, were they enough to keep Google ahead of the competition? Besides evaluating the success of each major product and software release, the ZDNET editors chatted about whether Google's emphasis on AI is viable for its business model and what this shift could mean for consumers.

Also: Every hardware product Google just announced at I/O 2023 (and yes, there's a foldable)

As Hiner mentioned in the conversation, which can now be viewed on YouTube, Google is facing an innovator's dilemma. Its business model relies on generating revenue from its traditional search and ads within search results, and a shift to a chatbot could potentially cost the company a lot of revenue.

Will Google overcome its classic innovator's dilemma? Was the event a win or a flop for the company? These questions and more were tackled during the conversation.


Stability AI releases an open source text-to-animation tool


From anime to childhood classics, animations have brought stories to life by combining still images. Now, with just a text prompt, you can generate your own animations using AI.

On Thursday, Stability AI, the AI company that created Stable Diffusion, unveiled a text-to-animation tool that allows developers and artists to use Stable Diffusion models to generate animations.

Also: The best AI art generators: DALL-E 2 and other fun alternatives to try

The tool, known as Stable Animation SDK, can generate video from three types of input: text alone, text with an initial image, or text with an input video.

Some users have taken to Twitter to share their animations.

Unlike DALL-E or Bing Image Creator, you can't access this model simply by visiting a website. Rather, using this model requires more advanced technical skills to install and run the UI, including coding.

Also: How to use Craiyon AI (formerly known as DALL-E mini)

The cost of an operation is based on a credit system and varies depending on the video dimensions and 3D render mode. The cost ranges from 0.058 to 0.174 credits per operation.

For full details on how to use Stable Animation SDK and the cost involved, visit Stability AI's developer platform for Animation SDK.
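As a back-of-the-envelope illustration of the credit-based pricing described above, here is a tiny estimator. The per-operation figures come from the quoted range; treating each operation as a billable unit, and the function name itself, are assumptions:

```python
# Rough cost estimator for an animation job under the credit-based pricing
# quoted above. Assumes each operation is billed independently; check
# Stability AI's developer platform for the actual billing rules.

def estimate_credits(num_operations: int, credits_per_op: float) -> float:
    """Total credits consumed by a job of `num_operations` operations."""
    return num_operations * credits_per_op

# A hypothetical 72-operation job at the low and high ends of the quoted range:
low = estimate_credits(72, 0.058)   # about 4.2 credits
high = estimate_credits(72, 0.174)  # about 12.5 credits
```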


Meta’s sprucing up its ads with new generative AI tools


Whether you are browsing the web or scrolling through social media, product ads typically take away from the experience. Usually you are shown a series of different products overlaid on a white background that all look pretty much the same.

Lucky for you, Meta will be using AI to help advertisers make ads more interesting.

On Thursday, Meta introduced an AI Sandbox for advertisers, which will act as a testing playground for new generative AI-powered ad tools such as text variation, background generation, and image outcropping.

Also: Stability AI releases an open source text-to-animation tool

These tools will allow developers to get more creative with their ads without having to do much more work, which would ideally result in more visually appealing ads for users and more revenue for advertisers.

Text variation will allow advertisers to generate multiple versions of text so that different messages can reach different audiences.

In the demo, an advertiser includes a description of the product they are creating the ad for and Meta generates four different potential ad texts for the post.

Also: Google's 'translation glasses' were actually at I/O 2023, and right in front of our eyes

The other two tools focus on optimizing visual aspects of the post.

With background generation, advertisers can replace the standard white background with more imaginative ones. Image outcropping allows the same post to be adapted to fit different platforms, such as Stories or Reels.

These features are currently available to a small group of advertisers but will begin to roll out gradually to others in July.


Council Post: Mastering the AI Maze: Insider Strategies for Tackling Implementation Challenges

As the emergence of DALL-E and ChatGPT brings artificial intelligence to the limelight, it’s no surprise that companies are looking for creative ways to make use of this seemingly limitless power. AI isn’t simply a technology for specialists anymore – businesses can now fully leverage its advantages regardless of their technical skill level.

86% of business leaders see AI as an essential part of their daily operations, and they’re reaping the rewards in terms of increased efficiency and productivity. AI empowers decision makers with lightning-fast speed and accuracy, eliminating tedious manual data entry tasks along the way. With this level of intelligence on your side, unparalleled market opportunities will be within reach.

As McKinsey’s global survey indicates, 50% of businesses have already integrated AI in at least one domain, with the adoption rate predicted to double in the coming years. However, only 11% of organisations leveraging AI have experienced notable ROI. Why is this so?

As AI Advances, So Does the Role of Data Engineers and Scientists

Picture this: just a few years ago, data engineers and scientists were merely responsible for maintaining databases and extracting useful insights from gathered business intelligence. But as technology advanced and AI solutions became a rising priority, a storm began brewing, leaving them completely blindsided.

CTOs, data engineers, and scientists are facing the challenge of keeping up with the constantly changing technology landscape, which involves exploring and managing new algorithms, architectures, and solutions in both open-source and industry domains. Meanwhile, they are also grappling with the need to master best practices. The world of data has transformed into a fierce beast that is difficult to tame, and it is catching everyone off guard.

Big universities and training camps have yet to catch up to the tech, leaving data engineers and scientists with no choice but to piece together knowledge from scratch and learn through trial and error.

Data scientists need to access a specific table to build predictive AI models. But it doesn’t stop there. They will still face challenges such as figuring out the meaning of column names and dealing with missing values. These challenges can cause delays and confusion in the modelling process.

The role of a data scientist has turned into an adrenaline-pumping race against time and the need to always be one step ahead in a field that is expanding towards uncharted territories. Data science is morphing into a new hybrid role, merging with technological advancement to shape the industry.

In this article, I will explore three common challenges that enterprises face when implementing AI while providing practical solutions.

1- Challenge: Data Shortage

Machine-learning models often require vast, diverse datasets to function optimally, but obtaining such data remains a challenge. Insufficient data can lead to overfitting and biased models, eventually leading to poor performance.

Data shortages result from data privacy constraints, a lack of historical data for new trends, small sample sizes, and the costly, time-consuming nature of manual data labelling.

Acquiring permission to use sensitive data is challenging, and historical data often proves inadequate due to evolving trends and processes. However, adequate training data is essential for creating unbiased and accurate ML models capable of addressing a wide range of scenarios.

Solution: Synthetic Data Generation

Synthetic data often serves as a viable solution to this challenge. Generated artificially to resemble real datasets, synthetic data can reveal hidden patterns, interactions, and correlations between variables, offering a substantial foundation for ML models. By balancing datasets and improving performance, it can supplement marginal classes without jeopardising privacy.

Advancements in synthetic data have significantly increased its value for machine learning models. Techniques like generative adversarial networks (GANs) and Wasserstein GANs (WGANs) foster the creation of more realistic data while maintaining compliance and data balance.

We used synthetic data to develop an AI tool that could monitor the brand uniformity of posters outside automobile showrooms in India. By using CycleGAN, we generated data for both brand-compliant and non-compliant cases, allowing us to successfully train and deploy the AI model.
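Training a GAN is beyond the scope of a short snippet, but the core idea behind synthetic data (fit a generative model to real records, then sample new ones) can be sketched with a toy Gaussian generator. This is a deliberately simple stand-in for the GAN/WGAN techniques above; all names and numbers are illustrative:

```python
import numpy as np

# Toy synthetic-data generator: fit a multivariate Gaussian to real records,
# then sample new ones. A simple stand-in for the learned generators
# (GANs, WGANs) discussed above.

def fit_gaussian(real: np.ndarray):
    """Estimate the mean vector and covariance matrix of the real data."""
    return real.mean(axis=0), np.cov(real, rowvar=False)

def sample_synthetic(mean, cov, n: int, seed: int = 0) -> np.ndarray:
    """Draw n synthetic records mimicking the fitted distribution."""
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(mean, cov, size=n)

# Example: oversample a small minority class to balance a dataset.
real_minority = np.random.default_rng(1).normal([0.0, 5.0], [1.0, 2.0], size=(50, 2))
mean, cov = fit_gaussian(real_minority)
synthetic = sample_synthetic(mean, cov, n=200)  # 200 new records, 2 features each
```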

2- Challenge: Talent Shortage

75% of decision makers prefer to help upskill current staff and 64% favor recruiting experts to bridge the AI talent gap. However, budget limitations and retention challenges are significant barriers to these solutions.

Solution: Low Code and No-Code Platforms

Enterprises are increasingly embracing low-code/no-code platforms to democratise app development and ease workloads. Faced with an estimated global deficit of 85.2 million software engineers by 2030, businesses have found that low-code/no-code tools can add millions of dollars in value without hiring additional IT developers.

For instance, our no-code platform streamlines the entire process of AI model training for pharmaceutical clients, including feature selection and deployment. What used to take a team of data scientists 2 months now just takes 1 week.

Gartner has projected that the low-code technology market is set to reach $44.5 billion by 2026. This is due to a number of factors, such as an increasing demand for more rapid application delivery, persistent talent shortages, and the proliferation of hybrid workforces. Low-code platforms have become an essential element of successful hyperautomation, with 50% of all new clients expected to come from business buyers outside the IT organisation by 2025.

However, to achieve their full market potential, low-code/no-code platforms must continually innovate in areas like real-time iteration, DevOps workflow integration, scalability, and API development.

3- Challenge: High expenditure

AI investments reached nearly $118 billion in 2022 and are projected to surpass $300 billion in the next few years. The cost of AI varies, with companies paying anywhere from $6,000 to over $300,000 for custom solutions. Factors that influence the cost include the type of AI software (chatbots, analysis systems, or virtual assistants), whether the enterprise requires a pre-built or custom solution, additional features, and how the platform will be managed (in-house or outsourced). Project duration and complexity also impact the overall cost.

Solution: Optimising Deployment Techniques

In addressing the high costs of AI implementation for enterprises, it becomes crucial to optimise data processing methods for the many forms of data involved, including video, text, and other pertinent information, depending on their size and nature. By adopting strategic and efficient deployment techniques, it is possible to achieve substantial cost reductions.

Using specialised hardware for video processing and taking advantage of edge computing solutions can present a more economical option compared to solely relying on cloud-based solutions. Taking it a step further, a hybrid deployment can be implemented based on the specific use case and scenario, with a portion of the solution deployed on a dedicated server and the remaining on the cloud. This multifaceted approach not only simplifies data management processes but also significantly reduces the overall expenses involved in integrating AI in today’s business environment.

Similarly, if you opt for an AI API, you’ll be billed for every digitisation operation. With open-source or commercial models, however, you can deploy the model without API-based payments, resulting in long-term cost savings. Developing and deploying customised models is ultimately a more cost-effective option for businesses than relying on AI APIs.

Final Thoughts

Organisations looking to take advantage of AI need an intentional and methodical approach that combines user interfaces, regulations, data infrastructure, data storage solutions and labelled datasets. Once these systems are in place, enterprise-focused machine learning algorithms can be trained with structured and unstructured datasets. Ultimately, successful adoption of AI results in operational efficiencies as well as critical insights that can foster growth and success for any organisation willing to embrace this technology.

This article is written by a member of the AIM Leaders Council. AIM Leaders Council is an invitation-only forum of senior executives in the Data Science and Analytics industry. To check if you are eligible for a membership, please fill out the form here.


Zerodha Creates ‘AI Policy’ to Address Job Loss Concerns

Zerodha has been unapologetic about its decision to not use any AI/ML technologies that are hyped among businesses. “Pretty much every instance of an ‘AI-first mindset’ that I have seen in the industry has been a strong case of misguided assumptions, outright delusions, and often intellectual dishonesty,” said Zerodha CTO Kailash Nadh.

But it looks like that is no longer the case. Zerodha CEO Nithin Kamath took to social media to express that the recent breakthroughs in AI have led them to think AI will take away jobs and can disrupt society. In this regard, the company has created an internal AI policy “to give clarity to [their] team, given the AI/job loss anxiety.”

When it comes to leveraging machine learning or artificial intelligence tools, Zerodha uses very minimal AI/ML. “We use little to no AI or ML apart from some basic image/document recognition ML models for document processing,” Nadh had told AIM.

On the other hand, Groww uses AI/ML in image processing, automating manual workflows, reducing errors, and increasing user ease throughout the journey. In addition, its tech stack includes React Native, Springboot, and SpringCloud, along with a microservices-based architecture that enables it to scale.

Despite releasing an AI policy, it is not clear if Zerodha itself will look to incorporate generative AI into its operations. Regardless, what Kamath feels will take place now is that many companies will likely let go of employees and blame it on AI. Companies will earn more and make their shareholders wealthier, exacerbating wealth inequality.

According to Kamath, “While the hope is for governments worldwide to put some guardrails in place, it may be unlikely given the rhetoric of deglobalization. No country would want to sit idle while another becomes more powerful on the back of AI.”

“It is unlikely that humans will be able to compete with intelligent machines in many walks of life,” Kamath concluded.

However, recently, in a written reply in the Lok Sabha, the Ministry of Electronics and IT (MeitY) said, “The government is not considering bringing a law or regulating the growth of artificial intelligence in the country.” So, clearly, India is far from even considering the impact the newer AI systems can have.


Data Scientist’s Guide to Cognitive Biases: A Free eBook


If you are interested in exploring the topic of cognitive biases, and gaining insight into how biases can affect your daily life as well as your practice as a data scientist, the free eBook, "Thinking Clearly: A Data Scientist’s Guide to Understanding Cognitive Biases" may be a good resource to check out.

This latest eBook offering from Data Science Horizons covers a number of well-known biases, providing an overview of how they can interfere with day-to-day logical reasoning, methods for managing these biases and mitigating their influence, and specific concerns related to data science practice.

From Data Science Horizons:

Cognitive biases are mental shortcuts that can influence our thoughts, decisions, and judgments, often leading to errors or misconceptions. As data scientists, we are not immune to these biases, which can potentially affect the quality of our work and the insights we derive from data. This comprehensive ebook explores various cognitive biases and provides practical strategies to help you recognize and overcome their impact on your work as a data scientist.

Here's just a sneak peek of the biases covered in the book:

  • Learn how confirmation bias can lead to cherry-picking data and reinforce pre-existing beliefs, and discover strategies to counteract this bias in your data analysis.
  • Explore the self-serving bias and its impact on self-evaluation and interpersonal interactions, and learn how to develop a more balanced understanding of your experiences and personal growth.
  • Understand the halo effect and its consequences on perception and judgment, and discover ways to reduce its impact on your assessments.
  • Delve into groupthink and its dangers in decision-making and problem-solving, and uncover strategies for preventing and combating its influence.
  • Uncover the negativity bias and its effects on emotional well-being and decision-making, and learn tips for managing and overcoming this bias in your work as a data scientist.

Cognitive biases are everywhere. You have them. I have them. We all have them. The key to minimizing their interference with our logical selves is to be able to recognize them and form strategies to keep their effects at bay. This free resource intends to help you do just that.

Download Thinking Clearly: A Data Scientist’s Guide to Understanding Cognitive Biases now, and your hindsight bias will eventually tell you that this was the absolute right move 🙂

Matthew Mayo (@mattmayo13) is a Data Scientist and the Editor-in-Chief of KDnuggets, the seminal online Data Science and Machine Learning resource. His interests lie in natural language processing, algorithm design and optimization, unsupervised learning, neural networks, and automated approaches to machine learning. Matthew holds a Master's degree in computer science and a graduate diploma in data mining. He can be reached at editor1 at kdnuggets[dot]com.


Human or bot? This Turing test game puts your AI-spotting skills to the test


Named after the celebrated computer scientist Alan Turing, the Turing test is a way to determine if AI can convincingly act like a real person. The test typically asks someone to engage in an anonymous conversation and then deduce if their chat partner is real or artificial. Now, a new game takes advantage of the recent buzz about AI to dare you with its own version of a Turing test.

Also: I asked ChatGPT, Bing, and Bard what worries them. Google's AI went Terminator on me

Made by the Tel Aviv-based AI systems developer AI21 Labs, the online app known simply as Human or Not describes itself as a social Turing game. The premise is simple but challenging. You participate in a chat with someone (or something). You can ask any questions or offer any responses you want. But the chat lasts for just two minutes. After the time is up, you have to speculate whether the entity on the other end was human or AI.

To take this concept for a spin yourself, browse to the Human or Not page. Click the Start Game button. You're then asked either to initiate the chat, or wait for your partner to start things off. As you ask questions and make statements to drive the conversation, pay attention to how the entity on the other end speaks and acts.

Also: This AI chatbot sums up PDFs and answers your questions about them

Once your time is up, the question pops up: Who did you talk to — Human or AI bot? Guess right, and you naturally win the game. Try it as many times as you like to see how many games in a row you can win.

Human or Not was designed by AI21 creative director Amos Meron, who told me how he cooked up the game.

Also: The best AI art generators

"I've been having a lot of conversations with friends and co-workers about AI over the past few months and realized we have a lot of assumptions about how people would interact with AI bots in the near future, as well as what is perceived as human behavior online," Meron said. "This is where I thought about creating a social experiment that would allow everyone to challenge these assumptions themselves."

To design Human or Not, Meron said that AI21 used a blend of state-of-the-art large language models (LLMs), including the company's own Jurassic-2. LLMs rely on deep learning to help chatbots and other AI tools deliver more human-like text. Beyond turning to these models, AI21 developed a framework that would generate a different bot character in each game.

Also: How I used ChatGPT and AI art tools to launch my Etsy business fast

The excitement and curiosity about AI seem new, triggered mostly by the popularity of ChatGPT. But the use of artificial intelligence has been around for a long time. And a key player in that area was Alan Turing.

Famed for his efforts at Bletchley Park during World War II, Turing led a team of cryptanalysts whose mission was to crack the code of encrypted German military and intelligence messages. With his interest and expertise in AI, Turing posed the question that we're still trying to answer today: Can machines think?

Also: How to get early access to Google's new AI search engine

Toward that end, Turing created a test that he called the Imitation Game. In this scenario, one person and one computer are each asked questions by a third person who can't see the other two. Based on the answers to the questions, the questioner ultimately has to determine which of the two is human and which is machine.

"Although it was conceived in the early fifties, the Turing test might be playing its most significant role now," Meron said. "It can help us understand where we stand, put our assumptions to the test, and become the catalyst for asking the right questions and starting a much-needed discussion."

Also: ChatGPT vs. Bing AI: Which AI chatbot is better for you?

What does AI21 hope to accomplish with its new Turing test game?

"On a fundamental level, we want people to have an interesting and thought-provoking experience with the game," Meron said. "As a result, and especially after we will release the results of the experiment, we hope that it will encourage a greater public conversation about how we should make more informed, fair and safe use of AI technology."

How to make ChatGPT provide sources and citations


One of the biggest complaints about ChatGPT is that it provides information, but the veracity and accuracy of that information is uncertain. That's because ChatGPT doesn't provide sources, footnotes, or links to where it derived information used in its answers.

Also: I asked ChatGPT, Bing, and Bard what worries them. Google's AI went Terminator on me

But that's not fully true.


If you know how to properly prompt ChatGPT, it will give you sources. Here's how.

FAQ

How do you put sources in APA format?

APA style is a citation style that's often required in academic programs. APA stands for American Psychological Association, and I've often thought that they invented these style rules in order to get more customers. But, seriously, the definitive starting point for APA style is the Purdue OWL. It provides a wide range of style guidelines.

Also: How to use ChatGPT to build your resume

Be careful: online style formatters may not do a complete job, and you may get your work returned by your professor. It pays to do the work yourself, and use care doing it.
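To see why doing the work yourself is feasible, the basic shape of an APA reference entry (author, year, title, source) can be sketched in a few lines. This is a deliberate simplification; real APA style has many more rules than this hypothetical helper captures:

```python
def apa_reference(authors: str, year: int, title: str, source: str) -> str:
    """Build a simplified APA-style reference line (real APA has far more rules)."""
    return f"{authors} ({year}). {title}. {source}."

ref = apa_reference(
    "Turing, A. M.", 1950,
    "Computing machinery and intelligence",
    "Mind, 59(236), 433-460",
)
# ref == "Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460."
```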

How can I make ChatGPT provide more reliable sources for its responses?

This is a good question. I have found that sometimes — sometimes — if you ask ChatGPT to give you more sources, or re-ask for sources, it will give you new listings. If you tell ChatGPT that the sources it provided were erroneous, it will sometimes give you better ones. It may also just apologize and give excuses. Another approach is to re-ask your original question with a different focus or direction, and then ask for sources for the new answer.

Also: 5 ways you can use ChatGPT to help you write essays

Once again, my best advice is to treat ChatGPT less as a tool that writes for you and more as a writing assistant. Asking for sources so you can just cut and paste a ChatGPT response is pretty much plagiarism. But using ChatGPT's response, and any sources you can tease out of it, as clues for further research and writing is a completely legitimate way to use this intriguing new tool.

Why are ChatGPT sources often so wrong?

For some links, it's just link rot. Since all sources are at least three years old, some links may have changed. Other sources are of indeterminate age. Since we don't have a full listing of all of ChatGPT's sources, it's impossible to tell how valid they were to begin with. But since ChatGPT was trained mostly without human supervision, we know that most of its sources weren't vetted, and so could be wrong, made-up, or completely non-existent.

Trust, but verify.


This Job Role will Still be Relevant When Data Scientists Are Gone

In an era when being a Data Scientist was the epitome of cool, college graduates flocked to the field, drawn by the allure of its potential. The hype was real, and the demand for these professionals was skyrocketing. However, as artificial intelligence (AI) and machine learning (ML) continue to advance at an astonishing pace, doubts have arisen about the very existence of Data Scientists. The rapid adoption of AI and ML has ignited a passionate debate about the future of this once-revered profession.

On one side of the argument are those who assert that OpenAI’s recent announcement of plugins for ChatGPT, along with the teased launch of a code interpreter and web browser plugin, will render traditional data science roles obsolete. They believe the plugins may replace many of a data scientist’s common workflows, including visualization, trend analysis, and even data transformation. Looking at the code interpreter alongside other advancements in the field, there is a notion that the algorithms and automation offered by AI will replace the need for human intervention in data analysis. Conversely, there are those who staunchly maintain that AI and ML will open up new and exciting opportunities in the field of data science.

One such role is that of the ML Engineer, into which experts believe the role of Data Scientists will gradually transform. According to a report by Indeed, the job title “machine learning engineer” is growing at a rate of 344%, while the job title “data scientist” is growing at a rate of 25%. Another report, by O’Reilly Media, found that 80% of data scientists plan to learn machine learning in the next year.

With the age of generative AI upon us, and with models that often involve large-scale data processing and sophisticated algorithmic architectures, ML Engineers will be in more demand than ever. These engineers possess the technical expertise to handle the computational challenges associated with training and deploying such models effectively. They have a deep understanding of distributed computing, parallel processing, and GPU acceleration, allowing them to optimize the performance of generative AI models and scale them to handle vast amounts of data.

Additionally, ML Engineers are skilled in the deployment and productionisation of ML models. Generative AI models are not just research prototypes; they are increasingly being integrated into real-world applications. ML Engineers have the know-how to deploy these models into production environments, ensuring their stability, scalability, and robustness. They are proficient in building end-to-end ML pipelines, handling data preprocessing, model deployment, and monitoring, which are crucial steps in incorporating generative AI into practical use cases.

Furthermore, generative AI models often require fine-tuning and customization to align with specific business objectives and user requirements. ML Engineers possess the expertise to fine-tune and adapt these models, leveraging techniques such as transfer learning and hyperparameter tuning. They can tailor generative AI models to address specific challenges and optimize their performance for the intended application domain. Moreover, ML Engineers have a comprehensive understanding of the ethical implications and considerations associated with generative AI. They are aware of the potential biases, fairness issues, and privacy concerns that can arise when deploying AI models that generate content. ML Engineers are equipped to address these challenges and implement safeguards to ensure the responsible and ethical use of generative AI.
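One of the techniques mentioned above, hyperparameter tuning, can be sketched in a few lines, with a toy objective standing in for a real model's validation loss. All names and values are illustrative:

```python
from itertools import product

# Minimal grid-search sketch. The "validation loss" is a toy quadratic so the
# example is self-contained; in practice it would train and evaluate a model.

def validation_loss(lr: float, reg: float) -> float:
    """Toy stand-in for training a model and measuring validation loss."""
    return (lr - 0.01) ** 2 + (reg - 0.001) ** 2

grid = {"lr": [0.1, 0.01, 0.001], "reg": [0.01, 0.001]}
best = min(product(grid["lr"], grid["reg"]), key=lambda p: validation_loss(*p))
# best is the (lr, reg) pair with the lowest loss
```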

Data Designers

The role of a “Data Designer” is also becoming increasingly crucial in today’s data-driven organizations, particularly in the era of Generative AI. These professionals hold the responsibility of defining the organization’s “unique norm of data,” encompassing aspects such as data literacy, models, topics, and ontology. Moreover, they play a pivotal role in establishing a unified and coherent data vision across the entire organization, ensuring that everyone adopts a “common language” when dealing with data.

The primary focus of a data designer is to establish a structured framework for data management, ensuring that data is organized, accessible, and usable across the organization. They design and implement data models, which serve as blueprints for how data is structured, stored, and interconnected. These models help in capturing and representing the relationships between different data elements, enabling efficient data analysis and interpretation.

In addition to data modelling, data designers also define data standards and guidelines for data governance. They establish data quality criteria and ensure that data is accurate, consistent, and reliable. Data designers collaborate with various stakeholders, including data engineers, data scientists, and business analysts, to understand their data requirements and translate them into practical data design solutions.

Another important aspect of a data designer’s role is to create a common language or ontology for data within the organization. They develop a standardized vocabulary and terminology that allows different teams and departments to communicate effectively when working with data. This helps in avoiding confusion, improving collaboration, and promoting data literacy across the organization.
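What such a “common language” might look like in practice can be sketched with a minimal, hypothetical shared data model: one canonical record definition plus an agreed vocabulary that every team validates against. All field names and labels here are invented for illustration:

```python
from dataclasses import dataclass
from datetime import date

# A canonical "Customer" definition that every team references, plus an agreed
# vocabulary for the `segment` field. All names are hypothetical.

AGREED_SEGMENTS = {"retail", "enterprise", "partner"}

@dataclass(frozen=True)
class Customer:
    customer_id: str   # the one canonical key used across all systems
    full_name: str
    signup_date: date
    segment: str       # must come from AGREED_SEGMENTS

def validate(c: Customer) -> bool:
    """Basic governance check: the record conforms to the shared vocabulary."""
    return bool(c.customer_id) and c.segment in AGREED_SEGMENTS

ok = validate(Customer("C-001", "Ada Lovelace", date(2023, 5, 1), "retail"))
```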
