How AI is Transforming the Retail Industry

Retail industry transformation
Source: Canva

Until recently, demand forecasting and inventory optimization were among the key AI applications in retail. However, recent developments in AI have produced an array of innovative offerings that are revolutionizing the retail business.

Think Outside the Box!

One unique use case involves optimizing the quality of product images to improve sales conversion rates. In this scenario, retailers leverage AI, specifically computer vision techniques, to enhance the visual appeal of product images, increasing the likelihood of the buyer clicking on the product.

Within the sales funnel, an increase in clicks generally flows down into higher conversion rates, i.e., sales. Going further, retailers can run hypothesis tests and experiment with different image qualities to determine which attributes, such as high-definition resolution and appropriate lighting, drive conversions.

Computer vision techniques for retailers
Source: Canva

Computer vision techniques enable retailers to automate the process of implementing fixes to improve image quality, ensuring they resonate with customers.
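To make this concrete, one common blur heuristic that such an automated check could use is the variance of the image's Laplacian: detailed photos have high variance, flat or out-of-focus ones have low variance. A minimal NumPy sketch (the synthetic images and the idea of thresholding the score are illustrative assumptions, not a production pipeline):

```python
import numpy as np

def sharpness_score(gray):
    """Variance of the Laplacian: low values suggest a blurry product photo."""
    lap = (
        -4 * gray[1:-1, 1:-1]
        + gray[:-2, 1:-1] + gray[2:, 1:-1]
        + gray[1:-1, :-2] + gray[1:-1, 2:]
    )
    return float(lap.var())

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))      # stand-in for a detailed product photo
blurry = np.full((64, 64), 0.5)   # stand-in for a flat, out-of-focus shot
```

A retailer could rank listings by this score and route the lowest-scoring images for automated enhancement or reshoots.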

Product Descriptions

Most e-commerce websites host products from different sellers, resulting in heterogeneous ways of listing product descriptions, which are often inconsistent and incomplete.

The lack of awareness and insights on what constitutes a compelling shopping experience leads to inconsistency in style, length, tone, and completeness, potentially impacting users’ purchasing decisions.

However, with generative AI – techniques that generate content such as text, images, and video from large training datasets – companies can generate effective product descriptions, leading to increased clickability and, in turn, higher conversion rates.

Such AI-generated content is not only comprehensive but also engaging and persuasive, capturing users’ attention. Going a step further, the algorithms can learn user preferences and serve the product descriptions that resonate most with each shopper. It is worth noting that retailers can keep enhancing such AI systems based on how users respond to the model’s output.

This iterative process enables companies to refine their product descriptions over time, optimizing them for maximum effectiveness in driving conversions.
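In practice, the generation step usually reduces to assembling a seller's raw product attributes into a consistent prompt for whichever LLM the retailer uses. A minimal sketch of that assembly (the product fields, wording, and 60-word limit are hypothetical; the actual model call is provider-specific and therefore omitted):

```python
def build_description_prompt(product, style="concise and persuasive"):
    """Turn a seller's raw product record into a structured LLM prompt."""
    attrs = ", ".join(f"{k}: {v}" for k, v in product["attributes"].items())
    return (
        f"Write a {style} product description for '{product['name']}'. "
        f"Key attributes: {attrs}. Keep it under 60 words."
    )

prompt = build_description_prompt({
    "name": "Trail Runner 2 shoes",
    "attributes": {"material": "breathable mesh", "weight": "240 g"},
})
```

Because every listing goes through the same template, style, length, and tone stay consistent across sellers, which is exactly the inconsistency problem described above.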

Seller Risk Management

Speaking of sellers, e-commerce platforms can also build an AI-powered seller risk-management system to monitor risks related to product quality, customer service, and adherence to ethical standards.

AI can learn from factors like past seller behavior, customer feedback, and transaction records, to detect irregularities or deviations from expected norms. Such deviations flag the sellers that might exhibit non-compliance with the platform's policies and the code of conduct.

Such a system can analyze factors such as timely shipping, accurate product descriptions, fair pricing, and responsiveness to customer inquiries to characterize seller behavior. Sellers who consistently receive negative reviews or complaints, engage in fraudulent activity, or violate the terms of service degrade the customer experience and tarnish the platform's reputation.

By leveraging AI, e-commerce platforms can proactively identify such sellers and suspend them. Beyond keeping the seller base reliable, such AI systems also foster trust and build confidence among customers.

While sellers’ behavior may change over time, the AI-powered risk management system can continuously learn and adapt based on the evolving patterns of non-compliance.
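One simple way to operationalise "deviations from expected norms" is to z-score each seller's behavioural features against the marketplace average and flag outliers. A toy sketch, with entirely made-up feature values and an arbitrary flagging threshold (a real system would use far richer features and a learned model):

```python
import numpy as np

# Hypothetical per-seller features: [complaint rate, refund rate, late-ship rate]
sellers = {
    "seller_a": [0.01, 0.02, 0.05],
    "seller_b": [0.02, 0.01, 0.04],
    "seller_c": [0.30, 0.25, 0.40],  # deviates sharply from the norm
}

X = np.array(list(sellers.values()))
z = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)  # z-score each feature
flagged = [s for s, row in zip(sellers, z) if np.abs(row).max() > 1.0]
```

Flagged sellers would then be routed to human reviewers rather than suspended automatically, keeping a check on false positives.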

Fraud Detection

Often, fraud detection is assumed to be the bank’s responsibility and that’s true to a certain extent. But think of a customer who became a victim of a fraudulent transaction on the retail platform and tries to connect with the retailer to reverse the sale.

Fraud detection
Source: Canva

Typically, the retailer is unable to help and is assumed to be at no fault. For the customer, however, the retailer is the first layer of trust, and therefore of defense. Imagine if the retailer had an AI-powered algorithm that could identify potential fraud from the buyer’s purchase history and introduce an additional identification step before completing the sale; the fraud would be stopped at that very first layer of defense.

We are living in a highly competitive world where it is crucial to have a differentiator. Managing and mitigating risks of fraud highlights the retailer's commitment to customer-centricity, leading to increased trust and brand loyalty from customers.
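A first-cut version of that additional identification step can be a plain heuristic over purchase history; a sketch (the threshold factor of 3x typical spend is an arbitrary assumption that a real fraud model would replace with learned scoring):

```python
def needs_extra_verification(history, amount, factor=3.0):
    """Trigger a step-up identity check when a purchase far exceeds
    the buyer's typical spend (toy heuristic, not a real fraud model)."""
    typical = sum(history) / len(history)
    return amount > factor * typical

# A 400-unit order from a buyer who usually spends ~27 units gets challenged
flagged = needs_extra_verification([20, 35, 25], 400)
```

Challenged orders proceed once the buyer passes the extra identification step, so legitimate but unusual purchases are delayed rather than blocked.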

Quality Control

Imagine AI being your quality control assistant, checking every product before it reaches the shelves, especially in the case of perishable products, where it is crucial to maintain freshness and ensure consumer safety.

Similarly, computer vision can analyze the quality of clothes by detecting imperfections in stitching, fabric consistency, and print alignment. By automating quality control procedures, retailers can maintain consistent product standards, delivering superior products to their customers.

Cognitive Overload

Different brands have different size guides, and it is often a pain for customers to remember their size for each brand. AI can take this cognitive load off the customer and make relevant recommendations, enhancing the shopping experience. For example, the algorithm can suggest a size based on purchase history, user characteristics, and possibly feedback on sizing preferences. There you go – full points on customer delight.
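A bare-bones version of such a size recommender can simply fall back on the shopper's own kept (non-returned) purchases per brand; a sketch with hypothetical record fields:

```python
from collections import Counter

def suggest_size(purchases, brand):
    """Most frequent size the shopper kept (didn't return) for a brand."""
    sizes = [p["size"] for p in purchases
             if p["brand"] == brand and not p.get("returned")]
    return Counter(sizes).most_common(1)[0][0] if sizes else None

history = [
    {"brand": "Acme", "size": "M"},
    {"brand": "Acme", "size": "M"},
    {"brand": "Acme", "size": "L", "returned": True},  # L didn't fit
    {"brand": "Other", "size": "S"},
]
size = suggest_size(history, "Acme")
```

Treating returns as negative feedback is the key design choice here: a returned "L" is evidence against that size, not for it.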

Summary

From optimizing product images and generating compelling product descriptions to managing seller risks and detecting fraud, AI has the potential to revolutionize every aspect of the retail industry. With the open sourcing of more AI products and pre-trained models, the era of free AI for every industry is here to stay.

Vidhi Chugh is an AI strategist and a digital transformation leader working at the intersection of product, sciences, and engineering to build scalable machine learning systems. She is an award-winning innovation leader, an author, and an international speaker. She is on a mission to democratize machine learning and break the jargon for everyone to be a part of this transformation.

More On This Topic

  • Transforming the Shop Floor: A No-BS Look at Data Science in Manufacturing
  • Transforming your business with SAS® Viya® on Microsoft Azure
  • Breaking the Data Barrier: How Zero-Shot, One-Shot, and Few-Shot…
  • The AIoT Revolution: How AI and IoT Are Transforming Our World
  • Transforming AI with LangChain: A Text Data Game Changer
  • KDnuggets News, July 27: The AIoT Revolution: How AI and IoT Are…

UiPath Stock Plunges Nearly 30% as CEO Rob Enslin Abruptly Resigns

UiPath shares plummeted nearly 30% in after-hours trading Wednesday after the robotic process automation software company announced the surprise resignation of CEO Robert Enslin and provided disappointing guidance, overshadowing better-than-expected first-quarter earnings results.

The New York-based company said Enslin will step down as CEO and board member effective June 1, 2024, after just two years in the role. The company said co-founder and former CEO Daniel Dines, who currently serves as Executive Chairman and Chief Innovation Officer, will retake the reins as CEO. Enslin will remain as an advisor during the transition.

For the first quarter ended April 30, UiPath reported adjusted earnings per share of $0.13, beating the consensus estimate of $0.12. Revenue grew 16% year-over-year to $335 million, also surpassing the $333.04 million analysts expected. Annualised renewal run-rate (ARR), a key metric, increased 21% to $1.508 billion.

However, UiPath’s outlook fell short of expectations. For Q2, the company guided revenue of $300-305 million, well below the $342.07 million consensus. It lowered its full-year fiscal 2025 revenue forecast to $1.405-1.41 billion, down from its prior outlook of $1.55-1.56 billion and missing the $1.56 billion analysts anticipated.

On the earnings call, Dines acknowledged challenges with sales execution and elongating sales cycles for larger deals. He said some investments have made UiPath less agile but expressed optimism in the company’s long-term prospects, especially around generative AI, which he sees as a “secular trend” benefiting the business. Under his renewed leadership, Dines plans to refocus on product innovation.

Analysts said the abrupt CEO change indicates Enslin’s failure to drive faster growth, and weak guidance suggests deeper issues at UiPath. Once valued at nearly $36 billion after its April 2021 IPO, the company has seen its stock struggle, down over 50% from the IPO price even before Wednesday’s after-hours plunge.

“Given the increase in spend during the quarter, I guess there must have been some late surprises in a quarter of larger deals not coming through,” said Holger Mueller of Constellation Research. “The odd thing is that the board didn’t want to have Dines leading the company two years ago, but it has now put him back in place. Potentially, the board might be looking for a new CEO, or else it’s going to focus its efforts on building a next-generation product for the generative AI era under Dines’ leadership.”

Founded in 2005, UiPath makes software that helps companies automate repetitive business tasks. Its platform is powered by AI models that learn how employees perform common tasks in business applications. While its tools have been hailed as a game-changer, the company faces increasing competition from tech giants and generative AI upstarts that could steal business.

“In the long run, AI will be a tailwind for companies that can apply machine intelligence to end-to-end automation,” said Dave Vellante, Chief Analyst at TheCUBE Research. “In the near term customers may feel it’s easier to do full automation with gen AI, but I think they’ll find they need deeper relationships and tech to actually realise significant value. In the meantime, firms like UiPath have to educate customers on how gen AI combined with end-to-end automation can be achieved.”

The post UiPath Stock Plunges Nearly 30% as CEO Rob Enslin Abruptly Resigns appeared first on AIM.

5 Best End-to-End Open Source MLOps Tools

5 Best End-to-End Open Source MLOps Tools Cover Image
Image by Author

Due to the popularity of the 7 End-to-End MLOps Platforms You Must Try in 2024 blog, I am writing another list of end-to-end MLOps tools, this time ones that are open source.

Open-source tools provide privacy and more control over your data and models. On the other hand, you have to deploy and manage these tools yourself, and hire people to maintain them. You will also be responsible for security and any service outages.

In short, both paid MLOps platforms and open-source tools have advantages and disadvantages; you just have to pick what works for you.

In this blog, we will learn about 5 end-to-end open-source MLOps tools for training, tracking, deploying, and monitoring models in production.

1. Kubeflow

kubeflow/kubeflow makes machine learning operations simple, portable, and scalable on Kubernetes. It is a cloud-native framework that lets you create machine learning pipelines and train and deploy models in production.

Kubeflow Dashboard UI
Image from Kubeflow

Kubeflow is compatible with cloud services (AWS, GCP, Azure) and self-hosted deployments. It lets machine learning engineers integrate all kinds of AI frameworks for training, fine-tuning, scheduling, and deploying models. Moreover, it provides a centralized dashboard for monitoring and managing pipelines, editing code in Jupyter Notebooks, experiment tracking, a model registry, and artifact storage.

2. MLflow

mlflow/mlflow is best known for experiment tracking and logging. Over time, however, it has become an end-to-end MLOps tool for all kinds of machine learning models, including large language models (LLMs).

MLflow Workflow Diagram
Image from MLflow

MLflow has six core components:

  1. Tracking: version and store parameters, code, metrics, and output files. It also comes with interactive metric and parametric visualizations.
  2. Projects: packaging data science source code for reusability and reproducibility.
  3. Models: store machine learning models and metadata in a standard format that can be used later by the downstream tools. It also provides model serving and deployment options.
  4. Model Registry: a centralized model store for managing the life cycle of MLflow Models. It provides versioning, model lineage, model aliasing, model tagging, and annotations.
  5. Recipes (Pipelines): machine learning pipelines that let you quickly train high-quality models and deploy them to production.
  6. LLMs: provide support for LLMs evaluation, prompt engineering, tracking, and deployment.

You can manage the entire machine learning ecosystem using CLI, Python, R, Java, and REST API.
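The log-params/log-metrics/run pattern at the heart of the Tracking component is worth seeing in miniature. The sketch below is a dependency-free toy that mimics the shape of that workflow (it is not MLflow code; class and field names are made up):

```python
import json
import time
import uuid
from pathlib import Path

class MiniTracker:
    """Toy experiment tracker mirroring the start-run / log-param /
    log-metric pattern that MLflow's Tracking component exposes."""

    def __init__(self, root="runs"):
        self.root = Path(root)
        self.root.mkdir(exist_ok=True)

    def start_run(self):
        self.run_id = uuid.uuid4().hex[:8]
        self.record = {"run_id": self.run_id, "start": time.time(),
                       "params": {}, "metrics": {}}
        return self

    def log_param(self, key, value):
        self.record["params"][key] = value

    def log_metric(self, key, value):
        self.record["metrics"][key] = value

    def end_run(self):
        path = self.root / f"{self.run_id}.json"  # one JSON file per run
        path.write_text(json.dumps(self.record))
        return path

tracker = MiniTracker().start_run()
tracker.log_param("lr", 0.01)
tracker.log_metric("rmse", 0.42)
run_file = tracker.end_run()
```

In real MLflow the same idea is backed by a tracking server, UI, and artifact store rather than flat JSON files.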

3. Metaflow

Netflix/metaflow allows data scientists and machine learning engineers to build and manage machine learning and AI projects quickly.

Metaflow was initially developed at Netflix to increase the productivity of data scientists. It has now been made open source, so everyone can benefit from it.

Metaflow Python Code
Image from Metaflow Docs

Metaflow provides a unified API for data management, versioning, orchestration, model training and deployment, and compute. It is compatible with major cloud providers and machine learning frameworks.

4. Seldon Core V2

SeldonIO/seldon-core is another popular end-to-end MLOps tool that lets you package, train, deploy, and monitor thousands of machine learning models in production.

Seldon Core Workflow Diagram
Image from seldon-core

Key features of Seldon Core:

  1. Deploy models locally with Docker or to a Kubernetes cluster.
  2. Tracking model and system metrics.
  3. Deploy drift and outlier detectors alongside models.
  4. Supports most machine learning frameworks such as TensorFlow, PyTorch, Scikit-Learn, ONNX.
  5. Data-centric MLOps approach.
  6. CLI is used to manage workflows, inferencing, and debugging.
  7. Save costs by deploying multiple models transparently.

Seldon Core converts your machine learning models into REST/gRPC microservices. It can easily scale and manage thousands of machine learning models and provides additional capabilities for metrics tracking, request logging, explainers, outlier detectors, A/B tests, canary deployments, and more.

5. MLRun

The mlrun/mlrun framework allows for easy building and management of machine learning applications in production. It streamlines the production data ingestion, machine learning pipelines, and online applications, significantly reducing engineering efforts, time to production, and computation resources.

MLRun workflow Diagram
Image from MLRun

The core components of MLRun:

  1. Project Management: a centralized hub that manages various project assets such as data, functions, jobs, workflows, secrets, and more.
  2. Data and Artifacts: connect various data sources, manage metadata, catalog, and version the artifacts.
  3. Feature Store: store, prepare, catalog, and serve model features for training and deployment.
  4. Batch Runs and Workflows: runs one or more functions and collects, tracks, and compares all their results and artifacts.
  5. Real-Time Serving Pipeline: fast deployment of scalable data and machine learning pipelines.
  6. Real-time monitoring: monitors data, models, resources, and production components.

Conclusion

Instead of using a separate tool for each step of the MLOps pipeline, you can use a single one for everything. With just one end-to-end MLOps tool, you can train, track, store, version, deploy, and monitor machine learning models. All you have to do is deploy it locally using Docker or on the cloud.

Open-source tools give you more control and privacy, but they come with the challenges of managing and updating them and dealing with security issues and downtime. If you are starting out as an MLOps engineer, I suggest you focus on open-source tools first and then move to managed services like Databricks, AWS, and Iguazio.

I hope you like my content on MLOps. If you want to read more of them, please mention it in a comment or reach out to me on LinkedIn.

Abid Ali Awan (@1abidaliawan) is a certified data scientist professional who loves building machine learning models. Currently, he is focusing on content creation and writing technical blogs on machine learning and data science technologies. Abid holds a Master's degree in technology management and a bachelor's degree in telecommunication engineering. His vision is to build an AI product using a graph neural network for students struggling with mental illness.

More On This Topic

  • 7 End-to-End MLOps Platforms You Must Try in 2024
  • A Beginner's Guide to End to End Machine Learning
  • A Full End-to-End Deployment of a Machine Learning Algorithm into a…
  • Closed Source VS Open Source Image Annotation
  • How to Build Data Frameworks with Open Source Tools to Enhance…
  • Generate Synthetic Time-series Data with Open-source Tools

How is GenAI Proving to be a Useful Tool for Data Engineers?

Generative AI has brought about revolutionary changes across numerous domains. How has it influenced data engineering?

Data Engineering remains pivotal in enabling streamlined data processing, storage, and analysis, which is crucial for informed decision-making.

Prasenjit Ghosh, vice president at Genpact, believes data engineering plays a pivotal role in the transformation of businesses through the effective utilisation of data assets. “Even though they don’t get the recognition, they play a foundational role,” he said while speaking at the Data Engineering Summit (DES) 2024, hosted by AIM Media House in Bengaluru.

In his presentation titled ‘GenAI for Data Engineering’, Ghosh outlined the three key stages of the data engineering lifecycle: data input, data processing, and data output.

Various challenges arise across these stages, including data input mismatch, data relationship management, data coding, and data catalogue enrichment.

Ghosh believes large language models (LLMs) are proving to be important tools for data engineers in overcoming these challenges.

By leveraging these models, Genpact’s data engineers are automating and enhancing various tasks, leading to increased efficiency, productivity, and innovation.

Unstructured to Structured Data

Ghosh believes one of the primary use cases of Generative AI for data engineers has been the ability to turn unstructured data into structured data.

LLMs can autonomously analyse unstructured data sources like text documents, images, and audio files. They perform tasks such as text extraction, entity recognition, sentiment analysis, and categorization, effectively organizing the data into structured formats.

This transformation enables data engineers to convert raw unstructured data into structured datasets, facilitating easier analysis and insights derivation. Ultimately, GenAI streamlines the data engineering process, empowering organisations to extract hidden value within their unstructured data assets.
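In practice, "structured" often just means a fixed schema extracted from free text. A deliberately tiny sketch of that shape, where regexes and keyword checks stand in for the LLM call (every field name and pattern here is made up for illustration):

```python
import re

def structure_feedback(text):
    """Pull a fixed schema out of free-text customer feedback.
    Toy extractor: a real pipeline would prompt an LLM instead of regexes."""
    order = re.search(r"order\s+#?(\d+)", text, re.I)
    rating = re.search(r"(\d)\s*/\s*5", text)
    return {
        "order_id": order.group(1) if order else None,
        "rating": int(rating.group(1)) if rating else None,
        "negative": any(w in text.lower() for w in ("late", "broken", "refund")),
    }

record = structure_feedback("Order #4821 arrived late and damaged. 2/5.")
```

Whether the extractor is a regex or an LLM, the output lands in the same tabular schema, which is what lets downstream analytics treat the unstructured source like any other dataset.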

LLMs can also help data engineers with data design, data coding (copilots, code modernisers), data testing, and data analysis.

Increasing Generative AI impact

Ghosh also highlighted that generative AI is going to be transformative as the technology matures. “Now, a very interesting phenomenon is going to happen in the next one to two years. Hardly anybody will go for creating another LLM, right, big players will come, but all the innovations will happen on the edge.”

While speaking, Ghosh said in a light-hearted manner that the developments happening in the generative AI space are so rapid that the predictions everyone is making now might not be relevant at all in a year’s time.

Moreover, Ghosh reminisced about a discussion he had with Vishal Sikka, the former CEO of Infosys. At the time, Ghosh, who was working at Infosys, was introduced to OpenAI by Sikka, a close associate of Sam Altman, the CEO of OpenAI.

Interestingly, during Sikka’s tenure at Infosys, the IT giant, along with Elon Musk, AWS, and YC Research, among others, joined hands to make a USD 1 billion donation to OpenAI.

At the end of the session, Ghosh showed the audience a video developed by prompting an AI text-to-video model.

The post How is GenAI Proving to be a Useful Tool for Data Engineers? appeared first on AIM.

CtrlS Launches 13-MW, AI-ready Data Center in Hyderabad

CtrlS Datacenters has launched its new AI-ready datacenter – Hyderabad DC3, located in the financial district of Gachibowli, Hyderabad. CtrlS Datacenters has invested over Rs 500 crore in this facility, which has approximately 1,300 racks capacity. The facility has been inaugurated by Mr. Sundararaman Ramamurthy, Managing Director & Chief Executive Officer, Bombay Stock Exchange (BSE).

Hyderabad DC3 facility has a built-up area of 1.34 lakh sq. ft. and 13 MW IT load capacity. It is a ground-plus-five-storeyed facility aiming for LEED Platinum certification by leveraging renewable energy and advanced water recycling, among other sustainable initiatives. The datacenter is AI-ready with advanced cooling technologies and designed to comply with seismic zone 2 standards. The fully flood-proof facility also ensures 9-layer physical security.

With access to cloud connect services from Google, Oracle, Azure and AWS via CtrlS Cloud Connect, CtrlS Datacenters is the first and only Google Cloud Partner Interconnect provider in Hyderabad. Hyderabad DC3 provides access to major Internet Exchange (IX) providers via IX Connect portfolio. The facility is interconnected to all major data centres within Hyderabad with CtrlS Metro Connect and major CtrlS data centres across Mumbai, Bangalore & Delhi via CtrlS NLD Connect.

Sundararaman Ramamurthy, MD & CEO, BSE who inaugurated the Hyderabad DC3 facility said, “I am delighted to inaugurate the Hyderabad DC3 facility. Datacenters have become a part and parcel of most of the FinTech companies today. They are important to help the industry grow further. Particularly from a stock exchange perspective, data centres have become very important. I am happy to see that CtrlS Datacenters is growing in a big way. I wish that our relationship continues to grow further.”

Sridhar Pinnapureddy, Founder and CEO of CtrlS Datacenters, stated, “Hyderabad is one of the major data centre hubs in the country, with a growing demand from enterprises and cloud service providers. Owing to its seismic zone-2 status, the city is one of the most preferred locations for hosting disaster recovery services. Our new facility will further strengthen the city’s role in shaping India’s digital future.”

He further stated, “BSE has been at the forefront of driving digital innovations and ecosystem in India, and it’s a matter of pride that Mr. Ramamurthy, MD and CEO of Asia’s oldest stock exchange BSE, inaugurated our latest facility. This inspiration will drive our commitment to enable our customers in their digital transformation journey.”

Suresh Kumar Rathod, President of Colocation Business, CtrlS Datacenters, added, “CtrlS Datacenters has been pioneering the overall datacenter landscape in India. Having established large campuses in Mumbai and Chennai, Hyderabad too holds a lot of prominence in the company’s overall growth plans. We have made significant strides in the past and are well on track to create over 1GW capacity in the near future.”

With this launch, CtrlS Datacenters now operates 3 facilities in Hyderabad – one in HITEC City and two in the Gachibowli financial district. The company has a nationwide footprint of 250 MW of datacenter capacity in strategic tier-1 markets such as Mumbai, Chennai, Bangalore, Noida, Hyderabad and Kolkata. CtrlS Datacenters also operates Edge data centre facilities in tier-2 markets such as Patna and Lucknow.

The post CtrlS Launches 13-MW, AI-ready Data Center in Hyderabad appeared first on AIM.

It’s High Time Data Engineers Expanded Left and Right: Pavan Nanjundaiah

It’s High Time Data Engineers Expanded Left and Right: Pavan Nanjundaiah

A decade ago, the position of a data engineer was essentially nonexistent. Slowly and gradually, the role changed as the field matured. But with the advent of new generative AI technologies, many people fear their jobs are in jeopardy.

Already in the early months of 2024, GenAI is beginning to upend the way data teams think about ingesting, transforming, and surfacing data to consumers. Tasks that were once fundamental to data engineering are now being accomplished by AI—usually faster and sometimes with a higher degree of accuracy.

As familiar workflows evolve, it naturally begs the question: will GenAI replace data engineers?

Thus, if data engineers want to secure their position, they have to “Go All In”, said Pavan Nanjundaiah of Tredence.

Speaking at AIM’ Media House’s Data Engineering Summit 2024, Pavan Nanjundaiah, VP – Studio Innovations at Tredence Inc., said today’s data engineer needs a beginner’s mindset to adapt to the growing change.

“2023 was the year of copilots, from GitHub to Fabric and Google. 2024 is the year of agents. Which means AI is after your job. The question you should be asking is, will I become obsolete? We saw what happened to Devin. The world will change whether you like it or not,” Nanjundaiah said.

“Today’s data engineers should expand left and right. Focus on your core foundational skills because they are not going to change. You don’t need to handcraft your code anymore. We all know that augmented coding is the future,” he added.

Why do we need a change?

Ten years ago, businesses relied on on-premise infrastructure for data storage. At this time, data engineers were more concerned with fine-tuning their machine configuration than with generating business value.

Then cloud companies appeared, promising services they would handle on your behalf so that you could concentrate on your business’s needs. This changed the game.

So, Nanjundaiah asks an important question: If half your job is going to be done through a copilot, what will you do?

“Understand what is the business problem you’re trying to solve. My recommendation to data engineers is to shift left, which is the person who gave you the requirement and probably understands the business a little better. By doing this, you’re now zooming out of a core data engineer’s profile and understanding the business and the longer impact you can create,” Nanjundaiah added.

“That’s not it. Start looking at the right side to see if a data scientist is building a model based on the data you have staged, and figure out how you can start expanding,” he added.

Division Into Schools of Thought

Nanjundaiah divided today’s GenAI into two schools of thought: one that sees everything as “rainbow and sunshine”, while the other is “gloomy and fears a thunderstorm”.

“We have seen rapid changes in AI in such a short time that it forces everyone to change. I believe today’s Chief Experience Officer (CXO) should ‘Bet and Check’, while data engineering practitioners should ‘bet big but in pockets’.”

Although humans losing jobs to robots is a lovely story, it is far from the truth for data engineers. AIM research tells us that data engineers continue to be in high demand. Senior developers working in generative AI draw over INR 1 crore per annum, while an entrant’s salary could easily be around INR 18 lakh per annum, much higher than India’s median income.

Data engineers are needed to create and manage AI applications. Data engineers are increasingly responsible for how generative AI is integrated into the business, just as they develop and maintain the infrastructure supporting the data stack. AI infrastructure is created and maintained using the advanced data engineering abilities we discussed, including abstract thought, business comprehension, and contextual creation.

Furthermore, even the most advanced AI can occasionally produce incorrect data. Things malfunction. And in the near term, we don’t see AI engaging in much self-reflection, unlike a human, who can recognise and fix mistakes.

So, when things go wrong, someone needs to be there babysitting the AI to catch it—a “human-in-the-loop,” if you will.

The post It’s High Time Data Engineers Expanded Left and Right: Pavan Nanjundaiah appeared first on AIM.

Data Engineers: Expand Left and Right: Pavan Najundalah

A decade ago, the position of a data engineer was essentially nonexistent. Slowly and gradually, the role changed as the field matured. But with the advent of new generative AI technologies, many people fear their jobs are in jeopardy.

Already in the early months of 2024, GenAI is beginning to upend the way data teams think about ingesting, transforming, and surfacing data to consumers. Tasks that were once fundamental to data engineering are now being accomplished by AI—usually faster and sometimes with a higher degree of accuracy.

As familiar workflows evolve, it naturally begs the question: will GenAI replace data engineers?

Thus, if a data engineer has to secure his position, he has to “Go All In”, said Pavan Najundalah, Head of Trendence Studio.

Speaking at AIM’ Media House’s Data Engineering Summit 2024, Pavan said today’s data engineer needs a beginner’s mindset to adapt to the growing change.

“2023 was the year of copilots from Github to Fabric and Google. 2024 is the year of agents. Which means AI is after your job. The question you should be asking is, will I become obsolete? We saw what happened to DEVIN. The world will change whether you like it or not,” Najundalah said.

“Today’s data engineers should expand left and right. Focus on your core foundational skills because they are not going to change. You don’t need to handcraft your code anymore. We all know that augmented coding is the future,” he added.

Why do we need a change?

Ten years ago, businesses relied on on-premise infrastructure for data storage. At this time, data engineers were more concerned with fine-tuning their machine configuration than with generating business value.

The cloud companies appeared, promising to offer services they would handle on your behalf. You can then concentrate on your business’s needs. This has changed the game.

So, Najundalah asks an important question: If half your job is going to be done through a copilot, what will you do?

“Understand what is the business problem you’re trying to solve. My recommendation to data engineers is to shift left, which is the person who gave you the requirement and probably understands the business a little better. By doing this, you’re now zooming out of a core data engineer’s profile and understanding the business and the longer impact you can create,” Najundalah added.

“That’s not it. Start looking at the right side to see if a data scientist is building a model based on the data you have staged, and figure out how you can start expanding,” he added.

Division Into Schools Of Thoughts

Najundalah divided today’s GenAI into two schools of thought: one that sees everything as “rainbow and sunshine”,” while the other is “gloomy and fears a thunderstorm”.”

“We have seen rapid changes in AI in such a short time that forces everyone to change. I believe today’s Chief Experience Officer (CXO) should “Bet and Check” while Data Engineering Practitioners should “bet big but in pockets”.

Although humans losing jobs to robots is a lovely story, it is far from the truth for data engineers. AIM research tells us that data engineers continue to be in high demand. Senior developers working in generative AI draw over INR 1 crore per annum, while an entrant’s salary could easily be around INR 18 lakh per annum, much higher than India’s median income.

Data engineers are needed to create and manage AI applications. Just as they develop and maintain the infrastructure supporting the data stack, they are increasingly responsible for how generative AI is integrated into the business. That AI infrastructure is built and maintained using the advanced data engineering abilities discussed above, including abstract thought, business comprehension, and contextual creation.

Furthermore, even the most advanced AI can occasionally produce incorrect data. Things malfunction. In the near term, we don’t see an AI engaging in much self-reflection, unlike a human, who can recognise and fix mistakes.

So, when things go wrong, someone needs to be there babysitting the AI to catch it—a “human-in-the-loop,” if you will.

The post Data Engineers: Expand Left and Right: Pavan Najundalah appeared first on AIM.

Voice-Based GenAI Could be the Future of Customer Care Services

While the general consensus has been that data is the new oil, the current question, for businesses at least, is how to harness this new oil to improve their customer-facing services.

During AIM’s Data Engineering Summit (DES) 2024, NoBroker Data Sciences and Engineering director Zaher Abdul Azeez highlighted why GenAI is the perfect solution to this, especially when it comes to gaining meaningful data out of unstructured human conversations.

“These (customer conversations) are very subjective, very conversational where standard variables are not readily available for analytics. This data is a goldmine, especially for businesses that rely on customer experiences,” he said, during his talk on ‘Navigating Data Chaos: Using Gen AI to Extract Structured Insights from Unstructured Customer Data’.

As mentioned by Azeez, customer conversations are incredibly valuable to C2C businesses like NoBroker in gaining valuable feedback. While the familiar note at the start of most customer care calls – “this call is being recorded for quality and training purposes” – has been a common occurrence for nearly a decade and a half, GenAI, Azeez says, can help parse these conversations for more than just quality and training.

“Generative AI lets us do a bunch of things. You have a broad spectrum of NLP capabilities, where LLMs let you do a variety of language tasks. So unlike conventional NLP applications where you have to build specific models for specific tasks, LLMs let you do a broad spectrum of stuff. And most importantly, it understands unstructured human conversations,” said Azeez.

This ability to understand unstructured human conversations is particularly important. While recordings are currently being reviewed manually, specifically to understand the customer experience, as well as to understand whether ratings given are accurate, the entire process is incredibly labour intensive.
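The extraction workflow Azeez describes can be sketched in code. The sketch below is illustrative only: the field names, prompt wording, and `parse_extraction` helper are assumptions for the example, not NoBroker’s actual schema or pipeline. The idea is to ask an LLM for a strict JSON record per transcript and validate it before it reaches analytics:

```python
import json

# Fields to extract from each call transcript (illustrative, not NoBroker's schema).
SCHEMA = ["customer_sentiment", "issue_category", "resolution_given", "rating_consistent"]

def build_extraction_prompt(transcript: str) -> str:
    """Wrap a raw call transcript in an instruction asking for JSON only."""
    return (
        "Extract the following fields from this customer-care call transcript "
        f"and reply with a single JSON object with exactly these keys: {SCHEMA}.\n\n"
        f"Transcript:\n{transcript}"
    )

def parse_extraction(raw_reply: str) -> dict:
    """Validate the model's reply so only well-formed records reach analytics."""
    record = json.loads(raw_reply)
    missing = [key for key in SCHEMA if key not in record]
    if missing:
        raise ValueError(f"LLM reply missing fields: {missing}")
    return record

# Example: validating a (mocked) model reply.
reply = (
    '{"customer_sentiment": "negative", "issue_category": "refund", '
    '"resolution_given": true, "rating_consistent": false}'
)
print(parse_extraction(reply)["issue_category"])  # prints: refund
```

Because a single general-purpose LLM handles every field, this replaces the task-specific NLP models (sentiment classifier, topic classifier, and so on) that would otherwise each need to be built and maintained separately.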

“I focus on customer conversation because GenAI is very good with customer conversations. GenAI gives a very natural interface on top of being able to understand human languages and conversations in your business are a lot,” he said.

What About For the Customer?

On the other hand, while GenAI can help parse unstructured data, Azeez also floated the idea of using GenAI to build more interactive and natural customer service chatbots, especially when it comes to understanding context, which was not previously possible.

In a demo, Azeez showcased a chatbot that NoBroker was working on, which allowed the customer to converse verbally with the bot while it parsed their needs, budget and contact details. In addition, the bot was also able to advertise certain NoBroker products based on contextual hints.

“Human-like context aware response generation is something which generative AI can do. Especially when you talk about bots. Bots have been conventionally built with flows, diagrams, duties, etc. With LLMs and GenAI, you can do varying contextual conversations with your customers,” Azeez said.

Voice-based GenAI customer service reps are increasingly becoming a reality, with several larger companies already working on voice capabilities for their LLMs.

Recently, OpenAI came out with GPT-4o, which showcased voice capabilities, and Google did the same this month with Project Astra.

With this seemingly being the path taken for businesses, AI-based customer reps could become a thing of the near future.

The post Voice-Based GenAI Could be the Future of Customer Care Services appeared first on AIM.

Tesla FSD Wouldn’t Exist Without Yann LeCun

The AI godfather, Yann LeCun, is usually right about most things AI and even AGI to an extent.

The exchange of ripostes between LeCun and Elon Musk, which started after the latter invited people to join xAI’s mission following its recent funding announcement, seems unlikely to end anytime soon.

“Join xAI if you can stand a boss who claims that what you are working on will be solved next year (no pressure),” responded LeCun, advising interested candidates against joining Musk’s company.

Further, he said that he likes Musk’s cars, rockets, solar panels, and satellite networks but dislikes his vengeful politics, conspiracy theories, and hype.

LeCun believes he is entitled to his views because he is a “scientist, not a business or product person”, unlike Musk, who built Tesla, a company that uses CNNs, aka ConvNets, a technique LeCun developed.

However, Musk replied that they “don’t use CNNs much these days, tbh”.

This left LeCun perplexed; he asked how Tesla does real-time image understanding in FSD without “ConvNets, TBH”.

Musk is yet to respond. It is highly unlikely that Tesla is using anything other than CNNs; if not CNNs, it is most likely using Google’s Vision Transformer.

Coincidentally, Meta recently released an in-depth introduction to Vision-Language Models, which promise transformative capabilities in image processing and navigation through advanced spatial and contextual understanding.

Joke’s on Musk

Musk shouldn’t have questioned LeCun’s contribution to AI by asking how much research he conducted “in the last five years”.

LeCun, being LeCun, candidly replied, saying: “Over 80 technical papers published since January 2022.”

“One of these papers introduced convolutional neural networks (ConvNets) in 1989. Every single driving assistance system today uses ConvNets. That includes MobilEye (since 2014), Nvidia, Tesla, and just about everyone else. Technological marvels don’t just pop out of the vacuum,” he said.

To this, Musk didn’t reply and went silent. “Condolences to the Tesla FSD team who have to ship a version without CNNs by next week,” joked a user on X.

CNNs also have some limitations. They can be sensitive to variations in the input data, such as changes in lighting, orientation, and scale. This can affect their performance in real-world scenarios where such variations are common. Moreover, while CNNs are good at capturing local spatial relationships, they may struggle with understanding global spatial relationships and context within an image.

On the other hand, Vision Transformers (ViTs) apply the transformer architecture, originally designed for natural language processing (NLP), to computer vision tasks. This approach diverges from traditional Convolutional Neural Networks (CNNs) by focusing on global relationships within an image rather than local features.

In ViTs, an image is represented as a sequence of patches: every patch is flattened into a single vector by concatenating the channels of all its pixels, and that vector is then linearly projected to the desired input dimension. The transformer then predicts class labels from this sequence, which allows the model to learn image structure without convolutional priors.
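The patch-flattening and projection step can be sketched in a few lines of NumPy. This is a minimal illustration of the ViT input pipeline only (a 224×224 RGB image with 16×16 patches and an assumed model dimension of 512), not any production implementation:

```python
import numpy as np

def patchify(image, patch_size):
    """Split an (H, W, C) image into flattened patch vectors.

    Each patch's pixels (across all channels) are concatenated into a
    single vector, as in the ViT input pipeline.
    """
    h, w, c = image.shape
    p = patch_size
    assert h % p == 0 and w % p == 0, "image dims must be divisible by patch size"
    # (H, W, C) -> (H/p, p, W/p, p, C) -> (H/p, W/p, p, p, C) -> (N, p*p*C)
    return (
        image.reshape(h // p, p, w // p, p, c)
        .transpose(0, 2, 1, 3, 4)
        .reshape(-1, p * p * c)
    )

rng = np.random.default_rng(0)
image = rng.standard_normal((224, 224, 3))
patches = patchify(image, 16)                 # (196, 768): 14x14 patches, each 16*16*3 values
W = rng.standard_normal((16 * 16 * 3, 512))   # learned linear projection to d_model = 512
tokens = patches @ W                          # (196, 512): token sequence fed to the transformer
print(patches.shape, tokens.shape)
```

In a real ViT, a learnable class token and position embeddings are added to this sequence before it enters the transformer encoder.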

Google even claimed that their ViT outperforms state-of-the-art CNN with four times fewer computational resources when trained on sufficient data.

LeCun is All You Need

Not everyone agrees that Vision Transformers are better than CNNs. “Entertainment aside, getting rid of CNNs for real-world AI deployment is almost impossible,” said Perplexity AI founder Aravind Srinivas.

“Even if you went for a ViT architecture, you must process the input using local patches with shared weights for efficiency and generalisation. This is even more crucial when processing multiple frames at the video level, such as in Tesla FSD,” he added.

Hugging Face’s CTO quickly joined the conversation, siding with LeCun: “I would pick Yann LeCun over Elon Musk every single day of the week. Despite getting much less money, recognition, and visibility than entrepreneurs, the scientists who publish their groundbreaking research openly are the cornerstone of technological progress and massively contribute to making the world a better place!”

All in all, AI advancements in companies such as OpenAI, xAI and others would not have been possible without research scientists.

“SpaceX would not exist without the thousands of scientific papers on rocket engine design, propellant chemistry, rocket control, material science, orbital mechanics, heat dissipation, trajectory planning, and the hundreds of scientists who got where they are by studying these papers,” claimed LeCun.

The post Tesla FSD Wouldn’t Exist Without Yann LeCun appeared first on AIM.

Agnikul Cosmos Launches India’s Second Private Rocket, Agnibaan SOrTeD

In a significant milestone for India’s private space industry, Chennai-based startup Agnikul Cosmos successfully launched its Agnibaan SubOrbital Technological Demonstrator (SOrTeD) rocket from the Satish Dhawan Space Centre in Sriharikota on Thursday.

The launch, which took place at 8:12 am, marks India’s first launch from a private launchpad and the first flight of the world’s first single-piece 3D-printed engine, designed and built indigenously.

The Indian Space Research Organisation (ISRO) congratulated Agnikul Cosmos on the successful launch, calling it “a major milestone, as the first-ever controlled flight of a semi-cryogenic liquid engine realized through additive manufacturing”.

The Indian National Space Promotion and Authorisation Centre (IN-SPACe) also praised the achievement, highlighting the use of 3D manufacturing in the rocket’s construction.

Agnibaan SOrTeD, which stands 18 meters tall with a diameter of 1.3 meters and a lift-off mass of 14,000 kg, is powered by the innovative Agnilet engine – the world’s first single-unit 3D-printed engine that can be produced within three days.

The rocket executed a series of precise manoeuvres during its two-minute test flight, reaching an altitude of eight kilometres before splashing down in the Bay of Bengal, approximately 30 kilometres from the launch pad.

Initially scheduled for April 7, the launch had to be postponed just 129 seconds before lift-off due to technical glitches. However, the successful launch on May 30 underscores private companies’ growing capabilities and contributions to India’s burgeoning space sector.

Agnikul Cosmos, founded in 2017 by Srinath Ravichandran, Moin SPM, and Satya Chakravarthy, became the first company in India to sign an agreement with ISRO under the IN-SPACe initiative in December 2020. This partnership has allowed the startup access to ISRO’s expertise and facilities to develop the Agnibaan rocket.

The success of the Agnibaan SOrTeD launch is expected to bolster global confidence in India’s private space industry and its growing capabilities, particularly in light of the recently introduced guidelines for the implementation of the Indian Space Policy 2023 by IN-SPACe and new FDI regulations.

“What Agnikul has achieved today is nothing short of a historical milestone since India launched its maiden rocket in 1963 from Thumba launch station,” said Lt. Gen. A.K. Bhatt (Retd.), Director General, Indian Space Association (ISpA).

He added that “Agnibaan SOrTeD has made many firsts, including India’s first launch from a private launchpad, the first semi-cryogenic engine-powered rocket launch, and the world’s first single-piece 3D-printed engine designed and built indigenously.”

The post Agnikul Cosmos Launches India’s Second Private Rocket, Agnibaan SOrTeD appeared first on AIM.