Stanford as a Service

What does it take to build a company? A good product, a good team, and good funding. For the first, you need to start with an idea, and in today’s world it is probably something related to AI or generative AI. For landing the latter two, if you look at the startups getting funded lately, there is a secret recipe: having a co-founder out of Stanford, MIT, or Harvard.

It may not be causation exactly, but there is some correlation between raising huge funds and being a top university graduate. Each year, Crunchbase tallies which US universities produced the most founders of recently funded startups. Through 2022-23, Stanford, MIT, UC Berkeley, Harvard, and Cornell had the highest numbers of funded founders. Through 2021-22, the list was the same, except with Columbia in place of Cornell. Sadly, no premier Indian institutions like the IITs or IISc make the list.

Who’s working on providing startups with a token Stanford grad cofounder?

— Bojan Tunguz (@tunguz) June 25, 2023

It seems that, more than finding the right product, it is important to have a founder from a top university. It is kind of obvious though, given that many of the top companies in history were created by a graduate or dropout of these universities: Mark Zuckerberg’s Meta, Larry Page’s Alphabet, Bill Hewlett’s HP, Elon Musk’s Tesla, and Bill Gates’ Microsoft.

Desperate times, desperate measures

When it comes to generative AI, VCs are ready to fund everyone and everything amid all the hype, especially when it comes from a top university graduate. Unfortunately, a lot of founders now have this figured out and are ready to exploit the situation.

In March, Roshan Patel, founder of Walnut, pulled a prank using this strategy. He created a fake LinkedIn profile under the name Chad Smith, with an AI-generated face. Then all he had to do was mention that he was a Stanford dropout working on a stealth AI startup and going through Y Combinator. Within 24 hours, he received a message from a VC saying the firm had heard about “him” from his buddies and wanted to learn more about the startup.

It is funny that VCs funding generative AI startups are also getting fooled by founders made by generative AI. On a related note, Stability AI founder Emad Mostaque claimed to have a master’s degree in computer science from Oxford, but verification revealed he holds only a bachelor’s degree.

Let’s provide startups with founders instead of funding

Big VCs have a chance to provide startups with founders from these top universities. On the flip side, graduates of universities such as Stanford or Harvard can join AI startups, handing them one of the biggest tokens for getting funded; in return, these graduates get equity in the startup. Sounds like Stanford-as-a-service.

Not just universities; VCs are also funding companies started by former employees of Meta FAIR, Google Brain, or Microsoft Research. The most recent example is Reka, an AI startup that came out of stealth mode and announced $58 million in funding from a group of venture firms. The founders, Yi Tay, Dani Yogatama, Qi Liu, and Cyprien de Masson d’Autume, have all worked on major projects at Google, DeepMind, Meta, and Microsoft.

Similarly, Mistral AI, a Paris-based startup, raised the highest-ever seed funding in the world. The $113 million round was directly attributable to the founders, as the startup was exactly four weeks old at the time of funding, with no product. One of the founders previously worked at DeepMind, and the other two are behind Meta’s LLaMA, one of its best-known open source projects. Clearly, the VCs see potential, and money, in these founders.

The case is the same with Inflection.AI’s Mustafa Suleyman, who previously worked at DeepMind. Anthropic’s Dario Amodei has a similar story as a former OpenAI employee. Though both companies are now more than a year old, it is clear that investors see value in the founders, sometimes more than in the actual product they are building.

The biggest beneficiary of all of this is NVIDIA. Coincidentally, its chief Jensen Huang also happens to be a Stanford graduate.

The post Stanford as a Service appeared first on Analytics India Magazine.

ChatGPT CLI: Transform Your Command-Line Interface Into ChatGPT

Image by storyset on Freepik

ChatGPT is a part of everyone's life right now. The GPT model has given users capabilities that did not exist years ago, such as easy knowledge searching, marketing planning, code completion, and many others. It’s a system that will only evolve further in the future.

One common way to use ChatGPT is through the web platform, where we can explore and store prompt results. But we can also use the OpenAI API, as many developers do, and in turn the API can be used to bring those results into our Command-Line Interface (CLI).
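
At its core, a CLI wrapper like this just posts the prompt, plus the conversation so far, to the chat completions endpoint. Here is a minimal sketch of the request assembly; the function name is illustrative and not taken from the chatgpt-cli codebase:

```python
import json

def build_chat_request(prompt, history=None, model="gpt-3.5-turbo"):
    """Assemble the JSON payload the OpenAI chat completions endpoint
    expects; `history` carries earlier turns so the conversation keeps
    its context across prompts."""
    messages = list(history or [])
    messages.append({"role": "user", "content": prompt})
    return {"model": model, "messages": messages}

payload = build_chat_request("Hello from my terminal!")
print(json.dumps(payload, indent=2))
```

Sending this payload (with your API key in the Authorization header) and printing the response text is essentially all a ChatGPT CLI has to do.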

How do we access ChatGPT from our CLI? Let’s find out.

ChatGPT CLI

ChatGPT CLI is a Python script for using ChatGPT from the command line. Under the hood, it calls the OpenAI API, so the experience is similar to using ChatGPT on the website. Let’s try it out ourselves.

First, we need an OpenAI API key, which you can get by registering on the OpenAI Developer Platform and visiting the View API keys page within your profile. After you create your API key, store it somewhere safe, as the key will not be shown again after being generated.

Next, clone the ChatGPT CLI repository onto your system using the following command in the CLI.

git clone https://github.com/marcolardera/chatgpt-cli.git

Once you have cloned the repository, change your directory to the chatgpt-cli folder.

cd chatgpt-cli

Inside the folder, install the requirements with this command.

pip install -r requirements.txt

Then, open the previously cloned folder in an IDE. In this example, I use Visual Studio Code. The folder’s contents should look like the image below.


Inside the folder, open the config.yaml file and replace the api-key parameter with your OpenAI API key.
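
For reference, the relevant part of config.yaml looks roughly like the sketch below; field names can differ between versions of the repository, so treat this as illustrative and edit whichever key field your copy uses:

```yaml
api-key: "sk-..."        # paste your OpenAI API key here
model: "gpt-3.5-turbo"   # the chat model the script will call
```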


You can also change the parameter you want to pass into the API. You can refer to my previous article to understand all the parameters available from the OpenAI API.

With all the settings in place, we can now use the CLI as ChatGPT. To do that, you only need to run the following command.

python chatgpt.py

Just type anything in the CLI, and you will get the result immediately. For example, I pass the prompt, “Give me a list of song recommendations from the 1990s.”


The result shows up in the CLI, similar to the image above. We can also continue the conversation, just as we do with ChatGPT on the web platform.


The number that shows up before each prompt is the number of tokens used so far, so we can keep an eye on our usage.

Additionally, you can use multi-line mode for long prompts by adding the -ml parameter when launching the script.


Finally, use the /q command to quit. When you finish, the ChatGPT CLI shows you the number of tokens you have used and the estimated expense of your session.
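
The expense estimate is simple arithmetic on the token count. A rough sketch, where the per-1K-token price is an assumption you should check against OpenAI's current pricing page:

```python
def estimate_cost(total_tokens, price_per_1k_tokens=0.002):
    # gpt-3.5-turbo was priced around $0.002 per 1K tokens at the time
    # of writing; treat this figure as an assumption, not a constant.
    return total_tokens / 1000 * price_per_1k_tokens

# A session that used 1500 tokens:
print(f"${estimate_cost(1500):.4f}")  # -> $0.0030
```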

Conclusion

ChatGPT is here to stay, and we should make the most of it. In this tutorial, we learned how to use ChatGPT CLI to run ChatGPT prompts from our Command-Line Interface.
Cornellius Yudha Wijaya is a data science assistant manager and data writer. While working full-time at Allianz Indonesia, he loves to share Python and data tips via social media and writing.


The Golden Rule and the AI Utility Function – Part I


“Do unto others as you would have them do unto you.”

The Golden Rule. This may have been the first Sunday School lesson I learned (thanks, Mrs. Monroe).

The Golden Rule is a moral principle that states that you should treat others as you would like others to treat you. It is a foundational principle across many cultures and religions. As such, the Golden Rule should play an essential role in developing AI systems that deliver meaningful, relevant, responsible, and ethical outcomes.

Let’s use this blog to explore how we might integrate the Golden Rule into the AI Utility Function, the mathematical formula governing the AI model’s decision-making process.

Understanding the AI Model Golden Rule Ramifications

Encoding the Golden Rule into the AI utility function would involve specifying a set of rules – and the variables and metrics against which to measure the effectiveness of those rules – that govern the behavior of the AI system. These would include rules such as:

  • The AI system should treat humans with respect and dignity.
  • The AI system should not harm or allow humans to come to harm.
  • The AI system should be transparent in its actions and explain its decisions when humans request it.
  • The AI system should treat humans fairly and impartially without discriminating based on race, gender, or other protected characteristics and traits.
  • The AI system should use data responsibly and respect the privacy and consent of data subjects.
  • The AI system should be inclusive and work equally well across all spectra of society, avoiding bias and discrimination.
  • The AI system should have a positive purpose and contribute to the well-being and flourishing of human beings and the environment.
  • The AI system should be explainable and provide understandable and meaningful reasons for its actions and decisions.
  • The AI system should be trustworthy and act reliably, consistently, and honestly.

This is a great start, and it begins to make actionable the aspirations of the White House Office of Science and Technology Policy’s AI Bill of Rights (Figure 1).


Figure 1: The AI Bill of Rights

These rules become a mandatory checklist for any organization that seeks to design, develop, deploy, and monitor AI models that deliver meaningful, relevant, responsible, and ethical outcomes. And to make these rules actionable, we must integrate these rules, and their associated measures, into the AI Utility Function.

Refresher on the AI Utility Function

The AI utility function is a mathematical function that defines the goal or goals that the AI system is programmed to optimize.

The AI Utility Function assigns values to certain actions that the AI system can take. It captures the AI system’s preferences over possible alternatives. The higher the value, the more desirable the action or outcome is for the AI system. The AI utility function guides the decision-making process of the AI system by helping it to choose the action that maximizes its expected utility.


Figure 2: Defining the AI Utility Function

Integrating the rules associated with the Golden Rule into the AI utility function can guide the AI system to behave ethically. To do this, one would assign higher utility values to actions or outcomes that are consistent with the Golden Rule and lower utility values to actions or outcomes that violate it. For example, if the AI system is faced with a choice between helping a human in need or ignoring them, it could assign a higher utility value to helping them, because that is what it would want others to do for it if it were in need.
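
To make this concrete, here is a toy sketch of an agent choosing the action that maximizes expected utility. The actions, probabilities, and utility values are invented for illustration, not drawn from any real system:

```python
def choose_action(actions):
    """Pick the action with the highest expected utility:
    the sum over outcomes of probability * utility."""
    def expected_utility(action):
        return sum(p * u for p, u in action["outcomes"])
    return max(actions, key=expected_utility)

actions = [
    {"name": "help the human",    # Golden-Rule-consistent: high utility
     "outcomes": [(0.9, 10), (0.1, 2)]},
    {"name": "ignore the human",  # violates the rule: low utility
     "outcomes": [(1.0, -5)]},
]
print(choose_action(actions)["name"])  # -> help the human
```

Because helping has an expected utility of 9.2 versus -5 for ignoring, the agent's decision rule selects the Golden-Rule-consistent action.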

Integrating the Golden Rule into the AI Utility Function

We could use the following process to integrate the Golden Rule into the AI Utility Function:

  1. Define context. The Golden Rule can be interpreted in different ways depending on the context and scope of the AI system. For example, an AI system that interacts with customers in a retail store would have different rules than an AI system that monitors traffic flow in a city. Defining the context and scope helps narrow the relevant rules and metrics.
  2. Align rules. Identifying and aligning the rules that define the Golden Rules to the specific and relevant context and scope of the AI system is critical to the delivery of meaningful, relevant, responsible, and ethical outcomes. For example, in the case of a retail store AI system, rules could include treating customers with respect, providing accurate information, and protecting their privacy.
  3. Codify rules. Translating and codifying the rules into quantifiable measures is the heart of the process. Once the rules are aligned with the AI system context, the next step is to translate those rules into metrics that measure the effectiveness of those specific rules. For example, in the retail example, if the rule that we were seeking to integrate into the AI Utility Function was to “treat humans with respect and dignity,” then the metrics could include the number of customer complaints, the percentage of correct information provided, and the degree of transparency in data collection.
  4. Assign weights. Assigning weights to the metrics is necessary to determine the importance of each metric and the level of adherence required. This step requires the involvement and guidance of domain experts and stakeholders who are at the front lines of customer engagement and operational execution. In the retail example, our experts might decide that the weight assigned to customer privacy should be 50% higher than the weight assigned to the accuracy of product recommendations.
  5. Incorporate into the AI utility function. Once the metrics and their weights are defined, they can then be incorporated into the AI utility function. The AI utility function then maps the inputs to the desired outputs to facilitate the trade-off decisions necessary to deliver meaningful, relevant, responsible, and ethical outcomes.
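
Steps 3 through 5 above can be sketched as a weighted sum. The metric names, scores, and weights below are invented for illustration, with privacy weighted 50% higher than recommendation accuracy, as in the retail example:

```python
# Normalised metric scores in [0, 1], higher is better.
# Privacy's weight (0.3) is 50% higher than accuracy's (0.2).
weights = {"privacy": 0.3, "accuracy": 0.2,
           "transparency": 0.25, "complaints": 0.25}

def utility(scores, weights):
    """Weighted-sum utility over the Golden Rule metrics."""
    return sum(weights[m] * scores[m] for m in weights)

scores = {"privacy": 0.9, "accuracy": 0.8,
          "transparency": 0.7, "complaints": 0.95}
print(round(utility(scores, weights), 4))
```

In a real AI system this value would be computed over candidate actions, and the action with the highest utility chosen, so the weights directly shape the trade-off decisions the model makes.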

By identifying and integrating the metrics associated with the Golden Rule into the AI Utility Function, the AI system can be designed, deployed, managed, and monitored to ensure that the behaviors exhibited by the AI system align with the principles of the Golden Rule (Figure 3).


Figure 3: Integrating Golden Rule into AI Utility Function

Summary: The Golden Rule and the AI Utility Function – Part I

In Part 1 of the 2-part series on integrating the Golden Rule into the AI Utility Function, we first reviewed the Golden Rule (hey, it’s been a few years since Bible School for some of us). We then decomposed the Golden Rule into a series of rules that could provide actionable and measurable teeth to the AI Bill of Rights.

Then, after a quick review of the AI Utility Function, we walked through a simple process that any Citizen of Data Science could leverage to ensure that AI models integrate the concepts of the Golden Rule in the design, definition, development, and management of AI models that deliver meaningful, relevant, responsible, and ethical outcomes.

In part 2, we’ll dive into the specific variables and metrics one could use to integrate the Golden Rule into the AI Utility Function, but not before another (hehehe) lesson in economics.

Why NVIDIA Acquired OmniML

NVIDIA’s winning the AI race, and it’s not stopping any time soon. The chip-making giant has been hard at work investing in the AI ecosystem, funding startups like Inflection AI, RunwayML, Cohere, and more. It also seems like the company is building out its own AI tech stack with acquisitions, as seen by their stealthy takeover of OmniML.

OmniML is a company specialising in shrinking down ML models so they can be moved to the edge. Last year, it launched a platform known as Omnimizer which adapts computationally heavy AI models to lower-end hardware. It seems that the secret sauce behind shrinking models down is something NVIDIA wants, going so far as to acquire the startup in February of this year.

This acquisition hasn’t been announced by NVIDIA, leading many to speculate about the reasoning behind this move. Considering NVIDIA’s presence in edge AI, especially in fields like automobiles, robotics, and drones, OmniML might be what Team Green needs to take over the edge.

What OmniML does

The company was founded in 2021 by Dr. Di Wu, a software engineer from Facebook’s PyTorch division, and Dr. Huizi Mao, who worked in the mobile vision team under Google Research. Combining their expertise with that of Dr. Song Han, a pioneer of the deep compression technique for reducing compute footprints, they created OmniML.

The main offering of OmniML was Omnimizer, a platform targeted at making AI optimisation quick and easy at scale. Instead of relying on different platforms and products to deploy and optimise models for edge devices, Omnimizer provided a single platform for deployment, training, and measurement. In addition to this, the platform also optimised models to be able to run on even the lowest powered devices.

It does so by including the target hardware in the testing loop, resulting in models that are not only smaller and faster, but also better fitted to the hardware they run on. The model is optimised using neural architecture search, a technique for finding the best architecture for a given neural network under constraints such as latency and available power.

Before the launch of this platform, OmniML raised $10 million in seed funding, in a round led by GSR Ventures and Foothill Ventures, along with Qualcomm Ventures. The funding round shows that OmniML’s efforts were not in vain, as there was a very solid need for a platform that could automatically optimise models for the edge.

Through OmniML, companies can easily adapt their models to run on devices like drones, smart cameras, and automobiles. This would effectively allow models to move away from cloud dependency, democratising the ability to run ML models quickly and easily. Looking at the possible value that this can bring in the coming years, it’s no wonder NVIDIA snapped up the company in its infancy.

NVIDIA’s edge dreams

NVIDIA mainly has three edge offerings: the NVIDIA EGX platform for enterprise edge computing, the IGX platform for industrial applications, and Jetson for autonomous machines and embedded edge use cases. The company also offers NVIDIA software optimised to run on these devices, and customers can command their edge fleets with the Fleet Command platform.

This comprehensive set of offerings will be further strengthened by the Omnimizer platform, as it fits well into NVIDIA’s edge strategy. By adding the automated model optimisation techniques included in Omnimizer, NVIDIA can make models at the edge even more efficient. This will not only allow it to fit bigger models on smaller devices, but also create more capable models catered towards running on the edge.

As mentioned previously, OmniML also keeps the hardware in the loop when it comes to optimisation. This means that NVIDIA can create custom profiles for its hardware suite using OmniML’s tech stack, which will make sure that deployed models use edge hardware as efficiently and effectively as possible.

The crux of the strategy lies in the NGC (NVIDIA GPU Cloud) Catalog for enterprises. This catalog offers GPU-optimised software containers for enterprises looking for a quick and easy way to deploy models at scale. The edge models available in the catalog stand to benefit greatly from optimisation by OmniML.

If these models are optimised with this technique, NVIDIA stands to see significant performance and efficiency gains for models offered at the edge. Moreover, customers’ hardware can also be included in the optimisation software through the managed platform, creating a good solution-architecture fit no matter the configuration.

Just as with their other acquisitions and investments, OmniML serves to solidify NVIDIA’s leadership position in the AI market. If OmniML had continued as an independent player, it would have still brought value to NVIDIA’s tech stack, albeit through second-order effects. However, as a part of the green giant, OmniML provides a unique value add to cement NVIDIA’s position in edge AI.

NVIDIA’s relentless pursuit in the AI race continues as the chip-making giant invests in the AI ecosystem and acquires startups like OmniML. OmniML’s expertise in shrinking ML models for edge deployment aligns perfectly with NVIDIA’s presence in fields like autonomous machines and industrial applications. By integrating OmniML’s technology into its edge offerings, NVIDIA can optimize models for efficient deployment on lower-end hardware. Additionally, OmniML’s ability to keep hardware in the loop enables NVIDIA to create custom profiles, maximizing the utilization of its edge hardware suite. This strategic move strengthens NVIDIA’s position in the AI market and enhances its comprehensive edge AI strategy.


God’s Take on Tech: Vatican Launches AI Ethics Think Tank

In an interesting turn of events, Pope Francis, in collaboration with Santa Clara University’s Markkula Center for Applied Ethics, released a manual discussing the ethical aspects of AI. Together, they have established the Institute for Technology, Ethics, and Culture (ITEC), which serves as a Vatican-led think tank focusing on AI ethics to facilitate meaningful discussions about the impact of technology on humanity.

This comes three months after the Pope got a fancy makeover in a Balenciaga puffer jacket, thanks to AI image generator Midjourney.

OKAAYYY pic.twitter.com/MliHsksX7L

— leon (@skyferrori) March 25, 2023

Around the same time, major tech players, including Elon Musk, Steve Wozniak, and Evan Sharp, signed an open letter urging AI developers to temporarily halt the training of advanced AI experiments, expressing concerns about potential risks that could lead to a loss of control over civilisation. It emphasised that the current rapid development of AI, exemplified by the likes of ChatGPT, DALL-E 2, Midjourney, and voice-cloning software, may be irresponsible and fail to consider the consequences for society. The letter called for a six-month pause in training systems more powerful than GPT-4. Furthermore, it urged developers to establish robust safety protocols involving rigorous audits and oversight from independent experts.

Pope’s Call for Moral Guidance in Silicon Valley is Not New

The handbook, titled ‘Ethics in the Age of Disruptive Technologies: An Operational Roadmap’, aims to help technology companies navigate the ethical considerations of AI. ITEC claims to have consulted several AI and ML leaders in Silicon Valley before making the handbook. It covers various topics, with a focus on AI development, and emphasises the importance of upholding high ethical standards in the technology industry.

When the Pope’s AI-generated fake images took social media by storm, he said he believes in the positive impact that AI and ML can create, however, “At the same time, I am certain that this potential will be realised only if there is a constant and consistent commitment on the part of those developing these technologies to act ethically and responsibly.”

Even in 2019, Pope Francis spoke about AI misuse at “The Common Good in the Digital Age” conference in the Vatican. “If mankind’s so-called technological progress were to become an enemy of the common good, this would lead to an unfortunate regression to a form of barbarism dictated by the law of the strongest,” he told Reuters during the event.

Soon after, with support from Microsoft and IBM, Vatican City released a set of principles called the ‘Rome Call for AI Ethics’, which promotes the responsible and ethical use of AI.

Read more: The role of ‘God’ in the ‘Matrix’


Open Source Chatbots Are Nowhere Close to ChatGPT 


Every day we see a new chatbot — either built by a big-tech/recently funded company, or from the open source community. In the race to replicate OpenAI’s ChatGPT, developers have been taking a lot of shortcuts. The most common one these days is training the chatbots on data generated by ChatGPT.

The most recent chatbot claims to outperform ChatGPT. OpenChat, an open source chat alternative touted as decentralised, recently achieved a score of 105.7% relative to ChatGPT on the Vicuna GPT-4 benchmark. A huge feat on paper, but one that does not hold up on closer inspection.

This is the second model that claims to perform better than ChatGPT on the same Vicuna Benchmark. Earlier, Orca, a 13-billion parameter model, which was also trained on GPT-4 data, claimed to outperform OpenAI’s model.

Same-old, same-old

To start with, OpenChat is built on top of LLaMA-13B. This means the model is already off the table for commercial use, as Meta’s LLaMA is licensed for research purposes only. Furthermore, there is another thing to consider before boasting about the model’s benchmarks: the dataset used for fine-tuning. This LLaMA-based model is trained on a set of 6k of the 90k conversations available on ShareGPT, a hub for sharing ChatGPT and GPT-4 generated outputs on the internet.

When it comes to evaluation, the Vicuna GPT-4 benchmark only tests for style, not the information generated by the model. Furthermore, it is a GPT-based evaluation metric, which means any model trained on ChatGPT or GPT-4 data will be rated higher when judged by GPT, making the benchmark untrustworthy.

Recently, Hugging Face found a similar problem with other open source models as well. The founders of Hugging Face noted a lot of discrepancy between the evaluation results published in the models’ papers and the results when the same models are evaluated on Hugging Face’s benchmarks. David Hinkle, VP of software engineering at Securly, pointed out that a lot of recent models claiming to outperform LLaMA or GPT-4 are nowhere on the Open LLM Leaderboard.

Tall claims, short results

In short, it is a big claim that a model trained on ChatGPT data outperforms ChatGPT when benchmarked on a metric built on top of the same model. For an analogy, it is like a student rewriting their exam answers to match the teacher’s answer key, and then having the same teacher grade the exam against that key. Obviously, it is going to score better.

Andriy Mulyar, from Nomic AI, points this out, saying this is all just false hype. Imitating ChatGPT using ChatGPT-generated output is a false path to follow. Furthermore, the only thing these models copy is the style of ChatGPT, which makes the chatbots sound better on individual tasks. Evaluated across the board, considering all types of general tasks, ChatGPT is a much better assistant than any other.

Interestingly, after all the criticism, the researchers realised there was a problem with evaluating the model on the Vicuna GPT-4 benchmark. Hence, they transitioned to MT-Bench to test OpenChat’s performance. There, the model performed significantly worse than GPT-3.5-based ChatGPT, underlining the discrepancy between the evaluation benchmarks.

Users on Twitter have been pointing out that, beyond the evaluation metrics, the model hallucinates even more than ChatGPT. “I just tried this model and it is not good at all. Did you even try the model before posting this?” said a Twitter user.

Thanks to GPT

Whatever the metrics and benchmarks, one thing is getting increasingly clear about LLM-based chatbots: high-quality data works wonders. For this, the only model to thank is ChatGPT, as nearly every model today is being trained on synthetic data generated by it. No one has the secret sauce that OpenAI built for GPT. Recently, OpenAI was asked whether the open source community would be able to replicate what the company has built through Vicuna or LLaMA, to which Ilya Sutskever replied in the negative.

The trend of “this new model beats all other models on benchmarks” has been going on for a while now, but when evaluated on the same metrics as the others, “the new model” fails to perform. Moreover, even though the open source community has been trying to replicate ChatGPT, training on ChatGPT’s data might not be the best way forward, as OpenAI is already facing several lawsuits for training on internet data.


What is AI Hyperpersonalization? Advantages, Case Studies, & Ethical Concerns


For decades, marketers have been researching the best strategies to create effective marketing campaigns to keep up with the ever-evolving consumer preferences. AI hyperpersonalization is a recent addition to a marketer’s arsenal.

Traditional marketing strategies rely on broad consumer segmentation that is beneficial for reaching larger groups. But this approach is sub-optimal for understanding individual needs.

Marketers have also successfully experimented with personalization techniques based on historical consumer data. An estimate suggests that worldwide revenue generated by customer experience personalization and optimization software will exceed $11.6 billion by 2026.

But this is not enough.

Modern consumers' needs are constantly evolving. They expect brands to understand their wants and needs, and to anticipate and exceed them. Hence, a more precise approach tailored to individual needs is required.

Today, marketers can use AI and ML-based data-driven techniques to take their marketing strategies to the next level – through hyperpersonalization. Let’s discuss it in detail.

What Is AI Hyperpersonalization?

AI hyperpersonalization or AI-powered hyperpersonalization is an advanced form of personalized marketing strategy that uses real-time data and individual journey maps along with AI, big data analytics, and automation to deliver highly contextualized and tailored content, products, or services to the right users at the right time through the right channels.

Real-time customer data is integral in hyperpersonalization as AI uses this information to learn behaviors, predict user actions, and cater to their needs and preferences. This is also a critical differentiator between hyperpersonalization and personalization – the depth and timing of the data used.

While personalization uses historical data such as customers’ purchase history, hyperpersonalization uses real-time data extracted throughout the customer journey to learn their behavior and needs. For instance, a customer journey powered by hyperpersonalization would target each customer with custom advertising, unique landing pages, tailored product recommendations, and dynamic pricing or promotions based on their geographic data, past visits, browsing habits, and purchase history.

The Mechanics of AI Hyperpersonalization

Hyperpersonalization using AI starts from data collection and ends in highly tailored user experiences. Let’s get a brief overview of the relevant steps.

1. Data Collection

There is no AI without data. In this step, customer data is collected from various sources such as:

  • Browsing patterns
  • Transaction history
  • Preferred device
  • Social media activity
  • Geographic data
  • Demographics
  • Customers with similar preferences
  • Existing customer databases
  • IoT devices and more

2. Data Analysis

AI and ML algorithms analyze the collected data to identify patterns and trends. Depending on the problem, customer data analysis can be:

  • Descriptive (what's going on?)
  • Diagnostic (why did it happen?)
  • Predictive (what could happen in the future?)
  • Prescriptive (what should we do about it?)

This step is significant as it extracts actionable insights from the raw data and helps understand each customer.

3. Prediction & Recommendation

Based on the data analysis, the AI & ML models can predict the customer's behavior. This could involve anticipating a customer's interests or potential objections, enabling businesses to serve the customer's specific preferences proactively and deliver real-time personalized content, offers, and experiences. For instance, Starbucks generates 400,000 variants of hyperpersonalized emails each week via its real-time personalization engine, targeting individual customer preferences.
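The three steps above can be strung together end to end. The following is a minimal, illustrative Python sketch, not any vendor's actual system: the event types, their weights, the catalog, and all function names are assumptions made for this example.

```python
from collections import Counter

# Hypothetical signal weights for real-time events (step 1: data collection).
# A production system would ingest many more sources, as listed above.
EVENT_WEIGHTS = {"view": 1.0, "add_to_cart": 3.0, "purchase": 5.0}


def build_profile(events):
    """Step 2 (data analysis): aggregate weighted events into a per-category
    preference score, a descriptive view of what is going on for this user."""
    profile = Counter()
    for event_type, category in events:
        profile[category] += EVENT_WEIGHTS.get(event_type, 0.0)
    return profile


def recommend(profile, catalog, k=3):
    """Step 3 (prediction & recommendation): rank catalog items by how well
    their category matches the user's inferred preferences."""
    ranked = sorted(catalog, key=lambda item: profile[item["category"]],
                    reverse=True)
    return [item["name"] for item in ranked[:k]]


# A single user's real-time session stream and a toy catalog.
events = [("view", "sneakers"), ("view", "sneakers"), ("view", "sneakers"),
          ("add_to_cart", "sneakers"), ("purchase", "coffee")]
catalog = [{"name": "Trail Runner", "category": "sneakers"},
           {"name": "Espresso Kit", "category": "coffee"},
           {"name": "Desk Lamp", "category": "home"}]

profile = build_profile(events)
print(recommend(profile, catalog, k=2))  # → ['Trail Runner', 'Espresso Kit']
```

Real systems replace the hand-tuned weights with learned models and score against live behavioral data, but the collect-analyze-recommend loop is the same.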

Advantages of AI-powered Hyperpersonalization


Enhanced Customer Experience (CX) & Customer Engagement (CE)

When customers see the content/products/services tailored to their needs, it creates an intimate experience and enhances customer satisfaction. According to McKinsey research, 71% of customers expect a personalized experience, and 76% feel disappointed when they don’t get it.

Hyperpersonalization, therefore, eliminates generic experiences and replaces them with interactions that feel unique to each customer, leading to increased engagement. This heightened engagement increases the likelihood of conversion and fosters long-term customer loyalty.

Increased Sales & Revenue

A more relevant shopping or content experience means customers are more likely to find products or content they love and purchase, directly boosting sales and revenue. A whopping 97% of marketers report that personalization efforts positively impact business results. And a well-executed personalization strategy can deliver 5-8x ROI on marketing spend. Hence, by making the customer journey more intimate, hyperpersonalization improves conversion rates and increases average order value.

Prominent Case Studies of Hyperpersonalization Using AI

Case Study 1: E-commerce Industry (Amazon)

Amazon is a prime example of hyperpersonalization in the e-commerce industry. In 2021, Amazon's sales reached $469.8 billion, a 22% increase over 2020. The company uses a sophisticated AI-based recommendation engine that analyzes individual customer data, including:

  • Past purchases
  • Customer demographics
  • Search queries
  • Items in the shopping cart
  • Items that were viewed but not purchased
  • Average spend amount

Amazon analyzes this data to create personalized product recommendations and send highly contextualized emails to each of its shoppers. As a result, its recommendation engine reportedly drives about 35% of the company's sales.

Case Study 2: Entertainment Industry (Netflix)

Netflix has revolutionized the entertainment industry through its use of hyperpersonalization. A former VP of product innovation at Netflix stated in an interview:

“If one member in this tiny island expresses an interest for anime, then we're able to map that person to the global anime community. We know which are the best movies and TV shows for people in the world in that community.”

Reportedly, personalized recommendations save Netflix more than $1 billion every year. The company uses AI to analyze a vast array of customer data points, including:

  • Viewing history
  • Ratings given to different shows or movies
  • Time of day when a user watches certain content

By analyzing vast amounts of highly contextualized data, Netflix suggests hyperpersonalized content according to the user’s preference. As a result, 80% of the content hours watched on Netflix come from the recommendation system, while 20% comes from searches. This enhances customer experience and engagement and reduces the churn rate.

Concerns & Ethical Implications of AI Hyperpersonalization

While the benefits of hyperpersonalization are tremendous, there are also crucial concerns and ethical implications to consider:

Privacy Issues

Users may be uncomfortable that their every click, purchase, or interaction is being tracked and analyzed, even if the tracking is intended to improve the user experience. In September 2021, Netflix faced a fine of $190,000 imposed by the Personal Information Protection Commission (PIPC) of South Korea. Reportedly, Netflix violated the country's Personal Information Protection Act (PIPA) by unlawfully collecting personal information from users.

Consumer Manipulation

Hyperpersonalization could lead to increased consumer manipulation. With the knowledge of individual preferences and behaviors, companies can influence decision-making to a high degree, raising ethical questions about autonomy and consent. When companies know where you are, what you purchased, and your likes and dislikes, they are treading a tightrope between cool and creepy – with a high chance of entering the creepy realm.

In conclusion, hyperpersonalization, powered by AI and ML, has already brought significant advancements to various industries. However, its potential is yet to be fully actualized. For example, hyperpersonalization could translate into personalized medicine, with treatments and preventative strategies tailored to an individual patient's genetic makeup and lifestyle. However, these opportunities also have significant ethical implications and challenges that must be addressed.

For more AI-related content, visit unite.ai.

Top 10 Analytics Conferences in the USA for the Second Half of 2023

The upcoming months of 2023 promise a wealth of learning and networking opportunities for data professionals, with an exciting array of analytics conferences scheduled across the United States. These events will host some of the most brilliant minds in the field of analytics and artificial intelligence (AI), providing a platform to discuss current trends, share valuable insights, and explore what the future holds for the industry.

Here are the top ten analytics conferences to attend from July to December 2023:

  1. MachineCon 2023 (July 21, New York) – An exclusive gathering for leaders in the world of analytics and AI, MachineCon explores the transformative potential of advanced AI technologies and innovative analytics solutions that are changing the face of various industries. Organized by AIM Media House, a leading global technology media firm, this conference celebrates those who have mastered the art of turning data into a competitive advantage​.
  2. DataConnect Conference (July 20-21, Columbus, OH) – As reported by KDnuggets, the DataConnect Conference is a major event in the field of data analytics. It also offers virtual participation, making it accessible to a global audience​.
  3. The 2023 International Conference on Data Science (July 24-27, Las Vegas, NV) – This conference emphasizes the latest developments in data science, serving as a platform for researchers and practitioners to share their discoveries and insights​.
  4. Ai4 2023 (August 7-9, Las Vegas, NV) – Ai4 2023 is an all-encompassing conference that covers a broad spectrum of topics related to AI and analytics, bringing together business leaders and data practitioners to facilitate the adoption of AI and machine learning technologies​.
  5. Chief Data & Analytics Officer (CDAO), Chicago 2023 (August 8-9, Chicago, IL) – The Chicago edition of the CDAO conference is a significant event that unites Chief Data Officers and other analytics leaders to deliberate on strategies, trends, and challenges in the data analytics industry​.
  6. SAS Explore 2023 (September 11-14, Las Vegas, NV) – SAS Explore is a prominent conference that focuses on analytics and data science. The event includes a wide range of sessions and workshops, providing an excellent opportunity for learning and networking​.
  7. Chief Data & Analytics Officer (CDAO), Government 2023 (September 19-20, Washington, DC) – This iteration of the CDAO conference highlights the use of data analytics in the government sector. It provides a forum for discussion about how data and analytics can be used to enhance government services and operations​.
  8. ODSC West 2023 (October 31 – November 3, San Francisco, CA) – The Open Data Science Conference (ODSC) West is one of the world’s largest applied data science conferences. The event encompasses a wide range of topics, including AI, machine learning, data visualization, and data engineering. The conference also features a virtual component, making it accessible to attendees worldwide​.
  9. Data Science Salon SF: Applying AI & ML in the Enterprise (November 29, San Francisco, CA) – This conference focuses on the application of AI and machine learning in enterprise settings. It provides an opportunity for data science professionals to learn about the latest trends, techniques, and best practices in the industry​.
  10. Chief Data And Analytics Officers, APEX West (CDAO) (December 5, Arizona City, United States) – The CDAO APEX West conference is a significant event for data and analytics officers. It provides an opportunity for these leaders to come together to discuss the latest trends, strategies, and challenges in the field of data analytics​.

These ten conferences represent some of the most influential and anticipated events in the data analytics and AI industry for the second half of 2023. Whether you’re a data scientist, AI practitioner, or business leader, these events offer a wealth of knowledge, networking opportunities, and a glimpse into the future of data-driven technologies. Be sure to mark your calendars and register in advance to secure your spot.

The post Top 10 Analytics Conferences in the USA for the Second Half of 2023 appeared first on Analytics India Magazine.

Generative AI Will Change Game Dev Forever

The gaming industry has always been ahead of the tech curve when it comes to adopting new technologies. Unity, a top game engine, recently announced an AI marketplace that could revolutionise the way studios make games. Last month, Unreal Engine announced it would be integrating generative AI into its workflows. NVIDIA recently released ACE, a generative AI service that aims to bring intelligence to NPCs.

When looking at these innovations, a generative AI revolution seems to be brewing in the gaming world. From making more believable non-player characters, to generating game assets with the click of a button, to creating whole 3D worlds from a single prompt, generative AI holds the potential to change how games are developed forever.

A new revolution in game development

Unity’s AI marketplace is the biggest announcement in generative AI for gaming yet. Other products like NVIDIA ACE and Inworld AI only tackle a small part of game development, while Unity’s marketplace handles many more aspects of the process. Along with the recently-launched Unity Sentis and Muse generative platforms, this marketplace fleshes out Unity’s genAI capabilities.

Unity Muse allows game developers to harness AI-driven assistance during their workflows, such as real-time 3D creation for digital twins. It also includes a chatbot called Muse Chat, which will eventually allow devs to create textures and even animate a character using prompts.

Sentis, on the other hand, directly integrates neural networks into the game engine. This cross-platform tool will allow developers to run AI models on the edge and integrate them with various aspects of the game.

The marketplace completes Unity’s three-pronged AI strategy, offering pre-built AI solutions to accelerate certain game development workflows. At launch, the platform included some big names like Inworld AI, Polyhive, LMNT, and Atlas, which brought a host of AI-powered improvements to all aspects of the development process.

The marketplace offerings can be split into three main categories: asset creation, NPC platforms, and voice generators. These categories are representative of the larger use cases of generative AI in gaming. In addition, another interesting emerging trend is the rise of AI-powered game developer tooling.

Take modl.ai, for example, one of the offerings on the Unity marketplace. This AI engine aims to improve game testing workflows by automating test coverage for devs, managing error reports, events, and crashes in one platform. Such vertical-focused AI tools are still rare, as a majority of studios are still using general-purpose tools. Moreover, industry experts believe that generative AI has the potential to completely change the way games are made. John Riccitiello, the CEO of Unity, said in an interview,

“I think this is 10x the potential transformation because I don’t think anybody looks at their games and thinks of them as real worlds. We’re about to find out what happens when we make these worlds fully alive in terms of how it feels to the player.”

It seems that the industry is also moving towards this trend, notwithstanding the current hype around generative AI. Even as the world continues to hype up the field, game developers are finding actual use-cases for the technology.

More than just hype

A study conducted by a16z Games, an arm of the VC giant Andreessen Horowitz, shed some light on the state of generative AI in game development today. Drawing on its network of 243 independent game studios, a16z’s survey found that a whopping 87% of studios are already using generative AI, with 99% stating that they plan to do so in the future.

It also seems like generative AI can bring optimisations to both big and small studios. Troy Kirwin, a games investor at a16z, said in a tweet, “AI allows small teams of creators to build games previously only achievable at AAA budgets. Meanwhile, large studios are chomping at the bit to find any way to accelerate production timelines cheaply.”

Another interesting trend found by the survey is that 64% of studios plan to fine-tune and train their own models in-house. There have already been whispers of game studios doing so, such as Ubisoft’s in-house GenAI tool called ‘Ghostwriter’ for writing NPC dialogue. Roblox has also expressed enthusiasm about using generative AI to help its user-generated content grow.

This newfound interest seems to have come from a larger focus on vertically optimised tooling for game development. While a majority of studios currently use general-purpose tools like ChatGPT, Midjourney, Stable Diffusion, and GitHub Copilot, the launch of tooling by established players like Unity, NVIDIA, and Unreal Engine could catapult generative AI into widespread adoption by game studios.

The post Generative AI Will Change Game Dev Forever appeared first on Analytics India Magazine.

Killing ‘Project Iris’ can be a Blessing in Disguise for Google

Google has scrapped its latest augmented reality (AR) headset, according to a report by Insider. The publication reported that the project was shelved earlier this year following layoffs, reshuffles, and the departure of Google’s AR/VR chief Clay Bavor, according to three people familiar with the matter. However, Google is yet to confirm or deny whether Project Iris has been shelved.

This can be a blessing in disguise for Google, as its main rivals Meta and Apple haven’t been able to leave a mark in AR/VR wearables and are still finding their feet.

At its recent WWDC event, Apple introduced the Vision Pro AR/VR headset. Meanwhile, Meta announced the upcoming release of its Quest 3 headset in the fall and also reduced the price of the Quest 2. From Meta’s perspective, the headset was the key to entering the metaverse imagined by Zuckerberg.

The market’s verdict on the metaverse is that it is dead. “The Metaverse is now headed to the tech industry’s graveyard of failed ideas,” wrote Insider in its report.

While Apple’s Vision Pro is costly, Meta’s Quest series does not offer a strong use case apart from gaming. The Vision Pro provides a mere two hours of usage without being connected to a power source, which falls short of even a complete movie on an OTT platform or a full flight.

VR/AR headsets aren’t the future

There is currently little evidence that people would want to wear headsets regularly for long periods. Sitting in front of a laptop or desktop for long hours is cumbersome enough; imagine doing it in Apple’s snorkel-shaped Vision Pro.

Several users of Meta’s Quest 2 complained about motion sickness.

The world we currently live in is already drifting away from reality, with many of us mindlessly scrolling social media apps on our smartphones. The distinction between reality and the virtual world has already started to blur. With these AR/VR headsets coming into the picture, that distinction will be erased and virtual reality will become part of the real world.

Anand Mahindra, chairman of Tech Mahindra, expressed his concerns about the Vision Pro launch by tweeting: “And what about community-watching of movies & sports matches? Will that now be replaced by a roomful of zombies wearing headsets?”

Even if we grudgingly accept that VR/AR headsets are the future, it would take a substantial amount of time for the general public to own them. Apple’s Vision Pro comes with a staggering price tag of $3,499. To put that in perspective, you could buy four iPhone 14 units with the same amount and still have money left over.

In my understanding, VR/AR headsets are just gizmos that won’t pass the test of time. Perhaps that explains why even Tim Cook was not wearing the headset at the event.

Google’s not-so-good experience with Glass

Google is not new to the market of eye wearables. It launched Google Glass in 2013, but the product failed to achieve success. As a result, Google restricted its usage to enterprises only.

The failure of Glass can be attributed to the creators’ oversight in defining and validating the target users and the specific problems the product aimed to address. Rather than focusing on providing real solutions and value, they mistakenly relied on the product’s hype and assumed it would naturally appeal to everyone.

One can easily draw parallels between the failures of the metaverse and Glass.

Google isn’t fully out of the game

Google is playing it safe by partnering with Samsung and Qualcomm to create a mixed reality platform. It isn’t fully in the game, but it isn’t out either.

As per the report, Google has shifted its focus from hardware to software. The company is currently developing a “micro XR” platform intending to license it to various headset manufacturers, similar to how Google offers the Android operating system to a wide range of smartphone manufacturers.

That being said, the real use case for the metaverse is yet to be discovered, and with Google playing it too safe, Meta and Apple might take the lead.

The post Killing ‘Project Iris’ can be a Blessing in Disguise for Google appeared first on Analytics India Magazine.