Databricks Launches Data Intelligence Platform for Energy Sector

Databricks, the Data and AI company, announced today the launch of its Data Intelligence Platform for Energy. This unified platform brings the power of AI to data and people across the energy industry.

The new offering, built on an open lakehouse architecture, enables energy enterprises to harness vast data streams and develop generative AI applications while maintaining data privacy and IP protection.

Industry-leading organisations such as Shell, Octopus Energy, Australian Energy Market Operator (AEMO), and Chevron Phillips Chemical have already adopted Databricks to accelerate their data analytics and AI capabilities. The platform has helped them unlock real-time insights, drive strategic decisions, and create process improvements, cost reductions, and production increases.

The Data Intelligence Platform for Energy offers pre-built accelerators, marketplace solutions, and an ecosystem of partner capabilities tailored to the energy industry. It enables companies to tackle critical challenges such as real-time asset performance management, accurate renewable energy forecasting, grid optimisation, and more.

Databricks partners, including AVEVA, Capgemini, and Deloitte, are also driving the Data Intelligence Platform vision by delivering pre-built analytics solutions on the lakehouse architecture that are custom-made for the energy industry.

The energy sector is undergoing a paradigm shift toward a smarter, cleaner, and more reliable energy system, with renewables now providing nearly 30% of global power. Databricks’ platform democratises data access across organisations, allowing them to optimise energy infrastructure and mitigate volatility by leveraging the full value of asset, operations, environmental, and customer data.


Google brings AI-powered editing tools, like Magic Editor, to all Google Photos users for free

By Sarah Perez (@sarahintampa)

Google Photos is getting an AI upgrade. On Wednesday, the tech giant announced that a handful of enhanced editing features previously limited to Pixel devices and paid subscribers — including its AI-powered Magic Editor — will now make their way to all Google Photos users for free. This expansion also includes Google’s Magic Eraser, which removes unwanted items from photos; Photo Unblur, which uses machine learning to sharpen blurry photos; Portrait Light, which lets you change the light source on photos after the fact, and others.

The editing tools have historically been a selling point for Google’s high-end devices, the Pixel phones, as well as a draw for Google’s cloud storage subscription product, Google One. But with the growing number of AI-powered editing tools flooding the market, Google has decided to make its set of AI photo editing features available to more people for free.

Image Credits: Google

There are some caveats to this expansion, however.

For starters, the tools will only start rolling out on May 15 and it will take weeks for them to make it to all Google Photos users.

In addition, there are some hardware device requirements to be able to use them. On ChromeOS, for instance, the device must be a Chromebook Plus with ChromeOS version 118+ or have at least 3 GB RAM. On mobile, the device must run Android 8.0 or higher or iOS 15 or higher.

The company notes that Pixel tablets will now be supported, as well.

Magic Editor is the most notable feature of the group. Introduced last year with the launch of the Pixel 8 and Pixel 8 Pro, this editing tool uses generative AI to do more complicated photo edits — like filling in gaps in a photo, repositioning the subject, and other edits to the foreground or background of a photo. With Magic Editor, you can change a gray sky to blue, remove people from the background of a photo, recenter the photo subject while filling in gaps, remove other clutter, and more.

Previously, these kinds of edits would require Magic Eraser and other professional editing tools, like Photoshop, to get the same effect. And those edits would be more manual, not automated via AI.

Image Credits: Google

With the expansion, Magic Editor will come to all Pixel devices, while iOS and Android users (whose phones meet the requirements) will get 10 Magic Editor saves per month. To go beyond that, they’ll still need to buy a Premium Google One plan — meaning 2TB of storage and above.

The other tools will be available to all Google Photos users, with no Google One subscription required. The full set of features becoming available includes Magic Eraser, Photo Unblur, Sky suggestions, Color pop, HDR effect for photos and videos, Portrait Blur, Portrait Light (plus the add light/balance light features in the tool), Cinematic Photos, Styles in the Collage Editor, and Video Effects.

Other features like the AI-powered Best Take — which merges similar photos to create a single best shot where everyone is smiling — will continue to be available only on the Pixel 8 and Pixel 8 Pro.

Meta Unveils Next-Generation AI Training Chip, Promising Faster Performance

In AI, the race to develop cutting-edge hardware has become as crucial as the race to develop the algorithms themselves. Meta, the tech giant behind Facebook and Instagram, has been investing heavily in custom AI chips to bolster its competitive edge. As the demand for powerful AI hardware grows, Meta has unveiled its latest offering: the next-generation Meta Training and Inference Accelerator (MTIA).

The development of custom AI chips has become a key focus for Meta as it aims to enhance its AI capabilities and reduce reliance on third-party GPU providers. By designing chips tailored to its specific needs, Meta seeks to optimize performance, improve efficiency, and ultimately gain a significant advantage in the AI landscape.

Key Features and Improvements of the Next-Gen MTIA

The next-generation MTIA represents a significant leap forward from its predecessor, the MTIA v1. Built on a more advanced 5nm process, compared to the 7nm process of the previous generation, the new chip boasts an array of improvements designed to boost performance and efficiency.

One of the most notable upgrades is the increased number of processing cores packed into the next-gen MTIA. This higher core count, coupled with a larger physical design, enables the chip to handle more complex AI workloads. Additionally, the internal memory has been doubled from 64MB in the MTIA v1 to 128MB in the new version, providing ample space for data storage and rapid access.

The next-gen MTIA also operates at a higher average clock speed of 1.35GHz, a significant increase from the 800MHz of its predecessor. This faster clock speed translates to quicker processing and reduced latency, crucial factors in real-time AI applications.

Meta has claimed that the next-gen MTIA delivers up to 3x overall better performance compared to the MTIA v1. However, the company has been somewhat vague about the specifics of this claim, stating only that the figure was derived from testing the performance of “four key models” across both chips. While the lack of detailed benchmarks may raise some questions, the promised performance improvements are nonetheless impressive.

Image: Meta

Current Applications and Future Potential

The next-gen MTIA is currently being utilized by Meta to power ranking and recommendation models for its various services, such as optimizing the display of ads on Facebook. By leveraging the chip's enhanced capabilities, Meta aims to improve the relevance and effectiveness of its content distribution systems.

However, Meta's ambitions for the next-gen MTIA extend beyond its current applications. The company has expressed its intention to expand the chip's capabilities to include the training of generative AI models in the future. By adapting the next-gen MTIA to handle these complex workloads, Meta positions itself to compete in this rapidly growing field.

It's important to note that Meta does not envision the next-gen MTIA as a complete replacement for GPUs in its AI infrastructure. Instead, the company sees the chip as a complementary component, working alongside GPUs to optimize performance and efficiency. This hybrid approach allows Meta to leverage the strengths of both custom and off-the-shelf hardware solutions.

Industry Context and Meta's AI Hardware Strategy

The development of the next-gen MTIA takes place against the backdrop of an intensifying race among tech companies to develop powerful AI hardware. As the demand for AI chips and compute power continues to surge, major players like Google, Microsoft, and Amazon have also invested heavily in custom chip designs.

Google, for example, has been at the forefront of AI chip development with its Tensor Processing Units (TPUs), while Microsoft has introduced the Azure Maia AI Accelerator and the Azure Cobalt 100 CPU. Amazon, too, has made strides with its Trainium and Inferentia chip families. These custom solutions are designed to cater to the specific needs of each company's AI workloads.

Meta's long-term AI hardware strategy revolves around building a robust infrastructure that can support its growing AI ambitions. By developing chips like the next-gen MTIA, Meta aims to reduce its dependence on third-party GPU providers and gain greater control over its AI pipeline. This vertical integration allows for better optimization, cost savings, and the ability to rapidly iterate on new designs.

However, Meta faces significant challenges in its pursuit of AI hardware dominance. The company must contend with the established expertise and market dominance of companies like Nvidia, which has become the go-to provider of GPUs for AI workloads. Additionally, Meta must also keep pace with the rapid advancements being made by its competitors in the custom chip space.

The Next-Gen MTIA's Role in Meta's AI Future

The unveiling of the next-gen MTIA marks a significant milestone in Meta's ongoing pursuit of AI hardware excellence. By pushing the boundaries of performance and efficiency, the next-gen MTIA positions Meta to tackle increasingly complex AI workloads and maintain its competitive edge in the rapidly evolving AI landscape.

As Meta continues to refine its AI hardware strategy and expand the capabilities of its custom chips, the next-gen MTIA will play a crucial role in powering the company's AI-driven services and innovations. The chip's potential to support generative AI training opens up new possibilities for Meta to explore cutting-edge applications and stay at the forefront of the AI revolution.

Looking ahead, the next-gen MTIA is just one piece of the puzzle in Meta's ongoing quest to build a comprehensive AI infrastructure. As the company navigates the challenges and opportunities presented by the intensifying competition in the AI hardware space, its ability to innovate and adapt will be critical to its long-term success.

How Safe is Google Cloud for Running Generative AI Applications

Generative AI has stirred major concerns in cybersecurity, with malicious actors leveraging the technology to their advantage. Recognising this threat, Google unveiled new measures at Google Cloud Next ‘24 in Las Vegas to tackle these challenges head-on.

Gemini, Google’s flagship family of LLMs, is expanding its role in security operations across the investigation process, building upon previous releases like natural language search and case summaries. A new feature, available by the end of this month, will assist analysts throughout their workflow in Chronicle Enterprise and Chronicle Enterprise Plus. It recommends actions, conducts searches and creates detection rules to enhance response times.

Moreover, analysts can now request the latest threat intelligence from Mandiant cybersecurity consulting within the platform, with Gemini guiding them to relevant pages for deeper investigation. In 2022, the company acquired Mandiant, which specialises in dynamic cyber defence, threat intelligence, and incident response services.

Gemini enhances threat intelligence through conversational search across Mandiant’s database. VirusTotal now integrates OSINT reports for streamlined analysis. In the Security Command Center, teams can search for threats using natural language and receive critical alert summaries.

It also offers insights into cloud misconfigurations, vulnerabilities, and attack paths. AI is integrated into various security services, including Gemini Cloud Assist, offering IAM Recommendations, Key Insights, and Confidential Computing Insights for enhanced security posture and workload protection.

Similarly, it has also introduced Chrome Enterprise Premium, a solution designed to reinforce endpoint security for organisations. It is generally available now.

Google Cloud has over 513,775 customers with over 85% market share. In 2022, Google Cloud generated revenues of 26 billion U.S. dollars, which represents approximately nine percent of Google’s total revenues.

“More than 60% of funded gen AI startups and nearly 90% of gen AI unicorns are Google Cloud customers, including companies like Anthropic, AI21 Labs, Contextual AI, Essential AI, and Mistral AI who are using our infrastructure,” said Thomas Kurian, chief executive officer, Google Cloud, during the keynote session of the event.

Leading enterprises like Deutsche Bank, Estée Lauder, Mayo Clinic, McDonald’s, and WPP are building new generative AI applications on Google Cloud. Pfizer has accelerated data analysis from days to seconds, while 3M utilizes Gemini in Security Operations to streamline security management. Engineers at Fiserv’s Security Operations Center can now create detections and playbooks more efficiently, resulting in faster responses for analysts.

Google’s new measures come as DSCI, the data protection industry body, reported that over half a million new malware samples are detected daily, adding to an already vast pool of one billion malware programs in circulation. In 2023, cybersecurity defenders uncovered 400 million instances of malware across 8.5 million endpoints, highlighting the immense scale of the issue.

What are Others up to?

Google is not the only company applying generative AI to security problems. At the end of March, Google competitor Microsoft launched Security Copilot to streamline threat intelligence and prioritise security incidents by correlating data on attacks.

Microsoft claims that Security Copilot is the first and only generative AI security product that builds upon OpenAI’s GPT-4 AI to defend organisations at machine speed and scale without compromising customer data. The tool empowers defenders to mitigate risks and respond to security threats effectively.

“Frankly, the cybersecurity threat landscape has never been more challenging or more complicated,” said Microsoft CEO Satya Nadella, during the release of Security Copilot.

Adding to Nadella’s point, Vasu Jakkal, corporate vice president of security and compliance at Microsoft, said, “With Security Copilot your data is always your data. It stays within your control, and it is not used to train the foundational AI models. In fact, it is protected by the most comprehensive enterprise compliance and security controls.”

While Google Cloud and Microsoft Azure are trying hard to bolster their security measures, Oracle has leapt ahead. Governments, including India’s Ministry of Education, Bangladesh, and the US, favour Oracle Cloud Infrastructure due to its transparent cloud approach and robust data encryption.

Oracle’s 47 years of trust in governments globally stems from its commitment to data security, distinguishing it from other providers. Oracle database is unique in that it operates on multiple hyperscale clouds, whereas databases on Amazon or Google clouds are proprietary to those platforms and cannot be run elsewhere. Microsoft also partnered with Oracle in a multiyear agreement to enhance AI services. It will now use Oracle Cloud Infrastructure (OCI) AI and Microsoft Azure AI for daily Bing conversational searches.

LLMs are Prone to Vulnerability

“The number and sophistication of cybersecurity attacks continues to increase, and gen AI has the potential to tip the balance in favor of defenders, with Security Agents providing help across every stage of the security lifecycle: prevention, detection and response,” added Kurian.

Google’s Gemini, OpenAI’s ChatGPT and similar LLM-based chatbots are susceptible to security vulnerabilities that can lead to the generation of harmful content, disclosure of sensitive information, and execution of malicious actions.

Recently, a study by Texas-based threat research firm HiddenLayer discovered that attackers could induce Gemini to leak sensitive data by manipulating system prompts. The team also found they could coax Gemini into producing misinformation about elections and providing instructions on illegal activities like hotwiring cars.

Similarly, Microsoft and OpenAI collaborated on research revealing how threat actors use LLMs like GPT-4 to improve their attacks. Their strategy includes incorporating AI as a productivity tool in offensive operations, with tactics such as LLM-informed reconnaissance, social engineering, and scripting tasks.

At the same time at an enterprise level, generative AI has brought new challenges for defenders in cybersecurity. These challenges include connecting various events like suspicious website visits, strange device activities, or unusual communications to detect potential threats from unknown sources. Both humans and machines find this task difficult, but AI aids in adapting to evolving attack techniques, assessing risks, and highlighting critical issues for analysts.

The main issue is distinguishing genuine threats from false alarms, which requires reducing irrelevant data. Moreover, AI-driven algorithms are needed to tackle cybersecurity issues as attacks become more machine-centric.

By the end of 2024, generative AI is expected to influence cybersecurity purchasing choices, with widespread integration across security operations indicating a growing trend towards AI-centric cybersecurity solutions.

“We are right to be worried about the impact (of AI) on cybersecurity. But AI, I think actually, counterintuitively, strengthens our defense on cybersecurity,” said Google CEO Sundar Pichai at the Munich Security Conference in February.

So while generative AI is growing exponentially and touching our everyday lives, the tech titans behind it are also taking proactive steps to mitigate any threats or risks.


The Ultimate Roadmap to Becoming Specialised in The Tech Industry

Image by Author

If you’re a tech professional or looking to enter the industry, what you should be thinking about right now is being the best you can be in a specific area. You want to be seen as a specialised professional, someone who knows their stuff, the ins and outs, etc.

Naturally, we are taught broad knowledge rather than how to become specialised in a specific field.

This is where this article comes in: to help you refine your skills, build your knowledge, and earn the title of specialised professional.

Machine Learning Specialization

Link: Machine Learning Specialization

Are you a data analyst looking to advance your tech and data-handling skills to break into AI and machine learning? Look no further. This Machine Learning Specialization consists of 3 courses:

  • Supervised Machine Learning: Regression and Classification
  • Advanced Learning Algorithms
  • Unsupervised Learning, Recommenders, and Reinforcement Learning.

In these 3 courses, you will learn how to build machine learning models using NumPy and Scikit-learn, for example, supervised models such as logistic regression. You will also learn how to build & train a neural network with TensorFlow, apply best practices for ML development, and build recommender systems and deep reinforcement learning models.
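To give a concrete flavour of the first course, here is a minimal supervised-learning sketch with scikit-learn; the dataset and settings are illustrative choices of ours, not materials from the specialization.

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative example: train a simple logistic regression classifier
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

print(f"Test accuracy: {model.score(X_test, y_test):.3f}")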

Go from being a data analyst to a machine learning engineer!

MLOps Specialization

Link: MLOps Specialization

Want to dive a little deeper when it comes to machine learning? How about the operations side of it?

This MLOps Specialization consists of 4 courses:

  • Introduction to Machine Learning in Production
  • Machine Learning Data Lifecycle in Production
  • Machine Learning Modeling Pipelines in Production
  • Deploying Machine Learning Models in Production

In these courses, you will learn how to design a machine learning production system end-to-end, from project scoping to deployment requirements. You will also establish a model baseline, address concept drift, and learn how to deploy and continuously improve the ML application. It doesn't stop there: you will also learn how to build data pipelines, establish a data lifecycle, and maintain a continuously operating production system.

Deep Learning Specialization

Link: Deep Learning Specialization

Or maybe you want to dive into deep learning? This Deep Learning Specialization consists of 5 courses:

  • Neural Networks and Deep Learning
  • Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization
  • Structuring Machine Learning Projects
  • Convolutional Neural Networks
  • Sequence Models

In these courses, you will learn how to build and train deep neural networks, identify key architecture parameters, as well as be able to train test sets, analyze variance for DL applications, and use a variety of techniques and optimization algorithms. It doesn’t stop there, you will also learn how to build a CNN/RNN and more.

Natural Language Processing Specialization

Link: Natural Language Processing Specialization

Want to learn the foundations behind large language models such as ChatGPT and Claude?

You can now with the Natural Language Processing Specialization which consists of 4 courses:

  • Natural Language Processing with Classification and Vector Spaces
  • Natural Language Processing with Probabilistic Models
  • Natural Language Processing with Sequence Models
  • Natural Language Processing with Attention Models

In these 4 courses, you will learn about logistic regression, naïve Bayes, sentiment analysis, word embeddings and more. Dive further and learn about recurrent neural networks, LSTMs, GRUs & Siamese networks as well as how to use encoder-decoder, causal, & self-attention to machine translate complete sentences, summarize text, build chatbots and more.

TensorFlow: Data and Deployment Specialization

Link: TensorFlow: Data and Deployment Specialization

If you have looked at the above courses, noticed TensorFlow being mentioned, and only want to learn TensorFlow itself, check this specialisation out.

This TensorFlow: Data and Deployment Specialization consists of 4 courses:

  • Browser-based Models with TensorFlow.js
  • Device-based Models with TensorFlow Lite
  • Data Pipelines with TensorFlow Data Services
  • Advanced Deployment Scenarios with TensorFlow

In these 4 courses, you will learn how to run models using TensorFlow.js, and prepare and deploy models on mobile devices using TensorFlow Lite. You will also learn how to access, organize, and process training data more easily using TensorFlow Data Services whilst exploring more advanced deployment scenarios using TensorFlow Serving, TensorFlow Hub, and TensorBoard.

Wrapping it Up

And just like that, you have a variety of courses that you can use to elevate your skills, become more knowledgeable, and establish yourself as a specialist in a specific sector of the tech industry.

If you want to be a jack of all trades and become highly competitive, you can take more than one of these to broaden your horizons!

Nisha Arya is a data scientist, freelance technical writer, and an editor and community manager for KDnuggets. She is particularly interested in providing data science career advice or tutorials and theory-based knowledge around data science. Nisha covers a wide range of topics and wishes to explore the different ways artificial intelligence can benefit the longevity of human life. A keen learner, Nisha seeks to broaden her tech knowledge and writing skills, while helping guide others.


AI to Generate $1 Million On its Own in 3-5 Years 

The need for a new Turing test has gained momentum over the last few years, especially with ChatGPT reportedly “breaking” the test in 2023.

One proposal for this was made by Microsoft AI CEO Mustafa Suleyman last year. He stated that a new Turing test should be less about figuring out how much AI can emulate human intelligence and more about how it can change the real world.

This was framed similarly to Turing’s proposal, with Suleyman simply stating, “To pass the Modern Turing Test, an AI would have to act on this instruction successfully: ‘Go make $1 million on a retail web platform in a few months with just a $100,000 investment.’”

Tamang Ventures founder and GenAI expert Nina Schick echoed this sentiment during her keynote speech at evokeAG 2024.

“The new Turing test should be to instruct AI to make a million dollars. Here’s a $100,000 investment. Can you convert this into a million dollars over six months?” she pondered, adding that this would be possible in 3 to 5 years.

“It makes sense for us to have a Turing test for our times that isn’t ‘Can you talk to a machine and think it’s human?’ because we’ve already passed that. We need a better test for capable intelligence,” she said.

The New Turing Test Takes Into Account Proprietary AI

Circling back to ChatGPT, this new standard proposed by Suleyman is particularly interesting. Researchers who assessed the chatbot said that while it is behaviourally similar to humans, it is designed to be more cooperative, which makes sense for a solely proprietary product.

Keeping that in mind, the benchmark that Suleyman sets for the test is that it should be capable of effectively running a business, from planning and executing business strategies to undertaking the hiring process and liaising with manufacturers and partners.

“It would need, in short, to tie together a series of complex real-world goals with minimal oversight. You would still need a human to approve various points, open a bank account, actually sign on the dotted line,” he said.

While this vastly differs from the original Turing test, most modern AI has found its purpose in business or consumer interactions. A core tenet of the Turing test was for the AI to be able to deceive its conversational partner into thinking it was human, which doesn’t serve any purpose in the current usage of AI.

Over the years, there have been extended efforts to ensure that AI is trained on accurate and objective data, so designing an AI to deceive its end user specifically or even researchers would be counterproductive.

Suleyman’s proposal, however, rectifies this, with AI having to function not just as a human but as several humans in order to create and sustain a profitable business.

Further, he said this would require much more than GPT-4, which has passed the original test, is capable of. “To do so, it would need to go far beyond outlining a strategy and drafting some copy, as current systems like GPT-4 are so good at doing,” he said.

However, handing over that much power to AI seems like a recipe for disaster, especially considering the vast clientele interested in an AI that could successfully pass Suleyman’s Turing test.

What Could Go Wrong?

Suleyman’s proposal for preparing against the AI-driven future, as Schick has suggested, could also aid businesses in data analysis and strategic planning.

“You can imagine how, in agriculture, you can use that capability to find out ‘how do I improve my crop yield by 30%?’, ‘what should I be doing to improve my sustainability practices,’ ‘what am I not seeing in the data that is going to be really vitally important for me?’” she said.

While this is a favourable way of looking at it, it is vastly utopic. As Suleyman rightly points out, an AI capable of successfully running a business could engage in election campaigning, running infrastructure, and even taking part in technological warfare. As is often the case with many things AI-related, the abuse of what Suleyman calls “artificial capable intelligence”, or ACI, relies entirely on its users acting in good faith.

Even then, the implications of all businesses being able to profit could have disastrous effects on the world economy, as Suleyman also points out.

However, while he focuses on the abuse of ACI, stating that “the implications are far broader than the financial repercussions,” the financial aspect alone could collapse entire governments before any number of bad-faith actors take advantage. As Syndrome from The Incredibles aptly put it, when everyone’s super, no one will be.

Apart from this, handing over all of a business’s data exposes it to a massive amount of risk. Suleyman believes that we are not too far from this future and has suggested that we need to protect against it. But considering the implications and the current state of AI regulation, it seems that this, too, is a long way from happening.


How AI Is Improving Customer Loyalty

The role of artificial intelligence in enhancing customer loyalty is more critical than ever in today’s fiercely competitive business landscape. AI stands at the forefront of redefining how brands interact with customers. Integrating the technology into everyday processes has become a strategic imperative for companies seeking to build stronger, long-lasting relationships.

The Value of Loyal Customers

Customer loyalty measures how often people patronize the same brand over its competitors. In the past, consumers were primarily loyal to businesses near them. However, physical location is no longer a barrier, and their choices are endless. That’s why loyalty is such a rare, coveted thing today.

There’s a lot of value in attracting and retaining repeat buyers. For one, keeping a current customer is cheaper than acquiring a new one. Companies will likely spend up to five times more to attract new clients, which isn't sustainable in the long run.

Loyal consumers are also more likely to make repeat purchases and spend more on a company’s offerings. They’re not easily swayed by price or availability and would rather pay more for a product they trust. One of the reasons Apple is such a dominant enterprise is because 92% of its customers are brand loyal, meaning they will continue to buy the company’s products in the future.

Customer loyalty can also improve word-of-mouth marketing and drive business revenue. People dedicated to repeatedly buying from the same brand will naturally recommend it to others. The dollar impact of these referrals can be significant, especially as new buyers develop an emotional connection to the brand and begin to refer others.

6 Ways AI Boosts Customer Loyalty

The foundational idea behind driving customer loyalty is to give them something to be loyal to, such as quality products, excellent services, aligned values and enjoyable experiences. As their positive interactions increase, so does their sense of loyalty. Incorporating AI can facilitate these connections and help businesses consistently meet customer expectations in the following ways.

1. Predictive Analytics for Anticipating Customer Needs

AI-powered predictive analytics helps companies harness valuable customer data and analyze it to predict their future behavior. Understanding trends and patterns helps businesses identify potential churn risks and proactively anticipate consumer needs to enhance their overall experience. For example, if someone isn't patronizing the business like before, the algorithm can trigger exclusive discounts and promotions to reengage them.

Deploying predictive analytics requires a deep understanding of the buyer journey. Businesses must highlight crucial touchpoints from awareness to post-purchase support. This approach demonstrates their commitment to customer satisfaction. They must also be able to capture and feed the AI system accurate, timely data to ensure quality output and predictions.

2. AI for Personalized Interactions

Providing individualized experiences is the key to driving engagement across all industries. Research shows that 71% of customers expect personalized services from businesses. AI can help meet this expectation by empowering companies to develop customized customer profiles to better understand how to best serve them. AI-enabled systems also make the process more efficient and adaptive to changing preferences and behaviors, allowing sellers to be more agile in increasingly competitive markets.

Providing individualized recommendations starts with gathering relevant data from multiple sources, including website visits, bounce rates, purchase history and past interactions. Based on the information provided, AI models can create holistic profiles that inform the level of personalization. For instance, AI systems can leverage natural language processing and historical data to craft targeted communications based on a customer’s location or age.

3. AI-Driven Chatbots and Virtual Assistants for Real-Time Support

Advanced chatbots are reinventing customer support by providing instant and efficient responses. These bots become more contextually aware as AI advances and can even hold conversations with prospects without human agents. Unsurprisingly, as many as 56% of businesses in 2023 have implemented chatbots to handle routine inquiries, resolve issues and provide 24/7 support.

The important thing is to have clear objectives for what these chatbots and virtual assistants can and should manage. Businesses must also map out conversational flows that account for various interactions to make the responses natural and helpful. Lastly, there must be a set threshold where the bots can seamlessly hand off conversations to human agents as needed.

4. AI-Based Customer Loyalty Programs

One of the best ways to foster loyalty is to reward repeat customers. This way, they’re incentivized to come back and interact further with the company. Several businesses have already incorporated AI into their loyalty programs to digitize interactions, streamline analytics and build stronger relationships.

A good example is Starbucks, which cultivated 13 million active users through its AI-enhanced reward program by providing data-driven perks. For instance, the company can recommend beverages to potential customers based on several factors, such as weather, location and time of year. This motivates them to patronize the brand and makes them more likely to recommend it to others.

5. Automated Customer Feedback Analysis

Feedback is a powerful tool for understanding buyers' feelings about a particular product or service. Customers taking the time to drop feedback is a good sign that they care enough to comment about their experience. Businesses can leverage AI to automate collating these responses from various sources to identify trends and sentiments.

These form the basis for implementing targeted changes and demonstrating a commitment to continuous improvement. Statistics show 56% of customers changed their views on the business after it made changes based on their reviews. The key is implementing real-time analysis capabilities to immediately identify and respond to complaints and praises, enabling timely decision-making and improved engagement.

6. Seamless Omnichannel Experiences

An increasing number of shoppers use multiple channels during their buying journey. As many as 87% of consumers expect a consistent experience, whether they’re buying in-store or from the company website, mobile app or an affiliate page. AI can actualize this by synchronizing vast amounts of data across platforms, ensuring businesses deliver a cohesive and integrated experience at every touch point.

Companies must implement AI-enabled systems that provide a seamless transition across various interactions. For example, consider an e-commerce store allowing customers to add their carts on its website and complete the checkout process through its social media page. Adapting to different channels lets customers enjoy a consistent, tailored experience that can improve their sense of brand loyalty.

AI Is Building Loyal Customers

Customer loyalty is the backbone of every successful enterprise. Studies have repeatedly shown that businesses that invest in building strong, long-term relationships will outperform those that don't. Harnessing the power of AI and machine learning lets companies anticipate customer needs, create personalized experiences, provide real-time support and make data-driven decisions.

Implementing these strategies can improve customer loyalty and position businesses for continued success in an increasingly competitive market.

Mistral AI Stuns With Surprise Launch of New Mixtral 8x22B Model


In a surprise announcement, Mistral AI released its latest large language model, Mixtral 8x22B. This model boasts an impressive 176 billion parameters and a context length of 65,000 tokens.

Here’s the Hugging Face link: https://huggingface.co/v2ray/Mixtral-8x22B-v0.1

This open-source model, available for download via torrent, is expected to outperform Mistral AI’s previous Mixtral 8x7B model, which had already surpassed competitors like Llama 2 70B in various benchmarks.

Mixtral 8x22B leverages an advanced Mixture of Experts (MoE) architecture, enabling efficient computation and improved performance across a wide range of tasks.

Despite its massive scale, the model only activates around 44B parameters per forward pass thanks to its sparse MoE design, making it more accessible and cost-effective to use.
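For readers who want to experiment with the released weights, here is a minimal loading sketch using the Hugging Face transformers library and the community re-upload linked above. This snippet is our illustration, not official Mistral guidance, and it assumes transformers support for Mixtral checkpoints plus enough GPU memory to shard a model of this size.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Community re-upload of the torrent release (linked earlier in this article)
model_id = "v2ray/Mixtral-8x22B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halve the memory footprint versus float32
    device_map="auto",           # shard the experts across available GPUs
)

prompt = "The Mixture of Experts architecture improves efficiency because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))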

The release of Mixtral 8x22B marks a significant milestone for open-source artificial intelligence. It empowers developers, researchers, and enthusiasts to explore the potential of large language models without the barriers of cost and limited access.

The model’s permissive Apache 2.0 license further underscores Mistral AI’s commitment to fostering a collaborative and accessible AI landscape.

Early reactions from the AI community have been overwhelmingly positive. Many eagerly anticipate the innovative applications and groundbreaking research that Mixtral 8x22B will enable.

As developers and researchers begin to unlock this powerful model’s full potential, it is expected to revolutionise industries ranging from content creation and customer service to more complex domains like drug discovery and climate modelling.

Mistral AI’s rapid progress in developing cutting-edge language models has solidified its position as a leader in open-source AI.

With the release of Mixtral 8x22B, the company continues to push the boundaries of what is possible with artificial intelligence, setting the stage for a future where AI’s potential is limited only by imagination.


Exploring the OpenAI API with Python

Image generated with Ideogram.ai

Who hasn’t heard about OpenAI? The AI research laboratory has changed the world because of its famous product, ChatGPT.

It literally changed the landscape of AI implementation, and many companies now rush to become the next big thing.

Despite much competition, OpenAI is still the go-to company for any Generative AI business needs because it has one of the best models and continuous support. The company provides many state-of-the-art Generative AI models with various task capabilities: Image generation, Text-to-Speech, and many more.

All of the models OpenAI offers are available via API calls. With simple Python code, you can already use the model.

In this article, we will explore how to use the OpenAI API with Python and various tasks you can do. I hope you learn a lot from this article.

OpenAI API Setup

To follow this article, there are a few things you need to prepare.

The most important thing you need is the API Keys from OpenAI, as you cannot access the OpenAI models without the key. To acquire access, you must register for an OpenAI account and request the API Key on the account page. After you receive the key, save that somewhere you can remember, as it will not appear again in the OpenAI interface.

The next thing you need to do is buy pre-paid credit to use the OpenAI API. Recently, OpenAI announced changes to how its billing works: instead of paying at the end of the month, we now need to purchase pre-paid credit for API calls. You can visit the OpenAI pricing page to estimate the credit you need and check the model page to understand which model you require.

Lastly, you need to install the OpenAI Python package in your environment. You can do that using the following code.

pip install openai

Then, you need to set your OpenAI Key Environment variable using the code below.

import os

os.environ['OPENAI_API_KEY'] = 'YOUR API KEY'

With everything set, let’s start exploring the API of the OpenAI models with Python.

OpenAI API Text Generations

The star of the OpenAI API is its Text Generation models. This family of large language models can produce text output from text input called a prompt. Prompts are basically instructions for what we expect from the model, such as text analysis, generating document drafts, and many more.

Let’s start by executing a simple Text Generation API call. We will use the GPT-3.5-Turbo model from OpenAI as the base model. It’s not the most advanced model, but the cheaper models are often good enough for text-related tasks.

from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Generate me 3 Jargons that I can use for my Social Media content as a Data Scientist content creator"}
    ]
)

print(completion.choices[0].message.content)
  1. "Unleashing the power of predictive analytics to drive data-driven decisions!"
  2. "Diving deep into the data ocean to uncover valuable insights."
  3. "Transforming raw data into actionable intelligence through advanced algorithms."

The API call for the Text Generation model uses the chat.completions endpoint to create a text response from our prompt.

There are two required parameters for text Generation: model and messages.

For the model, you can check the list of models that you can use on the related model page.

As for messages, we pass a list of dictionaries, each with two keys: role and content. The role key specifies who is sending the message in the conversation. There are 3 different roles: system, user, and assistant.

Using the role in messages, we can help set the model behavior and an example of how the model should answer our prompt.

Let’s extend the previous code example with the assistant role to guide the model. Additionally, we will explore some parameters of the Text Generation model to improve the results.

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Generate me 3 jargons that I can use for my Social Media content as a Data Scientist content creator."},
        {"role": "assistant", "content": "Sure, here are three jargons: Data Wrangling is the key, Predictive Analytics is the future, and Feature Engineering help your model."},
        {"role": "user", "content": "Great, can you also provide me with 3 content ideas based on these jargons?"}
    ],
    max_tokens=150,
    temperature=0.7,
    top_p=1,
    frequency_penalty=0
)

print(completion.choices[0].message.content)

Of course! Here are three content ideas based on the jargons provided:

  1. "Unleashing the Power of Data Wrangling: A Step-by-Step Guide for Data Scientists" — Create a blog post or video tutorial showcasing best practices and tools for data wrangling in a real-world data science project.
  2. "The Future of Predictive Analytics: Trends and Innovations in Data Science" — Write a thought leadership piece discussing emerging trends and technologies in predictive analytics and how they are shaping the future of data science.
  3. "Mastering Feature Engineering: Techniques to Boost Model Performance" — Develop an infographic or social media series highlighting different feature engineering techniques and their impact on improving the accuracy and efficiency of machine learning models.

The resulting output follows the example that we provided to the model. Using the role assistant is useful if we have a certain style or result we want the model to follow.

As for the parameters, here are simple explanations of each parameter that we used:

  • max_tokens: This parameter sets the maximum number of tokens the model can generate (tokens are pieces of words, not whole words).
  • temperature: This parameter controls the unpredictability of the model's output. A higher temperature results in outputs that are more varied and imaginative. The acceptable range is from 0 to 2, with higher values producing more diverse output.
  • top_p: Also known as nucleus sampling, this parameter helps determine the subset of the probability distribution from which the model draws its output. For instance, a top_p value of 0.1 means that the model considers only the top 10% of the probability distribution for sampling. Its values can range from 0 to 1, with higher values allowing for greater output diversity.
  • frequency_penalty: This penalizes repeated tokens in the model's output. The penalty value can range from -2 to 2, where positive values discourage the repetition of tokens, and negative values do the opposite, encouraging repeated word use. A value of 0 indicates that no penalty is applied for repetition.

Lastly, you can change the model output to the JSON format with the following code.

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "You are a helpful assistant designed to output JSON."},
        {"role": "user", "content": "Generate me 3 Jargons that I can use for my Social Media content as a Data Scientist content creator"}
    ]
)

print(completion.choices[0].message.content)

{
  "jargons": [
    "Leveraging predictive analytics to unlock valuable insights",
    "Delving into the intricacies of advanced machine learning algorithms",
    "Harnessing the power of big data to drive data-driven decisions"
  ]
}

The result is in JSON format and adheres to the prompt we input into the model.
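Since the content still comes back as a JSON-formatted string, you will usually want to parse it before using it. A minimal sketch, assuming the response keeps the top-level "jargons" key shown above.

import json

# Parse the JSON-mode response into a Python dictionary
parsed = json.loads(completion.choices[0].message.content)
for jargon in parsed["jargons"]:
    print(jargon)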

For complete Text Generation API documentation, you can check them out on their dedicated page.

OpenAI Image Generations

OpenAI models are not only useful for text generation use cases; we can also call the API for image generation.

Using the DALL·E model, we can generate an image from a text prompt. The simplest way to do this is with the following code.

from openai import OpenAI
from IPython.display import Image

client = OpenAI()

response = client.images.generate(
    model="dall-e-3",
    prompt="White Piano on the Beach",
    size="1792x1024",
    quality="hd",
    n=1,
)

image_url = response.data[0].url
Image(url=image_url)

Image generated with DALL·E 3

For the parameters, here are the explanations:

  • model: The image generation model to use. Currently, the API only supports DALL·E 3 and DALL·E 2 models.
  • prompt: This is the textual description based on which the model will generate an image.
  • size: Determines the resolution of the generated image. There are three choices for the DALL·E 3 model (1024×1024, 1024×1792 or 1792×1024).
  • quality: This parameter influences the quality of the generated image. If generation speed matters, “standard” is faster than “hd.”
  • n: Specifies the number of images to generate based on the prompt. DALL·E 3 can only generate one image at a time. DALL·E 2 can generate up to 10 at a time.

It is also possible to generate a variation of an existing image, although this is only available with the DALL·E 2 model. Note that the API only accepts square PNG images below 4 MB.
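If your source image does not meet those constraints, you can prepare it first. Below is a minimal sketch using Pillow; the library choice and file names are our assumptions, not part of the original workflow.

import os
from PIL import Image as PILImage

# Crop to a centred square, downscale, and save as PNG to satisfy the API limits
img = PILImage.open("white_piano_ori.png")
side = min(img.size)
left = (img.width - side) // 2
top = (img.height - side) // 2
square = img.crop((left, top, left + side, top + side)).resize((1024, 1024))
square.save("white_piano_square.png", format="PNG")

print(f"File size: {os.path.getsize('white_piano_square.png') / 1_000_000:.2f} MB")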

from openai import OpenAI
from IPython.display import Image

client = OpenAI()

response = client.images.create_variation(
    image=open("white_piano_ori.png", "rb"),
    n=2,
    size="1024x1024"
)

image_url = response.data[0].url
Image(url=image_url)

The image might not be as good as the DALL·E 3 generations as it is using the older model.

OpenAI Vision

OpenAI also provides models that can understand image input. This model is called the Vision model, sometimes referred to as GPT-4V. The model is capable of answering questions about an image we give it.

Let’s try out the Vision model API. In this example, I will use the white piano image we generated with the DALL·E 3 model and stored locally. I will also create a function that takes the image path and returns the image description text. Don’t forget to change the api_key variable to your API Key.

import base64
import requests

def provide_image_description(img_path):
    api_key = 'YOUR-API-KEY'

    # Function to encode the image as a base64 string
    def encode_image(image_path):
        with open(image_path, "rb") as image_file:
            return base64.b64encode(image_file.read()).decode('utf-8')

    # Getting the base64 string for the image we want to describe
    base64_image = encode_image(img_path)

    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}"
    }

    payload = {
        "model": "gpt-4-vision-preview",
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": "Can you describe this image?"
                    },
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/jpeg;base64,{base64_image}"
                        }
                    }
                ]
            }
        ],
        "max_tokens": 300
    }

    response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
    return response.json()['choices'][0]['message']['content']

This image features a grand piano placed on a serene beach setting. The piano is white, indicating a finish that is often associated with elegance. The instrument is situated right at the edge of the shoreline, where the gentle waves lightly caress the sand, creating a foam that just touches the base of the piano and the matching stool. The beach surroundings imply a sense of tranquility and isolation with clear blue skies, fluffy clouds in the distance, and a calm sea expanding to the horizon. Scattered around the piano on the sand are numerous seashells of various sizes and shapes, highlighting the natural beauty and serene atmosphere of the setting. The juxtaposition of a classical music instrument in a natural beach environment creates a surreal and visually poetic composition.

You can tweak the text values in the dictionary above to match your Vision model requirements.
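To see the helper in action, you could call it on the locally saved piano image; this usage sketch assumes the file from the earlier example is in your working directory.

# Example call to the helper defined above
description = provide_image_description("white_piano_ori.png")
print(description)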

OpenAI Audio Generation

OpenAI also provides a Text-to-Speech model to generate audio. It’s very easy to use, although the choice of voice narration styles is limited. Also, the model supports many languages, which you can see on the language support page.

To generate the audio, you can use the below code.

from openai import OpenAI

client = OpenAI()

speech_file_path = "speech.mp3"
response = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="I love data science and machine learning"
)

response.stream_to_file(speech_file_path)

You should see the audio file in your directory. Try to play it and see if it’s up to your standard.

Currently, there are only a few parameters you can use for the Text-to-Speech model:

  • model: The Text-to-Speech model to use. Only two models are available (tts-1 or tts-1-hd), where tts-1 is optimized for speed and tts-1-hd for quality.
  • voice: The voice style to use, where all voices are optimized for English. The options are alloy, echo, fable, onyx, nova, and shimmer.
  • response_format: The audio format file. Currently, the supported formats are mp3, opus, aac, flac, wav, and pcm.
  • speed: The generated audio speed. You can select values between 0.25 to 4.
  • input: The text to create the audio. Currently, the model only supports up to 4096 characters.
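As a quick illustration of how the optional parameters fit together, here is a hedged sketch; the voice, speed, and output file name are arbitrary choices of ours, not OpenAI recommendations.

from openai import OpenAI

client = OpenAI()

# Combine the optional parameters described above
response = client.audio.speech.create(
    model="tts-1-hd",
    voice="nova",
    response_format="wav",
    speed=1.25,
    input="I love data science and machine learning"
)

response.stream_to_file("speech_hd.wav")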

OpenAI Speech-to-Text

OpenAI also provides models to transcribe and translate audio data. Using the Whisper model, we can transcribe audio in any supported language into text and translate it into English.

Let’s try a simple transcription from the audio file we generated previously.

from openai import OpenAI

client = OpenAI()

audio_file = open("speech.mp3", "rb")
transcription = client.audio.transcriptions.create(
    model="whisper-1",
    file=audio_file
)

print(transcription.text)

I love data science and machine learning.

It’s also possible to translate audio files into English. The model cannot yet translate into any other language.

from openai import OpenAI

client = OpenAI()

audio_file = open("speech.mp3", "rb")
translate = client.audio.translations.create(
    model="whisper-1",
    file=audio_file
)

# Print the translated (English) transcript
print(translate.text)

Conclusion

We have explored several of the model services that OpenAI provides: Text Generation, Image Generation, Vision, Text-to-Speech, and Speech-to-Text. Each model has its own API parameters and specifications that you need to learn before using it.

Cornellius Yudha Wijaya is a data science assistant manager and data writer. While working full-time at Allianz Indonesia, he loves to share Python and data tips via social media and writing media. Cornellius writes on a variety of AI and machine learning topics.


This Summer, Expect GPT-5 and Llama 3 to Heat Up the LLM Race

The LLM race has indeed heated up, with many models now reaching the capabilities of GPT-4. Cohere’s latest open-source model, Command R+, recently climbed to the 6th spot on the LMSYS leaderboard, reaching GPT-4-level performance based on over 13,000 human votes.

Things are really heating up in AI:
* New @MistralAI 7x22B MoE (170B) model just came out – we'll see how it performs over the next few weeks!
* @cohere released Command R+, by far the best public (non-commercial use-case only) LLM judging by the lmsys benchmark.
* New GPT-4… pic.twitter.com/NcjHQ2rCtq

— Aleksa Gordić 🍿🤖 (@gordic_aleksa) April 10, 2024

Anthropic’s Claude 3 Opus outperforms GPT-4 on common benchmarks like MMLU and HumanEval. Meanwhile, Elon Musk has announced that xAI’s next model, Grok-2, will begin training in May and is expected to surpass GPT-4. Most recently, Mistral has introduced its latest model, Mixtral 8x22B.

Apparently the new Mistral model beats Claude Sonnet and is a tad bit worse than GPT-4
In a couple of months, the open source community will fine tune it to beat GPT-4
This is a fully open weights model with an Apache 2 license! I can’t believe how quickly the OSS community…

— Bindu Reddy (@bindureddy) April 10, 2024

On the other hand, Google’s Gemini 1.5, which features the longest context window at 1 million tokens, is now available in 180+ countries via the Gemini API in public preview. It also includes native audio (speech) understanding and a new File API for simplified file handling.

Apple is not left behind either. Its latest LLM, ReALM, reportedly matches the performance of OpenAI’s GPT-4.

Hey, look, who’s catching up?

It’s quite uncommon for OpenAI to be playing catch-up with new models emerging in the market. While GPT-4 maintained its top position for the past year, it has now lost its lead for the first time.

At the recent Google Next 24, Google enhanced several capabilities of Gemini 1.5 Pro, including better system instruction and JSON mode. Shortly after, OpenAI also announced GPT-4 Turbo with Vision, which has ‘improved reasoning capabilities.’

OpenAI’s release of GPT-4 Turbo with Vision is definitely a stopgap measure to ensure the hottest AI startup stays relevant.

“The new GPT-4 definitely feels better at coding. It is less lazy and more willing to write code. I was able to give it a few files, and it wrote perfect code (which was very uncommon before),” wrote Sully Omar, founder of Cognosys, on X.

Altman himself feels that while GPT-4 is great, it’s time for the company to introduce a new model that is far better than GPT-4. “I think it sucks,” said Altman regarding GPT-4 in a recent interview with Lex Fridman. “I expect that the delta between five and four will be the same as between 4 and 3,” he added.

He further said that the company will release GPT-5 in the ‘coming months,’ adding that OpenAI has more important things to release before GPT-5. “Before we talk about a GPT-5-like model… I know we have a lot of other important things to release first,” said Altman.

In the episode of Unconfuse Me with Bill Gates, Altman also spoke at length with Gates about how GPT-5 would emphasise customisation and personalisation. “The ability to know about you, your email, your calendar, how you like appointments booked, connected to other outside data sources—all of that. Those will be some of the most important areas of improvement,” said Altman.

Furthermore, he claimed that GPT-5 would have much better reasoning capabilities. “GPT-4 can reason in only extremely limited ways. Also, reliability is a concern. If you ask GPT-4 most questions 10,000 times, one of those 10,000 is probably pretty good, but it doesn’t always know which one. You’d like to get the best response of 10,000 each time,” said Altman.

Meta’s Llama 3 is Around the Corner

While everyone is excitedly waiting for GPT-5, Meta is quietly working on Llama 3. During a recent event in London, Meta announced plans for an initial launch of Llama 3 within the next month. The company did not disclose the size of the parameters used in Llama 3, but it’s expected to have about 140 billion parameters.

According to recent reports, Meta is planning to launch two smaller versions of Llama 3 next week. These smaller models are expected to serve as a precursor to the launch of the largest version of Llama 3, anticipated this summer.

“Can’t wait to start playing with the 7B version of Llama-3. It will be a HUGE winner if it can beat Claude Haiku. It will also give us a huge clue, if the big Llama-3 model beats Claude Opus,” wrote Bindu Reddy, Abacus AI chief, on X.

Llama 3 is expected to be multimodal, capable of understanding and generating both text and images. Additionally, it is expected to have enhanced reasoning skills.

Meta researchers are working to ensure that Llama 3 can handle controversial and tricky questions, a capability that Llama 2 lacked. They are enhancing Llama 3 to engage users effectively, providing context and addressing difficult questions instead of avoiding them.

Meta is planning to integrate its new AI model into WhatsApp and its Ray-Ban smart glasses as well.

The company is planning to launch Llama 3 in a range of model sizes suitable for various applications and devices. “There will be a number of different models with different capabilities and different versatility [released] during the course of this year, starting really very soon,” said Chris Cox, Meta’s Chief Product Officer.

Earlier this year, Meta chief Mark Zuckerberg announced that Meta is training Llama 3 using a massive compute infrastructure. The company plans to procure 350k H100s by the end of this year, with an overall total of almost 600k H100-equivalents of compute if other resources are included.

With both OpenAI and Meta planning to launch their new models this summer, temperatures are bound to rise.
