This Week in AI: OpenAI considers allowing AI porn

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own. By the way, TechCrunch plans to launch an AI newsletter […]

© 2024 TechCrunch. All rights reserved. For personal use only.

Top 10 AlphaFold 3 Alternatives to Try in 2024

Alphafold 3 Alternative

What is AlphaFold 3?

On 8 May 2024, Google DeepMind and Isomorphic Labs unveiled the third generation of their protein folding model, AlphaFold 3. The new AI model predicts the structure and interactions of all of life’s molecules, including proteins, DNA, RNA, and ligands, with at least a 50% improvement in accuracy over existing methods for protein interactions with other molecule types, making it the first AI system to surpass physics-based tools for biomolecular structure prediction.

However, alongside AlphaFold 3, which is now available for non-commercial research, several established alternatives remain popular. These models underpin parts of the drug discovery process and have other important applications across the life sciences. Let’s take a look at the top alternatives to AlphaFold 3 in 2024.

Top Alternatives to AlphaFold 3 in 2024

  1. RoseTTAFold by Baker Lab
  2. OmegaFold by Helixon
  3. I-TASSER by Yang Zhang
  4. Phyre2 by Lawrence Kelley
  5. ESMFold by Meta AI
  6. SWISS-MODEL by Torsten Schwede
  7. Robetta by Baker Lab
  8. HHpred by the Max Planck Institute for Developmental Biology
  9. AlphaFold-Multimer by Google DeepMind
  10. ColabFold by Milot Mirdita, Sergey Ovchinnikov, and Martin Steinegger

1. RoseTTAFold

Creator: Baker Lab at the University of Washington; development led by Minkyung Baek, Ph.D.

Pros:

  • Utilises a “three-track” neural network architecture that processes one-, two-, and three-dimensional information about proteins simultaneously.
  • Designed for rapid protein structure prediction, capable of computing structures in minutes on standard computing equipment.
  • Integrates multiple data types (sequence, inter-residue interactions, and 3D structure) within a single neural network.
  • Has been successfully applied to predict numerous protein structures, including poorly understood proteins and those directly linked to human health issues such as cancer and inflammation.
  • Provides tools for modelling complex biological assemblies, enhancing understanding of multifaceted biological systems.
  • Quick prediction times make it accessible for widespread use in both academic and clinical settings.
  • Open-source and available through GitHub.

Cons:

  • While highly accurate, it may not reach AlphaFold’s precision level in all scenarios, particularly with extremely complex proteins.

Training Method:

  • RoseTTAFold is trained using both protein sequence data and structural data, allowing it to predict protein structures and their interactions effectively.
  • Uses a combination of deep learning techniques and traditional bioinformatics methods to enhance prediction accuracy.

2. OmegaFold

Creator: Helixon

Pros:

  • Predicts protein structures from a single primary sequence.
  • Uses a protein language model and a geometry-inspired transformer model for predictions.
  • Suitable for orphan proteins and fast-evolving proteins.
  • Does not rely on multiple sequence alignments (MSAs), unlike other models.
  • Less dependent on extensive evolutionary data.
  • Broad applicability to various protein types.
  • Shows comparable accuracy to AlphaFold and RoseTTAFold on benchmark datasets like CASP and CAMEO.

Cons:

  • May face challenges in achieving consistent accuracy across all types of proteins compared to models like AlphaFold.

Training Method:

  • Trained on unaligned and unlabeled protein sequences.
  • Uses deep transformer-based models to learn residue representations.

Learn more: AlphaFold vs OmegaFold

3. I-TASSER

Creator: Yang Zhang and his team at the University of Michigan.

Pros:

  • Uses an iterative threading assembly refinement approach for protein structure prediction.
  • Capable of function prediction through structure-based annotations.
  • Provides up to five full-length atomic models, ranked by cluster density, with accuracy estimates including TM-scores and RMSD.
  • Produces comprehensive outputs, including predicted models, secondary structures, solvent accessibility, and functional annotations.
  • Known for high accuracy in structure prediction, as demonstrated in successive CASP competitions.
  • Generates multiple models, allowing selection based on confidence scores.

Training Method:

  • Employs multiple threading approaches to identify structural templates from known protein data.
  • Constructs full-length atomic models using iterative template-based fragment assembly simulations.
  • Uses a meta-server threading approach, LOMETS, for template identification.
To learn more, visit I-TASSER.

4. Phyre2

Creator: Lawrence Kelley’s team at the Structural Bioinformatics Group, Imperial College London.

Pros:

  • Employs “one to one threading” which allows users to model a sequence against a specific template of their choice, enhancing the accuracy when additional biological information is available.
  • Includes tools like “BackPhyre” for scanning existing structures against genomes, and “Phyrealarm” for ongoing matching against newly added structures in the database.
  • Integrated with “3DLigandSite” for high-accuracy binding site prediction.
  • Provides a user-friendly web interface that is accessible to researchers without deep computational expertise.
  • Offers a range of predictive and analytical tools that go beyond simple structure prediction.

Cons:

  • While Phyre2 is highly effective for many common protein modelling tasks, its reliance on existing templates may limit its effectiveness for highly novel or poorly characterised proteins compared to methods like AlphaFold, which can predict structures without clear homologous templates​​.

Training Method

  • Phyre2 uses advanced homology detection methods to model protein structures based on their alignment with known structures. It leverages a combination of hidden Markov models and heuristics to enhance sequence coverage and model confidence.

Explore more about Phyre2

5. ESMFold

Creator: Meta AI

Pros:

  • ESMFold is based on a 15 billion parameter Transformer model and does not rely on multiple sequence alignments (MSAs), differing from models like AlphaFold2 which require MSAs.
  • It can make predictions directly from amino acid sequences, significantly speeding up the inference process.
  • ESMFold achieves similar accuracy levels to state-of-the-art models but is significantly faster, predicting structures up to 60 times faster than AlphaFold2 for certain sequences.
  • The model was also designed to handle large-scale structure predictions efficiently, capable of predicting structures for one million protein sequences in less than a day.
  • Does not require external databases or MSAs, simplifying the protein folding prediction process.

Cons:

  • Since Meta disbanded the team behind ESMFold, new features are unlikely anytime soon.

Training Method:

  • ESMFold uses a Transformer-based protein language model, ESM-2, which learns interactions between pairs of amino acids in a protein sequence.

Explore ESMFold on GitHub

6. SWISS-MODEL

Creator: Torsten Schwede and his team at the Biozentrum of the University of Basel and the Swiss Institute of Bioinformatics.

Pros:

  • Provides a user-friendly, web-based platform for automated comparative protein structure modelling.
  • Integrates tools for structure assessment and comparison, such as QMEAN for model quality estimation.
  • Allows users to explore structural templates interactively and visualise them in 3D within the browser.
  • Free for academic use and supports a wide range of functionalities beyond basic modelling.
  • Integrated with major biological databases and bioinformatics tools, enhancing its utility in research.
  • SWISS-MODEL is specifically designed for ease of use, allowing even non-experts to perform protein modelling.
  • It supports a wide range of functionalities, including modelling homo-oligomeric assemblies and incorporating ligands into the models.

Cons:

  • While highly effective for known protein families, its accuracy may decrease for proteins with less characterised or more distant homologues.

Training Method

  • Utilises homology modelling, relying on evolutionary information to predict protein structures by identifying and using known protein structures as templates.
  • Employs algorithms to find the best match between the target sequence and available templates, optimising the alignment to predict the structure.
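The template-ranking idea can be illustrated with a toy percent-identity scorer (this is not SWISS-MODEL’s actual algorithm, which uses far more sophisticated profile-based scoring; sequences here must already be aligned):

```python
def percent_identity(query, template):
    """Percent identity between two pre-aligned, equal-length sequences
    ('-' marks a gap); columns where either sequence has a gap are skipped."""
    if len(query) != len(template):
        raise ValueError("sequences must be aligned to the same length")
    pairs = [(q, t) for q, t in zip(query, template) if q != "-" and t != "-"]
    if not pairs:
        return 0.0
    return 100.0 * sum(q == t for q, t in pairs) / len(pairs)

def best_template(query, templates):
    """Pick the (name, sequence) template with the highest identity to the query."""
    return max(templates, key=lambda nt: percent_identity(query, nt[1]))
```

In practice, homology modellers combine identity with alignment coverage and model-quality estimates (such as QMEAN) when choosing a template.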

Explore more about SWISS-MODEL

7. Robetta

Creator: Baker Lab at the University of Washington.

Pros:

  • Offers both template-based and de novo protein structure prediction.
  • Users can input custom sequence alignments, apply constraints, and utilise local fragments in their modelling tasks.
  • Includes RoseTTAFold, enhancing its prediction accuracy and speed.
  • Robetta allows for user-interaction in the modelling process, offering customisation that automated systems like AlphaFold do not typically allow.
  • It integrates machine learning techniques with traditional comparative modelling approaches.

Cons:

  • Can experience long wait times due to the high demand and computational intensity of deep learning methods.
  • The accuracy can vary based on the availability and quality of templates or the effectiveness of the de novo modelling.

Training Method:

  • It utilises the Rosetta software suite’s tools, combining methods from comparative (homology) modelling and de novo structure prediction.
  • It has incorporated deep learning methods, specifically RoseTTAFold, which uses a three-track network for structure prediction.

Explore more about Robetta

8. HHpred

HHpred is a sophisticated bioinformatics tool for protein homology detection and structure prediction, developed at the Max Planck Institute for Developmental Biology.

Creator: Max Planck Institute for Developmental Biology.

Pros:

  • Implements pairwise comparison of profile hidden Markov models (HMMs), making it highly effective for detecting remote homologs.
  • Capable of searching a vast range of databases including PDB, SCOP, Pfam, SMART, COGs, and CDD.
  • HHpred is unique in its use of HMMs for both the query and database sequences, allowing for more sensitive detection of homologies than methods based on sequence-sequence comparisons.
  • Provides detailed alignments and the option to predict 3D structures via MODELLER if a suitable template is found.
  • Highly sensitive in detecting homology, even among distantly related proteins.
  • Integrates with multiple databases and allows comprehensive analysis across different data sources.

Cons:

  • While powerful, the complexity of its setup and the need for specific alignments may pose challenges for less experienced users.

Training Method

  • Leverages profile-profile comparison methods, which are among the most sensitive sequence search techniques.
  • Profiles are created from multiple sequence alignments of related sequences, enhancing the accuracy of homology detection.
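The profile idea can be sketched in a few lines: collapse a multiple sequence alignment into per-column residue frequencies. Real profile HMMs, such as those HHpred compares, add gap states, pseudocounts, and sequence weighting on top of this:

```python
from collections import Counter

def column_frequencies(msa):
    """Per-column residue frequencies from a list of aligned, equal-length
    sequences -- the raw material from which sequence profiles are built."""
    n_seqs = len(msa)
    n_cols = len(msa[0])
    columns = ([seq[i] for seq in msa] for i in range(n_cols))
    return [
        {res: count / n_seqs for res, count in Counter(col).items()}
        for col in columns
    ]
```

Comparing two such profiles column by column (rather than raw sequences) is what makes profile-profile methods so sensitive to remote homology.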

Explore more about HHpred

9. AlphaFold-Multimer

Creator: Google DeepMind

Pros:

  • Built on AlphaFold2, it is engineered to tackle the complex prediction of protein-protein interactions, which involves understanding how multiple protein chains fit together.
  • Unlike its predecessor, which primarily predicts the structure of individual protein chains, AlphaFold-Multimer predicts the inter-chain interactions and the arrangement of proteins in a complex.
  • It achieves high accuracy in interface prediction, which is critical for functional analysis of proteins within their biological context.
  • Increases the scope of computationally accessible protein structure prediction to include complex assemblies.

Cons:

  • The computational demand is high, possibly limiting accessibility for some researchers without access to significant computing resources.

Training Method:

  • AlphaFold-Multimer uses deep learning algorithms trained on publicly available data of known protein structures. This includes training specifically for multimeric inputs to enhance the accuracy of interface predictions between different protein chains.

Explore more about AlphaFold-Multimer

10. ColabFold

Creator: Milot Mirdita, Sergey Ovchinnikov, and Martin Steinegger

Pros:

  • Integrates AlphaFold2 and RoseTTAFold with MMseqs2 for fast multiple sequence alignment, significantly accelerating protein structure predictions.
  • Operates as an easy-to-use, notebook-based environment on Google Colab, making advanced protein modeling accessible without requiring installation or high-end hardware.
  • Capable of predicting close to a thousand structures per day on a single GPU.
  • Unlike standalone AlphaFold2 which requires more extensive computational resources, ColabFold optimizes resource use via Google Colab, making it accessible to a broader audience.
  • Its integration with MMseqs2 speeds up the homology search process, making it much faster than traditional methods.

Training Method:

  • Utilises existing trained models from AlphaFold2 and RoseTTAFold and combines them with MMseqs2 for rapid sequence alignment and improved prediction accuracy.

Cons:

  • Dependent on Google Colab’s resources, which can limit the size of proteins analyzed due to memory constraints on available GPUs.
  • While it offers significant speed and accessibility, the precision for extremely complex structures might still lag behind more resource-intensive setups.

Explore more about ColabFold

The post Top 10 AlphaFold 3 Alternatives to Try in 2024 appeared first on Analytics India Magazine.

How AI is Revolutionizing Legacy Industries

Impact of AI in every industry
Source: Canva

AI has shown immense potential in solving problems once assumed to be the stuff of science fiction. It has entered our lives almost seamlessly: facial recognition unlocks our phones, and voice assistants like Alexa and Siri enrich everyday experiences with voice-enabled technology. AI is truly ubiquitous. Let’s explore the various ways it is impacting the tech industry.

This post highlights some of the pathbreaking AI innovations made possible in the world of technology.

Recommendations

We are living in a fascinating world of hyper-personalization. Based on historical purchases and user characteristics, we are shown not just advertisements but also news feeds that reflect our interests. Open up social media and your entire experience is tailored to your preferences, from the content you see to the recommended list of people you should connect with.

It can feel overwhelming, but it amounts to a mini-world curated to serve you, and that curation extends to echo chambers. Hyper-personalization has created filter bubbles, making users less receptive to diverse world-views and reinforcing their existing beliefs.
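The mechanics of such recommendations can be sketched with a toy content-based recommender. Real systems use learned embeddings and behavioural signals; the tag-overlap scoring and item names below are invented for illustration:

```python
def recommend(history, catalog, k=2):
    """Rank unseen catalog items by how many tags they share with the
    user's history -- a minimal content-based recommender."""
    liked_tags = {tag for item in history for tag in item["tags"]}
    seen = {item["title"] for item in history}
    scored = sorted(
        ((len(liked_tags & set(item["tags"])), item["title"])
         for item in catalog if item["title"] not in seen),
        reverse=True,
    )
    return [title for score, title in scored[:k] if score > 0]
```

Recommending only what overlaps with past behaviour is also exactly how filter bubbles arise: items outside the user’s existing tags never score above zero.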

Note-taking and Summarization

As if attending meetings were not enough, you previously had to take notes yourself. Welcome to the world of AI, which has taken digitalization to a whole new level: it spares you the need to create, collect, or curate notes from your meetings.

AI summarizes meetings, takes notes, states the action items, and highlights insights. Otter.ai is one such application, with the ability to assist during meetings and keep you updated on project developments.
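The simplest extractive summarizers score sentences by the frequency of the words they contain. A toy version of that idea follows (commercial tools like Otter.ai use far more capable speech and language models):

```python
import re
from collections import Counter

def top_sentences(text, n=1):
    """Return the n sentences whose words are most frequent across the
    whole text -- the crudest form of extractive summarization."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    return sorted(sentences, key=score, reverse=True)[:n]
```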

Meeting GenAI
Source: otter.ai

Vision-ary

Quality Control

Moving on from recommendations and automated summarizations, AI now also serves as your quality control assistant, detecting imperfections like wear and tear or discoloration in fabrics, and conducting thorough quality assessments of raw materials like vegetables for farmers or wood for furniture.

Healthcare

Continuing with computer vision applications, AI is an extremely powerful tool to help analyze medical images, such as X-rays and CT scans to diagnose the underlying health condition of the patient. Such advanced algorithms are adept in identifying patterns and anomalies, thereby assisting healthcare professionals in speeding up the diagnosis process and enhancing patient outcomes.

It is worthwhile noting that the use of AI in healthcare qualifies under the high-risk and high-impact categories, hence the probabilistic output of AI models should only be considered as a point of augmentation. The false positives and false negatives could imply misdiagnosis, which if acted upon without healthcare professionals’ supervision, can significantly impact the lives of patients.

Prevention is Always Better

Fraud detection

AI-powered fraud detection systems have proved to be more accurate in predicting potential fraud, as compared to the traditional signature-based techniques. With the increased use of digital banking and online payments, a minor oversight in detecting fraud can cause huge losses to customers, damaging the reputation of online platforms.
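The shift from fixed rules to statistical scoring can be illustrated with a crude z-score check on transaction amounts. Real fraud systems use supervised models trained on labelled fraud data across many features; this heuristic and its threshold are purely illustrative:

```python
import statistics

def flag_suspicious(amounts, threshold=2.0):
    """Flag transaction amounts more than `threshold` standard deviations
    from the mean -- a toy statistical stand-in for a learned fraud score."""
    mean = statistics.mean(amounts)
    spread = statistics.pstdev(amounts)
    if spread == 0:
        return []
    return [a for a in amounts if abs(a - mean) / spread > threshold]
```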

Fraud Prevention
Source: Canva

Quoting statistics from the Infosys BPM, “Cybercrime costs the world economy $600 billion annually, which is 0.8% of the global GDP. Studies show that in the first quarter of 2021 alone, fraud attempts rose 149% over the previous year – fuelled, no doubt, by the post-Covid increase in online transactions. In response, more than half of all financial institutions have stepped up to employ AI to detect and prevent fraud in 2022”.

Predictive maintenance

Be it the asset-heavy manufacturing industry or machinery in general, downtime is inevitable. Reacting to downtime only as it occurs often finds companies unprepared, leading to extended repair times that are costly in terms of lost production, lost revenue, and the additional expense of repairing or replacing equipment.

That is where the predictive power of AI comes into the picture. It analyzes vast amounts of data to detect patterns and anomalies indicating potential equipment failures.
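The core mechanic can be sketched with a naive drift check on sensor readings. Production systems model specific failure modes with trained models over many signals; the window and tolerance below are invented for illustration:

```python
def needs_maintenance(readings, window=5, tolerance=0.2):
    """True if the latest sensor reading exceeds the average of the
    preceding `window` readings by more than `tolerance` (fractional) --
    a toy 'schedule maintenance before failure' signal."""
    if len(readings) <= window:
        return False
    baseline = sum(readings[-window - 1:-1]) / window
    return readings[-1] > baseline * (1 + tolerance)
```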

Predictive maintenance
Source: Deloitte

Deloitte succinctly highlights the need to minimize and manage the downtime through proactive maintenance leveraging AI – “Whether the concern is cascading damage to the wider system, the quality of products, the safety of the process and facility, or other consequences from aging or failing assets, it may be important to build the capacity to help predict asset failure and help prevent it from occurring in the first place.”

Access to Information

Smart assistants

Smart assistants have made finding information easier than ever. Users no longer need to scramble for answers; information is available at their fingertips. This accessibility has reduced the need for manual searching and has bypassed traditional customer support channels for many routine inquiries. These assistants have opened up new frontiers of efficiency, saving users’ time and enhancing productivity.

Smart home assistants
Source: Canva

As a result, users can focus on more strategic and meaningful activities. However, such convenience does come with the risk of data privacy, security, or potential misuse of personal information. Hence, users must check privacy settings and be cautious about sharing information with AI systems.

The Other Side of Automation

While we could go on about the efficiency gains from automation, the other side of the coin is workforce displacement. Repetitive tasks are the first to fall to AI automation, pushing workers to constantly upskill and stay relevant in a changing job market.

Fields requiring emotional intelligence and creativity are still outside AI’s strengths and could attract more talent. That said, the development of advanced AI models has also opened up many new roles: responsible AI experts, policymakers, AI ethicists, and cybersecurity specialists are all in high demand these days.

And There is a Lot More!

Beyond these, a whole host of applications leverage AI to make human lives easier and enable business growth. Customer churn prediction, credit risk default models, sentiment analysis of user reviews, automated data cleaning, customer segmentation, and tailor-made content revamping education are just some of the ways AI is making a lasting impact.

While the upside potential is unlimited, users must be aware enough to make appropriate trade-offs when using such AI-powered applications. By striking a balance between innovation and responsible development and use, AI can unlock unprecedented opportunities, shaping a brighter future for all of humanity.

Vidhi Chugh is an AI strategist and a digital transformation leader working at the intersection of product, sciences, and engineering to build scalable machine learning systems. She is an award-winning innovation leader, an author, and an international speaker. She is on a mission to democratize machine learning and break the jargon for everyone to be a part of this transformation.


Perplexity AI Partners with SoundHound AI to Bring LLM-Powered Voice Assistants Across Cars and IoT Devices

Perplexity AI

SoundHound AI, a voice artificial intelligence company, has partnered with Perplexity AI, a conversational AI powered answer engine, to integrate Perplexity’s online LLM capabilities into SoundHound Chat AI.

This partnership will enable Perplexity AI to extend voice assistant services to cars, TVs, and other IoT devices and enable SoundHound Chat AI to provide accurate, up-to-date responses to web-based queries that static LLMs cannot currently answer.

For example, users can ask questions like “How does the price of gas this week compare to last week?” and receive a response that combines accurate, live information on gas prices with a comprehensive generative AI-style explanation.

“Where this technology is already deployed in vehicles, we’re seeing usage habits shift substantially – increasing by multiples. We’re confident that this enhancement will further delight users as more and more people choose to talk rather than type,” said Mike Zagorsek, Chief Operations Officer at SoundHound AI.

We all spend a lot of time on the road. And are curious about local restaurants, trivia about the place, gas prices, weather, etc. or curious about the cast when watching a movie. Here’s a preview of what’s to come to help us get answers to all these questions.

— Aravind Srinivas (@AravSrinivas) May 9, 2024

“Through this integration with SoundHound’s Chat AI assistant, we’re one step closer to our goal of making Perplexity available to everyone across every device they use,” said Dmitry Shevelenko, Chief Business Officer at Perplexity AI.

This development comes on the heels of OpenAI planning to launch a search engine soon, which would be able to provide real-time information much like Perplexity does.

Perplexity AI recently raised $63 million at a $1 billion valuation, led by Daniel Gross and others, including Jeff Bezos and NVIDIA. The funding will support Perplexity’s growth and expansion, as well as the launch of Enterprise Pro, a new product designed to enhance security and privacy features for business environments.

The post Perplexity AI Partners with SoundHound AI to Bring LLM-Powered Voice Assistants Across Cars and IoT Devices appeared first on Analytics India Magazine.

TATA AIG is Building an LLM-Powered WhatsApp Chatbot for Customers 

Tata AIG

TATA AIG, one of the country’s leading insurance providers, is building a large language model (LLM)-powered chatbot for customers and plans to release it soon via WhatsApp.

However, given that hallucinations with LLMs remain an unsolved problem, TATA AIG is taking a cautious approach. For now, the insurer has launched an internal AI chatbot, which its customer service agents are currently using.

According to Krishnan Badrinath, SVP of information technology at TATA AIG, they are among the first insurers in the general insurance space in the country to launch a Generative AI use case.

“Currently, we’ve achieved an accuracy rate of over 80%, and our goal is to further enhance this accuracy before delivering it to our customers through WhatsApp,” he said.

TATA AIG’s AI chatbot is powered by multiple LLMs, both closed- and open-source. “Cost efficiency is crucial, so for simpler tasks, we opt for open-source LLMs. But for complex tasks requiring high accuracy and processing power, we utilise the GPT models by OpenAI or Google’s Gemini.”

Further, Badrinath revealed that the company has established an enterprise layer to select the appropriate service strategically. “For instance, out of 20 processes, 15 may use open LLMs, while the remaining 5 utilise GPT models or Gemini for more intricate tasks,” he added.

With thousands of open-source and proprietary models evolving rapidly, the TATA AIG team consistently evaluates these options. It selects models based on thorough assessments to determine the best fit for their specific use cases.

“We maintain a continuous feedback loop, assessing the accuracy of each model based on the specific dataset and other scenarios. This mechanism allows us to adjust and switch between models as needed, including Panther, Llama, and others currently in use,” Ritesh Pandey, head of the Innovations Lab at TATA AIG, told AIM.

Challenges Galore

Pandey further recognised the obstacles of utilising LLMs in the insurance sector and emphasised the challenge of fully deploying an autonomous solution without human intervention.

He believes the regulatory nature of the industry instils scepticism and caution in embracing full autopilot modes. Instead, many companies aim for a copilot approach where algorithms handle the bulk of tasks, with human oversight for validation and feedback.

Badrinath noted that LLM hallucination is one of the key reasons for TATA AIG’s initial internal testing of LLMs. “We’ve begun the process from within, ensuring our agents provide feedback on the responses generated by the chatbot,” he added. Moreover, a dedicated team at TATA AIG reviews each response to gauge its confidence level.

Nonetheless, the company has taken a proactive approach towards AI. Tata AIG embarked on its AI journey well before the Generative AI surge. The insurer has been using AI to analyse the sentiment of received emails. It has also developed an AI-based self-inspection module for customers to assess their performance independently.

“We have trained 150 vision models which identify different vehicle parts and help customers carry out the self-assessment. We have also employed AI during live video streaming to extract pertinent information and evaluate risks associated with the business or property being inspected,” said Badrinath.

Reducing Hallucinations

While generative AI does bring value to the larger banking, financial services and insurance (BFSI) industry, hallucinations in LLMs remain an unsolved problem.

A reputable insurer like TATA AIG would not want a chatbot to disseminate nonsensical or inaccurate financial information, as doing so could harm customer trust, regulatory compliance, and financial stability.

TATA AIG said it is actively working to minimise hallucinations and enhance confidence in its internal chatbot. “By specifying parameters like the temperature, which controls the model’s creativity, hallucinations can be minimised. In our case, accuracy is paramount, even if the model occasionally cannot provide an answer, but it should not provide incorrect answers.

“Therefore, maintaining a low temperature ensures that the model stays within the prescribed search space, limiting hallucinations and aligning with our use case requirements,” Pandey said.
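As a sketch of that temperature guardrail, here is a hypothetical request builder in the style of common chat-completion APIs. The model name, field layout, and prompt wording are illustrative assumptions, not TATA AIG’s implementation:

```python
def grounded_request(question, policy_excerpt, model="gpt-4o"):
    """Build a chat-completion payload pinned to temperature 0
    (near-deterministic decoding) and instructed to abstain rather than guess."""
    return {
        "model": model,
        "temperature": 0.0,  # low temperature keeps the model on-script
        "messages": [
            {
                "role": "system",
                "content": "Answer only from the excerpt provided. "
                           "If the answer is not present, reply: I don't know.",
            },
            {
                "role": "user",
                "content": f"Excerpt:\n{policy_excerpt}\n\nQuestion: {question}",
            },
        ],
    }
```

Pinning temperature to 0 trades creativity for consistency, which matches the accuracy-first stance described above.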

Recently, ServiceNow reduced hallucination in structured outputs through Retrieval-Augmented Generation (RAG), enhancing LLM performance and enabling out-of-domain generalisation while minimising resource usage.
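The RAG pattern curbs hallucination by grounding the prompt in retrieved documents. A minimal sketch follows; retrieval here is naive word overlap standing in for embedding search, and the passages are invented:

```python
def retrieve(query, passages, k=2):
    """Rank passages by word overlap with the query -- a stand-in for
    embedding-based retrieval."""
    q_words = set(query.lower().split())
    return sorted(
        passages,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query, passages, k=2):
    """Assemble a prompt that grounds the model in retrieved context."""
    context = "\n".join(f"- {p}" for p in retrieve(query, passages, k))
    return (
        "Answer using ONLY the context below. "
        "If the answer is not there, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
```

Because the model is told to answer only from the supplied context, answers can be traced back to source documents instead of the model’s parametric memory.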

Another technique is Chain-of-Verification (CoVe) by Meta AI. This method reduces hallucination in LLMs by breaking down fact-checking into manageable steps, enhancing response accuracy and aligning with human-driven fact-checking processes.

Then there is prompt engineering. Marc Andreessen recently said that, with the right prompting, you can unlock the latent super genius in AI models. “Prompting crafts in many different domains such that you’re kind of unlocking the latent super genius,” he added.

What’s next?

TATA AIG is not alone in the generative AI race. Other insurers in India are also exploring generative AI use cases. For example, Bajaj Allianz General Insurance launched a new AI chatbot called Insurance Samjho, which helps customers better understand their policies. Similarly, Aditya Birla Sun Life Insurance partnered with Artivatic.ai last year to use LLMs for underwriting insurance.

As we advance, TATA AIG looks to leverage LLMs to solve more complex business problems, from analytics to servicing, fraud prevention and fraud detection.

Badrinath said generative AI could be used for forecasting as well. Given that TATA AIG has products across multiple verticals, such as motor, health, travel, personal accident, personal lines, and extended warranty, LLMs could forecast business numbers for the next six months.

“Businesses today are sitting on a plethora of data, but currently, this forecasting is manual and channel-specific. With LLMs, a simple query in English, such as ‘What are my motor business projections for the next six months?’, businesses can get the data instantly,” Badrinath said.

Moreover, he believes generative AI will be crucial in the complex process of new business ratings within the general insurance industry.

“Unlike life insurance, which typically has a limited range of products, we deal with many categories. This complexity extends to the rating systems, where rule engines are heavily relied upon. We anticipate that Generative AI will eventually replace these rule engines, streamlining and enhancing the rating process,” concluded Badrinath.

The post TATA AIG is Building an LLM-Powered WhatsApp Chatbot for Customers appeared first on Analytics India Magazine.

5 Steps to Learn AI for Free in 2024

5 steps to learn AI for free with courses from Harvard, Google, and Amazon.
Image by Author

Why Should You Learn AI in 2024?

The demand for AI professionals is going to grow exponentially in the next few years.

As companies begin to integrate AI models into their workflows, new roles will emerge, like that of an AI engineer, AI consultant, and prompt engineer.

These are high-paying professions, commanding annual salaries between $136,000 and $375,000.

And since this field has only just started gaining widespread traction, there has never been a better time to enter the job market equipped with AI skills.

However, there is just too much to learn in the field of AI.

There are new developments in the industry almost every day, and it can feel impossible to keep up with these changes and learn new technologies at such a fast pace.

Fortunately, you don’t have to.

There is no need to learn about every new technology to enter the field of AI.

You just need to know a few foundational concepts that you can then build upon to develop AI solutions for any use case.

In this article, I will give you a 5-step AI roadmap made up of free online courses.

This framework will teach you foundational AI skills — you will learn the theory behind AI models, how to implement them, and how to develop AI-driven products using LLMs.

And the best part?

You will learn all these skills from some of the best institutions in the world, like Harvard, Google, Amazon, and DeepLearning.AI at no cost.

Let’s get into it!

Step 1: Learn Python

Today, there are dozens of low-code AI tools available in the market, which allow you to develop AI applications without any programming knowledge.

However, I still recommend learning the basics of at least one programming language if you’re serious about getting started with AI. And if you are a beginner, I suggest starting with Python.

Here’s why:

  • Versatility and control: No-code tools are often restricted in the types of applications you can build. With these tools, you are confined to the capabilities available within a paid platform.

    You also don’t have any knowledge of what goes on behind the models you are building, which can lead to issues with transparency and control.

  • Wide range of libraries: Python has a ton of libraries that are specifically designed for AI and machine learning.
    It also allows for integrations with databases, web applications, and data processing pipelines, which gives you the flexibility to build an end-to-end AI solution without any restrictions.
  • Employability: Coding knowledge undoubtedly opens up more career opportunities, allowing you to transition easily into fields like data science, analytics, and even web development.

Free Course

To learn Python, I recommend taking freeCodeCamp's Python for Beginners course.

This is a four-hour tutorial that will teach you the fundamentals of Python programming, such as data types, control flow, operators, and functions.
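To give a taste of those fundamentals, here is a short, self-contained sketch covering data types, control flow, operators, and functions. The variable and function names are purely illustrative, not taken from the course:

```python
# Data types: numbers, strings, and lists are built into Python.
course = "Python for Beginners"   # string
hours = [1, 2, 3, 4]              # list of ints

# Control flow: a function that branches on a condition.
def describe_length(total_hours):
    """Return a label for a course length (illustrative helper)."""
    if total_hours < 2:
        return "short"
    elif total_hours <= 5:
        return "medium"
    return "long"

# Operators and built-in functions: sum() adds up a list.
total = sum(hours)               # 10
label = describe_length(total)   # "long"
print(f"{course}: {total} hours ({label})")
```

Running this prints a one-line summary; changing the numbers in `hours` exercises the other branches.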

Step 2: Learn AI with a Free Harvard Course

After taking a Python course, you should be familiar with the fundamentals of the language.

Of course, to become a good programmer, an online course alone isn’t enough. You need to practice and build projects of your own.

If you want to learn how to improve your coding skills and go from a novice to someone who can actually build cool things, you can watch my YouTube video on learning to code.

After gaining a decent level of proficiency in coding, you can start learning to build AI applications in Python.

There are two things you need to learn at this stage:

  • Theory: How do AI models work? What are the underlying techniques behind these algorithms?
  • Practical application: How to use these models to build AI applications that add value to end users?

Free Course

The above concepts are taught in Harvard’s Introduction to AI with Python course.

You will learn the theory behind techniques used to develop AI solutions, such as graph search algorithms, classification, optimization, and reinforcement learning.

Then, the course will teach you to implement these concepts in Python. By the end of this course, you will have built AI applications to play games like Tic-Tac-Toe, Minesweeper, and Nim.
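To give a flavour of the graph search algorithms the course covers, here is a minimal breadth-first search sketch in Python. The toy graph and function name are my own assumptions, not the course's code:

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search: return the shortest path from start
    to goal, or None if the goal is unreachable."""
    frontier = deque([[start]])  # queue of paths to extend
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in explored:
                explored.add(neighbour)
                frontier.append(path + [neighbour])
    return None

# A toy graph: nodes are game states, edges are legal moves.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs_path(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```

The same frontier-and-explored-set pattern underlies the game-playing projects: swap the queue for a stack and you get depth-first search.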

Harvard CS50’s Artificial Intelligence with Python course can be found on YouTube and edX, where it can be audited for free.

Step 3: Learn Git and GitHub

After completing the above courses, you will be able to implement AI models in Python using various datasets.
At this stage, it is crucial to learn Git and GitHub to effectively manage your model’s code and collaborate with the wider AI community.

Git is a version control system that allows multiple people to work on a project simultaneously without interfering with each other’s work, and GitHub is a popular hosting service that lets you manage Git repositories.

In simple terms, with GitHub, you can easily clone another person’s AI project and modify it, which is a great way to improve your knowledge as a beginner.

You can also easily track any changes you make to your AI models, collaborate with other programmers on open-source projects, and even showcase your work to potential employers.

Free Course

To learn Git and GitHub, you can take freeCodeCamp's one-hour crash course on the subject.

Step 4: Mastering Large Language Models

Ever since ChatGPT was released in November 2022, Large Language Models (LLMs) have been at the forefront of the AI revolution.

These models differ from traditional AI models in the following ways:

  • Scale and parameters: LLMs are trained on massive datasets from all over the Internet and have trillions of parameters. This allows them to understand the intricacies of human language and generate human-like text.
  • Generalization capabilities: While traditional AI models excel at specific tasks that they were trained to do, generative AI models can perform tasks in a wide variety of domains.
  • Contextual understanding: LLMs use contextual embeddings, which means that they consider the entire context in which a word appears before generating a response. This nuanced understanding allows these models to perform well when generating responses.

The above attributes of Large Language Models allow them to perform a wide variety of tasks, ranging from programming to task automation and data analysis.

Companies are increasingly looking to integrate LLMs into their workflows for improved efficiency, making it crucial for you to learn how these algorithms work.
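To make the "contextual understanding" point concrete, here is a deliberately tiny, non-LLM sketch in plain Python. It represents a word by counting the words around it, so the same word gets a different representation in different sentences. Real LLMs do this with learned transformer embeddings rather than counts; this toy (including the `context_vector` name) is just an illustration:

```python
from collections import Counter

def context_vector(tokens, target, window=2):
    """Represent `target` by the words near it: a toy stand-in
    for contextual embeddings."""
    vec = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            lo, hi = max(0, i - window), i + window + 1
            vec.update(t for t in tokens[lo:hi] if t != target)
    return vec

s1 = "i deposited cash at the bank on friday".split()
s2 = "we sat on the grassy bank of the river".split()

v1 = context_vector(s1, "bank")  # banking-related context words
v2 = context_vector(s2, "bank")  # river-related context words
print(v1 == v2)  # False: same word, different representations
```

A static word embedding would assign "bank" a single vector in both sentences; the context-dependent representation is what lets LLMs disambiguate.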

Free Course

Here are 2 free courses you can take to deepen your understanding of Large Language Models:

  • Intro to Large Language Models by Google:
    This course offers a beginner-friendly introduction to Large Language Models and is only 30 minutes long. You will learn about what exactly LLMs are, how they are trained, and their use cases in various fields.
  • Generative AI with LLMs by DeepLearning.AI and AWS:
    In this course, you will learn about LLMs from industry experts who work at Amazon. You can audit this course for free, although you have to pay $50 if you’d like a certification. The topics taught in this program include the generative AI lifecycle, the transformer architecture behind LLMs, and the training and deployment of language models.

Step 5: Fine-Tuning Large Language Models

After learning the basics of LLMs and how they work, I recommend diving deeper into topics like fine-tuning these models and enhancing their capabilities.

Fine-tuning is the process of adapting an existing LLM to a specific dataset or task, which is a use case that generates tons of business value.

Companies often have proprietary datasets from which they might want to build an end product, like a customer chatbot or an internal employee support tool. They often hire AI engineers for this purpose.
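The idea of fine-tuning can be illustrated without any LLM at all. The sketch below "pretrains" a one-parameter linear model on generic data, then continues gradient descent from the learned weight on a small domain-specific dataset. The model and numbers are toy assumptions, but the workflow (start from learned weights, adapt on new data) is the same one applied at LLM scale:

```python
def train(w, data, lr=0.01, epochs=200):
    """Gradient descent on mean squared error for y ≈ w * x."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# "Pretraining": generic data where y ≈ 2x.
pretrain_data = [(x, 2.0 * x) for x in range(1, 6)]
w = train(0.0, pretrain_data)

# "Fine-tuning": start from the pretrained weight and adapt to a
# domain where the relationship differs slightly (y ≈ 2.5x).
domain_data = [(x, 2.5 * x) for x in range(1, 4)]
w_finetuned = train(w, domain_data, epochs=100)

print(round(w, 3), round(w_finetuned, 3))  # roughly 2.0 and 2.5
```

Starting from the pretrained weight means far fewer updates are needed on the small domain dataset than training from scratch would require, which is precisely why companies fine-tune rather than pretrain.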

Free Course

To learn more about fine-tuning large language models, you can take this free course offered by DeepLearning.AI.

How to Learn AI for Free in 2024 — Next Steps

After completing the 5 steps outlined in this article, you will have a ton of newfound knowledge in the realm of artificial intelligence.

These skills will pave the way for jobs in machine learning, AI engineering, and AI consulting.
However, the journey doesn’t end here.

Online courses are a great way to gain foundational knowledge. However, to improve your chances of getting a job, here are three more things I recommend doing:

1. Projects

Projects will help you apply the skills you’ve learned by giving you hands-on experience with custom datasets.
They can also help you stand out and land jobs in the field, especially if you have no prior work experience.

If you don’t know where to start, this article provides you with an array of unique, beginner-friendly AI project ideas. If you’re interested in projects related to data science and analytics, you can watch my video on the topic instead.

2. Staying on top of AI trends

The AI industry is evolving faster than ever.

New techniques and models are constantly being released, and staying updated with these technologies will set you apart from other industry professionals.

KDNuggets and Towards AI are two publications that break down complex AI topics into layman’s terms.

If you’d like to learn more about AI, programming, and data science, I also have a YouTube channel that provides beginners with tips and tutorials on these subjects.

Furthermore, I recommend browsing the Papers with Code platform. This is a free resource that lets you read academic papers with their corresponding code.

Papers with Code lets you quickly understand cutting-edge research in AI by reading a paper’s summary, methodology, dataset, and code in a single platform.

3. Join a Community

Finally, you should consider joining a community to deepen your knowledge and skills in AI.

Finding like-minded people to collaborate with is the best way to learn new things, and will open up a plethora of opportunities for you in the space.

I suggest joining AI networking events in your area to develop relationships with other individuals in the field.
You can also contribute to open-source projects on GitHub, as this will help you build a professional network of AI developers.

These connections can dramatically improve your chances of landing jobs, collaboration opportunities, and mentorships.

Natassha Selvaraj is a self-taught data scientist with a passion for writing. Natassha writes on everything data science-related, a true master of all data topics. You can connect with her on LinkedIn or check out her YouTube channel.

More On This Topic

  • 5 FREE Courses on AI with Microsoft for 2024
  • 2024 Tech Trends: AI Breakthroughs & Development Insights from…
  • Top Free Data Science Online Courses for 2024
  • Learn How to Run Alpaca-LoRA on Your Device in Just a Few Steps
  • Getting Started with Scikit-learn in 5 Steps
  • KDnuggets News March 16, 2022: Learn Data Science Fundamentals & 5…

Companies Without a Chief AI Officer are Bound to F-AI-L

With generative AI gaining momentum over the past year and a half, it has sparked a new debate about the necessity of a chief AI officer in enterprises. So we asked: do you really need a chief AI officer?

Many experts that AIM spoke to believe that companies without a chief AI officer are less likely to succeed in the long run and may fall prey to the ongoing AI hype, losing significant money. Notably, several companies from the West, including Snowflake, Deloitte, Accenture, SAP, Intel, and Dell, have already appointed chief AI officers, or CAIOs for short.

Even the US government has prioritised the role, with the White House recently instructing federal agencies to appoint chief AI officers.

“AI is incredibly transformative, and it will be over the next decade,” said Baris Gultekin, head of AI at Snowflake, in an exclusive interview with AIM. He said that all businesses will have an AI strategy connected to their data strategy.

On the contrary, many argue that existing leadership, like the CTO (chief technology officer) or CIO (chief information officer), can handle AI strategy alongside their current roles, especially in the early stages of AI adoption.

“I believe a chief AI officer is necessary for a large-scale firm,” averred Krishna Rastogi, CTO of Machine Hack. “Let’s say with an employee size greater than 1,000, especially if their business demands it.”

He said for small firms, the role of the AI officer can be handled by the CTO.

According to a recent report by Gartner, while some enterprises may opt for a ‘chief AI officer,’ a ‘head of AI’ is enough for most to integrate AI into business strategy. The report added that currently about 46% of accountability for AI initiatives is shared between the CTO and CIO.

So, what is the point of a chief AI officer then? “While the CTO is responsible for overseeing an organisation’s overall technology strategy and infrastructure, the CAIO’s primary responsibility is to identify opportunities for AI deployment, develop an AI strategy aligned with business goals, and oversee the execution of AI initiatives,” said Sachin S Panicker, Chief AI Officer, Fulcrum Digital Inc.

Simply put, the CAIO oversees the development and implementation of AI projects across the company. This could involve collaborating with data scientists, engineers, and other technical teams. They might also manage partnerships with external AI vendors.

Panicker told AIM that CAIO can establish AI governance frameworks, ethical guidelines, and policies to promote responsible and transparent AI usage within the enterprise.

He said, “They can spearhead AI talent acquisition, retention, and development by hiring data scientists, ML engineers, and AI specialists while also upskilling existing employees in AI technologies.”

However, he added that whether an enterprise needs a dedicated CAIO depends on several factors, such as the organisation’s size, industry, AI maturity, and strategic focus on AI.

AIM paid close attention (not stalking) to Lan Guan’s LinkedIn profile. She is currently the chief AI officer at Accenture. A closer look at her profile revealed that she advises cross-industry C-suite clients to develop and implement data and AI strategies and products with the strategic goal of helping them grow in the new era of Generative AI.

She told FT that her role is “multidisciplinary, requiring a blend of robust technical knowledge and sharp business insight across fields [as diverse] as AI and machine learning, computer science, statistics, data analytics, ethics, regulatory compliance, and industry-specific expertise.”

Similarly, another look at Jeff Boudreau’s LinkedIn profile shows that he is the chief AI Officer at Dell. In his profile, he wrote that he is responsible for leading Dell’s Center for AI Innovation and that he is Dell Technologies’ first-ever chief AI officer.

“My team and I are focused on shaping Dell’s AI strategy and policies, building relevant AI partnerships, and championing the next generation of secure and ethical AI and GenAI technologies.” read his job description.

CISO is all you need

Now that you know the need and requirements for a chief AI officer, it also becomes important, once the AI strategy is in place, to have a chief information security officer who can guarantee the safety of generative AI tools within the organisation. The challenges posed by generative AI have become a significant headache for SaaS security teams.

According to a recent Salesforce study, more than half of GenAI adopters use unapproved tools at work. The research found that despite GenAI’s benefits, a lack of clearly defined policies around its use may put businesses at risk.

Most likely, CISO roles are also changing with generative AI.

“We definitely need a chief AI security officer,” said Prasanna Naik, co-founder of CloudEagle, adding that when his customers explore a new AI tool, the chief information security officer (CISO) is the first person they call in to check it.

According to Naik, “What is this AI application? What access does it have? Have our engineers or marketing people put data into this AI tool that they were not supposed to and exposed our assets?” are the most common questions customers ask.

Further, he said that not every company is going to be an AI company. “If a company is providing a particular service or platform and doesn’t have much data to train on, what will an AI officer do?” he pondered and said: “Just for the sake of launching AI, if you have an AI officer, it won’t work.”

The post Companies Without a Chief AI Officer are Bound to F-AI-L appeared first on Analytics India Magazine.

Robotics will have ChatGPT Moment Soon

Earlier in 2024, as AIM expressed excitement for the upcoming GPT moment, little did we anticipate the convergence of numerous robotics advancements.

Recently, Vinod Khosla, founder of Khosla Ventures and the initial investor in OpenAI, underscored his perspective on why robotics will soon have its AI breakthrough.

Khosla said AI’s transformative capabilities point to a future where AI and robotics liberate humanity from mundane tasks.

He said that robotics will approach a ‘GPT moment’ in the next 2-5 years, when robots will transition from being programmed (following instructions) to learning systems that understand the physical and real-world dynamics, enabling rapid progress in robotics.

It’s Already Happening

A few days back, NVIDIA researchers introduced DrEureka, an LLM-powered agent automating the simulation-to-reality pipeline, effortlessly training a robot dog to balance on a yoga ball without fine-tuning.

We trained a robot dog to balance and walk on top of a yoga ball purely in simulation, and then transfer zero-shot to the real world. No fine-tuning. Just works.
I’m excited to announce DrEureka, an LLM agent that writes code to train robot skills in simulation, and writes more… pic.twitter.com/kuG14LmSOh

— Jim Fan (@DrJimFan) May 3, 2024

Interestingly, DrEureka is built on the team’s prior work, Eureka, the algorithm that taught a five-fingered robot hand to spin a pen. “It takes one step further in our quest to automate the entire robot learning pipeline with an AI agent system,” said Jim Fan, senior research manager and lead of Embodied AI (GEAR Lab).

The OpenAI-powered Figure 01 has also been advancing significantly in terms of visual reasoning capabilities. Recently, it was able to differentiate between healthy options like oranges and less desirable choices like chips, with its in-house trained neural network mapping camera input to robot actions at a rapid 10 Hz rate.

Brett Adcock, the founder of Figure AI, believes that “everyone will own a robot in the future similar to owning a car or phone today.”

Tesla is not far behind. Recently, Optimus was shown working in factories, sorting battery cells in real time by leveraging its FSD (Full Self-Driving) computer. It was able to sort battery cells precisely, with minimal margins for insertion, and automatically target the next available slot.

Trying to be useful lately! pic.twitter.com/TlPF9YB61W

— Tesla Optimus (@Tesla_Optimus) May 5, 2024

Google DeepMind also released three robotics research systems starting this year—AutoRT, SARA-RT and RT-Trajectory—that will aid robots in making faster decisions and better understanding and navigating their environments. The models will help with data collection, speed, and generalisation.

Additionally, Stanford University introduced Mobile ALOHA, a system designed to replicate bimanual mobile manipulation tasks requiring whole-body control.

Google DeepMind supported the project, and the technology addresses the limitations of traditional imitation learning from human demonstrations. These general-purpose robots are demonstrated to assist with various tasks such as cooking, cleaning, lifting weights, and other manual activities.

What’s Next?

While advancements in AI research continue, companies are racing towards the next big breakthrough in robotics. NVIDIA, for instance, released Project GR00T a month ago and subsequently DrEureka, as mentioned earlier, and more companies have been investing heavily in robotics.

Major players like Google DeepMind, Tesla, and NVIDIA are making robotics a priority, so major breakthroughs will likely come soon. Significant progress has also been made in open-source research, with Hugging Face launching LeRobot, an open-source robotics data library, just a couple of days ago.

As NVIDIA CEO Jensen Huang rightly said, “The enabling technologies are coming together for leading roboticists around the world to take giant leaps towards artificial general robotics.”

Clearly, the ChatGPT moment in robotics is not about when; it is now!

The post Robotics will have ChatGPT Moment Soon appeared first on Analytics India Magazine.

SML & 3AI Launch Hanooman GenAI App on Play Store in 12 Indian Languages


3AI Holding Limited and SML India announced the launch of ‘Hanooman’, India’s largest multilingual and affordable GenAI platform, available in 98 global languages, including 12 Indian languages.

The company also made Hanooman available for download in India through the web and mobile application for Android users on the Play Store, with an iOS app expected soon.

Download and try the app at: https://play.google.com/store/apps/details?id=com.hanooman.ai&hl=en_US&pli=1

The platform, developed under joint collaboration, aims to reach 200 million users within its first year of launch and build a GenAI ecosystem for India by leveraging the country’s diverse linguistic and cultural heritage.

As part of the launch, SML India announced partnerships with leading technology companies and government bodies. Yotta will provide GPU cloud infrastructure to support SML India’s operations, while NASSCOM will collaborate on initiatives such as supporting AI startups, fostering fintech innovation, and engaging with 3000 colleges.

SML India has also partnered with the Government of Telangana and the Department of Administrative Reforms and Public Grievances (DARPG) to facilitate seamless translation between English and Telugu.

Hanooman, powered by 3AI Holding’s cutting-edge technology, combines specialised LLMs with a dynamic integration synthesis matrix to deliver clear, adaptive insights and transform complex data into actionable intelligence. The platform is currently available in its free version, with a premium subscription plan to be launched later this year. Hanooman’s versatile features can handle tasks ranging from casual chats to offering professional advice and performing complex technical tasks like coding and tutoring.

The platform aims to cater to four sectors: healthcare, governance, financial services, and education. It will offer an open-source alternative to commercially accessible Large Language Models (LLMs) while providing a closed-source model tailored for enterprises in need of on-premise solutions.

Arjun Prasad, Managing Director of 3AI Holding, emphasised the mission of democratising access to cutting-edge technology for every Indian, making AI inclusive and available to everyone, regardless of their ethnicity or location. Dr. Vishnu Vardhan, Co-Founder & CEO of SML India, highlighted Hanooman’s potential to impact the lives of 200 million users within the first year and bring GenAI to the reach of everyone in India, opening up massive opportunities for companies and startups.

The strategic partnership between SML India and 3AI Holding reflects a commitment to the fundamental mission of ‘AI for All’, dedicating both entities to democratising the GenAI space and bridging the gap between urban and rural India.

The post SML & 3AI Launch Hanooman GenAI App on Play Store in 12 Indian Languages appeared first on Analytics India Magazine.

Small Indian IT Firms are Taking the Acquisition Route for Increasing Capabilities


TCS Chairman Natarajan Chandrasekaran has recently said that generative AI is going to make a huge impact on the future of the enterprise, which is not yet imagined. He added that there needs to be a substantially bigger investment in generative AI to create the needed impact.

IT firms in India are trying hard to cope with the changing tech landscape and increase their workforce capabilities. Smaller IT firms, similarly, are acquiring companies to strengthen their capabilities. Midsize firms are announcing acquisitions across segments ranging from startups and consulting firms to engineering services, with data and analytics companies the most common targets right now. And their acquisition numbers are bigger than those of the larger IT firms.

The most recent is the report of Happiest Minds acquiring US-based Aureus Tech Systems for $8.4 million. With a team of 150 employees, Aureus is a specialised digital product engineering firm with a focus on AI. Happiest Minds aims to enhance its domain expertise in insurance and reinsurance, healthcare and life sciences, and its product and digital engineering services (PDES) business.

Interestingly, this is the third acquisition by Happiest Minds in FY25. In April, the company acquired PureSoftware Technologies for $94.5 million (INR 784 crore) and Macmillan Learning India for INR 4.5 crore.

All about increasing AI capabilities, expansion, and automation

The larger goal of these acquisitions seems to be increasing automation and AI capabilities for their customers while also expanding their footprint in the USA. For example, two Carlyle Group companies, Hexaware Technologies and Quest Global, also announced acquisitions this week.

Hexaware Technologies announced the acquisition of Softcrylic, a leading data consulting firm headquartered in Minneapolis. Softcrylic is known for its exceptional expertise in data strategy and engineering. The company specialises in addressing intricate data challenges, ranging from data capture and validation to data modelling and activation.

With a wealth of experience across various marketing stacks, including Adobe, Google, and Salesforce, combined with their proficiency in engineering on Microsoft Azure and Amazon AWS, Softcrylic enables organisations to leverage their data effectively and obtain deeper insights through advanced data activation techniques.

Meanwhile, Quest Global acquired a majority stake in People Tech Group, a digital transformation company serving Fortune 500 clients. This would enable Quest Global to better serve its OEM customers in the automotive industry.

On similar lines, Coforge CEO Sudhir Singh announced that the company is shelling out $220 million to acquire a 54% stake in Cigniti Technologies. This is one of the biggest acquisitions for Coforge since the $73 million acquisition of SLK three years back. Singh said this would also help the company expand in North America and strengthen three verticals: retail, hi-tech, and healthcare.

Apart from these, in April, ITC Infotech entered into an agreement to buy a 100% stake in Blazeclan Technologies, a cloud consulting company, for INR 485 crore.

Revenues flowing in soon for IT with generative AI

In the last two quarters, Mphasis, GlobalLogic, and Sonata have also made acquisitions. Small IT firms making acquisitions in an era when bigger IT firms are still under-delivering on their AI promises brings both hope and worry about what will come of such investments.

During the recent earnings calls of the major IT giants, there was a lot of reluctance to announce the contribution of generative AI to revenue. While TCS has $900 million worth of generative AI projects in the pipeline, Infosys announced the acquisition of in-tech, a firm focused on the automotive industry. Wipro is also shifting its focus to lead with generative AI in consulting.

Regardless of whether they announce the numbers, all of these companies are training their employees in generative AI skills, investing millions into it. With the financial year just ended, by the next earnings calls the IT firms might finally reveal the revenue generated by their generative AI investments. For smaller firms, it is currently all about acquisitions.

The post Small Indian IT Firms are Taking the Acquisition Route for Increasing Capabilities appeared first on Analytics India Magazine.