The Early Days of the Internet

Interview with Wes Kussmaul

If you were using computers in the early 80s, this interview is for you! I had the extreme honor of hosting Wes Kussmaul, an internet pioneer whose work has significantly shaped the landscape of online privacy and security. Wes is the creator of the first online encyclopedia, Delphi, and a founding member of the Authenticity Alliance. Our conversation delved into the early days of the internet, Rainbow Magazine, the evolution of social media, and the ongoing challenges and solutions in the realm of digital security.

I got to dig up digitized copies of the old Rainbow Magazines and surprise Wes with glimpses of his own 2-page advertisements from Delphi. It was a real treat for me as these magazines were as close as I ever got to the outside world of computing at the time.

The Genesis of Online Social Media

Wes’s journey began in a world vastly different from today’s internet-driven society. He quit his job at Technotronics to create the Kussmaul Encyclopedia, which eventually evolved into Delphi—the precursor to modern social media platforms. Reflecting on those early days, Wes noted, “Social media saved our butt in 1982 when we had been losing money. We added social features, bulletin boards, SIGs, and email, which saved us from oblivion.”

This shift marked the beginning of a new era in which online communities could interact and share information, albeit in a much more rudimentary form than what we experience today. Wes reminisced about the early internet, saying, “You didn’t get there via TCP/IP as you do now with the web. It was an old protocol called X.25.”

The Birth of the Commercial Internet

As the internet matured, the lines between commercial and non-commercial use began to blur. Wes shared a fascinating story about the creation of CIXNET, a pivotal moment in the commercialization of the internet. “Two smart guys in Aurora, Colorado, created CIXNET to take commercial traffic away from the NSF internet backbone,” he explained. This move catalyzed the commercial internet, leading to the development of services like AOL, which revolutionized online connectivity and accessibility.

Grassroots Technology and the Power of Community

Our discussion also touched on grassroots technology efforts and their impact on community connectivity. Wes recalled projects like the Seattle Wireless Project, where local minds created directional Wi-Fi beams using simple materials like Pringles cans. These initiatives provided crucial internet access in areas with limited coverage, demonstrating the power of community-driven technological innovation. He mentioned, “Even now, there’s a group in Boston building a mesh network using end-to-end encrypted nodes.”

The Rise of Privacy and Security Concerns

The conversation naturally transitioned to the critical issue of online privacy and security. Wes’s post-Delphi endeavors have focused on these areas, advocating for the use of Public Key Infrastructure (PKI) to ensure secure and reliable online identities. “Your information should be your personal intellectual property,” Wes emphasized, highlighting the importance of keeping personal data secure and private.

One of the major challenges in implementing PKI on a large scale is the perception of central authority. Wes addressed this by proposing a participatory model of governance, akin to municipal governance. He explained, “The city of Osmio represents authority and is a global city hall, a certification authority, but it is non-commercial and approachable.”

Real-World Applications of PKI

Wes’s work with the Authenticity Institute has led to practical applications of PKI in various fields. He mentioned Authentimatch, a service aimed at creating reliable identities for dating services to combat fraud. “The problem is that dating services are fraught with synthetic identities and fraud. Authentimatch provides a platform where the identities of all participants are measurably reliable,” he explained.

The concept of identity reliability extends beyond dating services. It can be applied to any online platform where authenticity and accountability are crucial. Wes highlighted, “These identity certificates are universal. If you enroll in Authentimatch, that same identity certificate can be used for other services like blogging, ensuring trust across platforms.”

The Path Forward

As we wrapped up our conversation, Wes stressed the importance of moving towards a system where online identities are secure and reliable. He believes that the key to solving many of today’s security issues lies in the widespread adoption of PKI and digital signatures. “We have the solution. The solution is accountability and measurably reliable identities represented by X.509 identity certificates.”

Wes’s insights provide a compelling vision for the future of online privacy and security. His work underscores the need for a robust and reliable digital identity infrastructure, one that empowers individuals and protects their personal information in an increasingly interconnected world.

For more information on Wes’s initiatives and to explore the innovative solutions he’s developing, visit Authentiverse.net and reliableid.com.

Join us as we continue to explore the cutting-edge of AI and data science with leading experts in the field.

Subscribe to the AI Think Tank Podcast on YouTube. Would you like to join the show as a live attendee and interact with guests? Contact Us

Indian AI Researchers Should Move Beyond PhDs

According to the Global AI Talent Tracker 2.0 by MacroPolo, over the past few years, China and India have significantly expanded their domestic AI talent pools to support the burgeoning AI industry. The percentage of the world’s top AI researchers hailing from China has surged from 29% in 2019 to 47% in 2022.

Historically a major exporter of top-tier AI researchers, India is now seeing an increase in talent retention. In 2019, the majority of Indian AI researchers with undergraduate degrees sought opportunities abroad. However, by 2022, one-fifth of these researchers chose to work in India.

Though AI research within India is increasing, most of it is done by researchers for their PhD theses, and little of it actually makes it to production. Several researchers shared similar thoughts when they spoke with AIM.

Research on LLMs, voice models, and AI applications in several fields has been coming out of Indian universities, but most of it gets stuck at the research phase. This is slowly changing, however, with several researchers submitting their papers to ICML and NeurIPS.

Putting into Work

“None of the research from the universities actually comes out. They just do research in the field like a final year project, and it dies there,” said Mufeed VH, the creator of Devika, who recently got into Y Combinator.

Researchers should move beyond the universities and turn their creations into products, or perhaps build research labs, such as AI4Bharat.

Similarly, Adithya S Kolavi, the founder of CognitiveLab, pointed out that there are not enough grants in India, from either universities or companies, for research to flourish. “You have the VC kind of things, but grants are essential to push research forward. I have not seen that concept flourish in India,” added Kolavi.

Market trends suggest that in India, a PhD is a must to get into the AI field and land a research job. “The research scene in India is good, but if you want to get into a good research institute, you require a PhD, especially if you’re a college student like me,” said Kolavi.

Mufeed agreed with Kolavi and said that even though a lot of the research shared on GitHub and X has been done by anonymous people, they do not get enough recognition because they lack PhDs. They pursue it like a hobby while building amazing products.

“I think in India, kids who are tinkering with this stuff should get the resources to learn more about AI,” he added. “Pretty soon, kids and students would do the same thing as researchers.”

When speaking with AIM, Amit Sheth, the chair and founding director of the Artificial Intelligence Institute at the University of South Carolina (AIISC), highlighted that universities like Stanford, Harvard, and MIT help researchers by giving them grants to move their research to production.

“Though Indian universities produce some very good engineers, they are very successful in the West. I think it’s time we take a good look at India and see if we can build something like a ChatGPT here,” Sheth said, emphasising the need for India to innovate and ship AI products to the rest of the world, including the West.

Pratik Desai, the founder of KissanAI, shared similar thoughts: “India has never led any fundamental research, but we have a golden opportunity as AI can be a levelling field.” “However, this requires a fundamental shift in mindset, across coaching and academia, and from parents and founders to investors,” he added.

Shift of Focus

During the AIM podcast discussion with young researchers in India, it was pointed out that even though universities have clubs and centres of excellence, not much is achieved there beyond research on a few GPUs and competing in olympiads. “They are good achievements, but credentialism is not the point; we need actual results,” said one of the researchers.

One of the reasons AI research does not get into production is the slow rate of adoption within the country. “There is a big gap that can be bridged with more industry and academia collaborations,” said Adarsh Shirawalmath, the founder of Tensoic.

“College in general has been helpful, but we are lagging a bit in terms of where the SOTA is and what we are doing because some of the professors still might be researching in CNN whereas the SOTA is really ahead,” added Shirawalmath.

When one looks at the curriculum, premier institutions in India, such as the IITs, have been heavily focused on the theoretical aspects of AI. Many of the prominent contributions in the field have also been made by professors from these institutes, which have been the bedrock of innovation for several decades.

Vinija Jain, a seasoned ML researcher currently working on building vision language models with cultural awareness, said that Indian researchers need to push forward even more to inspire others. “The research from India is not only serving as great research in itself, but also as an inspiration,” said Jain.

“When you see someone else building something for the community, it motivates you to help and contributes as a building block for further developments,” she added, talking about the growing push for AI research in India, while companies such as OpenAI and Google expand their base into the Indic AI space.

The post Indian AI Researchers Should Move Beyond PhDs appeared first on AIM.

Beginner’s Guide to Machine Learning with Python

Machine Learning with Python

Image by Author

Predicting the future isn't magic; it's AI.

As we stand on the brink of the AI revolution, Python allows us to participate.

In this article, we’ll discover how you can use Python and machine learning to make predictions.

We’ll start with the fundamentals and work up to applying algorithms to the data to make predictions. Let’s get started!

What is Machine Learning?

Machine learning is a way of giving computers the ability to make predictions. It is so widespread now that you probably use it daily without noticing. Here are some technologies that benefit from machine learning:

  • Self Driving Cars
  • Face Detection System
  • Netflix Movie Recommendation System

But sometimes AI, machine learning, and deep learning are hard to tell apart. The grand scheme that best represents these terms: deep learning is a subset of machine learning, which in turn is a subset of AI.

Classifying Machine Learning As a Beginner

Machine learning algorithms can be grouped using two different methods. One of these methods involves determining whether a 'label' is associated with the data points. In this context, a 'label' refers to the specific attribute or characteristic of the data points you want to predict.

If there is a label, your algorithm is classified as a supervised algorithm; otherwise, it is an unsupervised algorithm.

Another method is to classify algorithms by the type of task they perform. If you do that, machine learning algorithms can be grouped as follows:

  • Regression
  • Classification
  • Clustering

This is the grouping scikit-learn uses.
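To make the supervised/unsupervised split concrete, here is a minimal sketch with made-up toy data: supervised estimators in scikit-learn take labels in fit(X, y), while unsupervised ones take only fit(X).

```python
from sklearn.linear_model import LinearRegression  # supervised: regression
from sklearn.cluster import KMeans                 # unsupervised: clustering

# Toy data: four points, each with a label (y = 2x)
X = [[1.0], [2.0], [3.0], [4.0]]
y = [2.0, 4.0, 6.0, 8.0]

# Supervised: the labels y guide the fit
reg = LinearRegression().fit(X, y)
print(reg.predict([[5.0]]))  # close to [10.0]

# Unsupervised: no labels, the algorithm groups the points itself
clu = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(clu.labels_)  # two cluster assignments
```

The only difference in the calls is whether y is passed, which is exactly the label test described above.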

Image source: scikit-learn.org

What is Scikit-learn?

Scikit-learn is the most famous machine learning library in Python, and we’ll use it in this article. With scikit-learn, you can skip implementing algorithms from scratch and use its built-in functions instead, which eases your way into building machine learning models.

In this article, we’ll build a machine learning model using different regression algorithms from scikit-learn. Let’s first explain regression.

What is Regression?

Regression is a machine learning task that predicts continuous values. Here are some real-life examples of regression:

  • Weather Prediction
  • Tesla Stock Price Prediction
  • House Price Prediction

Now, before applying regression models, let’s look at three different regression algorithms with simple explanations:

  • Multiple Linear Regression: Predicts using a linear combination of multiple predictor variables.
  • Decision Tree Regressor: Creates a tree-like model of decisions to predict the value of a target variable based on several input features.
  • Support Vector Regression: Finds the best-fit line (or hyperplane in higher dimensions) with the maximum number of points within a certain distance.

Before applying machine learning, you need to follow specific steps. Sometimes these steps differ, but most of the time they include:

  • Data Exploration and Analysis
  • Data Manipulation
  • Train-test split
  • Building ML Model
  • Data Visualization

In this walkthrough, let’s use a data project from our platform to predict price.

Data Exploration and Analysis

Python has several functions that help you become acquainted with the data you work with.

But first of all, you should load the libraries that provide these functions.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.metrics import r2_score, mean_squared_error

Excellent, let’s load our data and explore it a little bit.

data = pd.read_csv('path')

Replace 'path' with the location of the file in your directory. Pandas offers three functions that will help you explore the data. Let’s apply them one by one and see the results.

Here is the code to see the first five rows of our dataset.

data.head()

Here is the output.

Now, let’s examine our second function, which shows information about the dataset’s columns.

data.info()

Here is the output.

RangeIndex: 10000 entries, 0 to 9999
Data columns (total 8 columns):
 #   Column  Non-Null Count  Dtype
---  ------  --------------  -----
 0   loc1    10000 non-null  object
 1   loc2    10000 non-null  object
 2   para1   10000 non-null  int64
 3   dow     10000 non-null  object
 4   para2   10000 non-null  int64
 5   para3   10000 non-null  float64
 6   para4   10000 non-null  float64
 7   price   10000 non-null  float64
dtypes: float64(3), int64(2), object(3)
memory usage: 625.1+ KB

Here is the last function, which will summarize our data statistically. Here is the code.

data.describe()

Here is the output.

Now, you are more familiar with our data. In machine learning, all your predictor variables (the columns you intend to use to make a prediction) should be numerical.

In the next section, we’ll make sure of that.

Data Manipulation

Now, we know that we should convert the “dow” column to numbers, but before that, let’s check whether the other columns consist of numbers only, for the sake of our machine learning models.

We have two suspect columns, loc1 and loc2, because, as you can see from the output of the info() function, they are the only two columns with the object data type, which can hold both numerical and string values.

Let’s use this code to check:

data["loc1"].value_counts()

Here is the output.

loc1
2    1607
0    1486
1    1223
7    1081
3     945
5     846
4     773
8     727
9     690
6     620
S       1
T       1
Name: count, dtype: int64

Two rows contain the stray string values “S” and “T”. By using the following code, you can eliminate those rows.

data = data[(data["loc1"] != "S") & (data["loc1"] != "T")]

However, we must ensure that the other column, loc2, does not contain string values. Let's use the following code to ensure that all values are numerical.

data["loc2"] = pd.to_numeric(data["loc2"], errors='coerce')
data["loc1"] = pd.to_numeric(data["loc1"], errors='coerce')
data.dropna(inplace=True)

At the end of the code above, we call dropna() because pd.to_numeric with errors='coerce' turns any non-numerical value into NaN, and those rows must then be dropped.
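To see what pd.to_numeric with errors='coerce' actually does, here is a quick demonstration on a throwaway Series:

```python
import pandas as pd

s = pd.Series(["1", "2", "S"])           # numeric strings plus a stray letter
out = pd.to_numeric(s, errors='coerce')  # "S" cannot be parsed, so it becomes NaN
print(out.isna().tolist())  # [False, False, True]
```

The NaN left behind is exactly what dropna() removes in the cleaning step above.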

Excellent. With that issue solved, let’s convert the weekday column into numbers. Here is the code to do that:

# Assuming data is already loaded and 'dow' column contains day names
# Map 'dow' to numeric codes
days_of_week = {'Mon': 1, 'Tue': 2, 'Wed': 3, 'Thu': 4, 'Fri': 5, 'Sat': 6, 'Sun': 7}
data['dow'] = data['dow'].map(days_of_week)

# Invert the days_of_week dictionary to recover day names for column labels
week_days = {v: k for k, v in days_of_week.items()}

# One-hot encode and convert dummy columns to integer type
dow_dummies = pd.get_dummies(data['dow']).rename(columns=week_days).astype(int)

# Drop the original 'dow' column
data.drop('dow', axis=1, inplace=True)

# Concatenate the dummy variables
data = pd.concat([data, dow_dummies], axis=1)

data.head()

In this code, we first map each day name to a number, then one-hot encode the result with get_dummies() so that each weekday becomes its own 0/1 column. Here is the output.

Now, we are almost there.

Train-Test Split

Before applying a machine learning model, you must split your data into training and test sets. This allows you to objectively assess your model's efficiency by training it on the training set and then evaluating its performance on the test set, which the model has not seen before.

X = data.drop('price', axis=1)  # 'price' is the target variable
y = data['price']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

Building Machine Learning Model

Now everything is ready. At this stage, we’ll apply the following algorithms at once.

  • Multiple Linear Regression
  • Decision Tree Regression
  • Support Vector Regression

If you are a beginner, this code might seem complicated, but rest assured, it is not. In the code, we first assign model names and their corresponding scikit-learn estimators to the models dictionary.

Next, we create an empty dictionary called results to store the outcomes. In the first loop, we fit each machine learning model in turn and evaluate it using metrics such as R^2 and MSE, which assess how well the algorithm performs.

In the final loop, we print out the results that we have saved. Here is the code:

# Initialize the models
models = {
    "Multiple Linear Regression": LinearRegression(),
    "Decision Tree Regression": DecisionTreeRegressor(random_state=42),
    "Support Vector Regression": SVR()
}

# Dictionary to store the results
results = {}

# Fit the models and evaluate
for name, model in models.items():
    model.fit(X_train, y_train)    # Train the model
    y_pred = model.predict(X_test) # Predict on the test set

    # Calculate performance metrics
    mse = mean_squared_error(y_test, y_pred)
    r2 = r2_score(y_test, y_pred)

    # Store results
    results[name] = {'MSE': mse, 'R^2 Score': r2}

# Print the results
for model_name, metrics in results.items():
    print(f"{model_name} - MSE: {metrics['MSE']}, R^2 Score: {metrics['R^2 Score']}")

Here is the output.

Multiple Linear Regression - MSE: 35143.23011545407, R^2 Score: 0.5825954700994046
Decision Tree Regression - MSE: 44552.00644904675, R^2 Score: 0.4708451884787034
Support Vector Regression - MSE: 73965.02477382126, R^2 Score: 0.12149975134965318

Data Visualization

To see the results better, let’s visualize the output.

Here is the code where we first calculate RMSE (square root of MSE) and visualize the output.

import matplotlib.pyplot as plt
from math import sqrt

# Calculate RMSE for each model from the stored MSE and prepare for plotting
rmse_values = [sqrt(metrics['MSE']) for metrics in results.values()]
model_names = list(results.keys())

# Create a horizontal bar graph for RMSE
plt.figure(figsize=(10, 5))
plt.barh(model_names, rmse_values, color='skyblue')
plt.xlabel('Root Mean Squared Error (RMSE)')
plt.title('Comparison of RMSE Across Regression Models')
plt.show()

The resulting bar chart shows that multiple linear regression has the lowest RMSE of the three models, with support vector regression the highest.

Data Projects

Before wrapping up, here are a few data projects to start.

  • Data Engineer Salary 2024- Analyzed Data Engineer Salary trends for 2024
  • 2018-2019 Premier League- Analyzed Manchester United 2018-2019 Statistics
  • Delivery Duration Prediction- Analyzed Delivery Duration for Doordash
  • Customer Churn Prediction- Analyzed Customer Churn for Sony

Also, if you want to do data projects about interesting datasets, here are a few datasets that might become interesting to you;

  • Heart Disease — You can predict heart disease based on given features
  • Human Activity Recognition Using Smartphones — You can predict step count.
  • Forest Fire — You can predict burned areas.

Conclusion

Our results could be better, as there are many further steps for improving the model's efficiency, but we made a great start here. Check out scikit-learn's official documentation to see what more you can do.

Of course, after learning, you need to do data projects repeatedly to improve your capabilities and learn a few more things.

Nate Rosidi is a data scientist who works in product strategy. He's also an adjunct professor teaching analytics, and is the founder of StrataScratch, a platform helping data scientists prepare for their interviews with real interview questions from top companies. Nate writes on the latest trends in the career market, gives interview advice, shares data science projects, and covers everything SQL.

More On This Topic

  • A Beginner's Guide to End to End Machine Learning
  • Essential Machine Learning Algorithms: A Beginner's Guide
  • A Beginner's Guide to the Top 10 Machine Learning Algorithms
  • A Beginner’s Guide to Web Scraping Using Python
  • Making Predictions: A Beginner's Guide to Linear Regression in Python
  • Mastering GPUs: A Beginner's Guide to GPU-Accelerated DataFrames in Python

Jio Backed Startup TWO AI Unveils ChatSUTRA, India’s Answer to ChatGPT

Jio-backed TWO AI announced the launch of SUTRA through its new AI app, ChatSUTRA, available at two.chat.ai. ChatSUTRA is now accessible via the web and will soon be available on iOS and Android.

The startup raised a $20M seed round in February 2022 from Jio Platforms and South Korean internet conglomerate Naver. “Jio has been one of our key partners for a long time and has invested in us from the very beginning,” said Pranav Mistry, the founder of TWO, in an exclusive interaction with AIM.

He added that Reliance Jio Infocomm chairman Akash Ambani takes a keen interest in the growth of the startup. “I meet with them often. Jio’s vision is to bring the power of AI through its services. Being a Jio partner gives us access to this market,” he said.

Multilingual versatility is at the core of ChatSUTRA’s power, enabling users to engage in AI conversations in over 50 languages without language confusion or cultural hallucination. Supported languages include Hindi, Gujarati, Bengali, Marathi, Korean, Japanese, Greek, and Arabic.

Eighty percent of the world speaks languages other than English. ChatSUTRA aims to give non-English speakers equal access to the best in generative AI, enabling them to seek knowledge, find answers to complex questions, and explore different cultures.

Whether users need advanced translation, text summarisation, creative and collaborative writing, recipes for foreign dishes, or DIY repair instructions, ChatSUTRA delivers quick and comprehensive answers.

The ChatSUTRA interface is simple and straightforward, featuring a list of saved conversations on the left, a center area filled with “Conversation Starter Cards,” and a variety of languages to choose from. Users can log in to store conversations across devices and access them anywhere.

ChatSUTRA-Pro, to be released soon, will offer early access to TWO’s latest features, higher chat message limits, and access to SUTRA’s best-performing models. Additionally, SUTRA is available as an API for developers at docs.two.ai.

Mistry, the driving force behind ChatSUTRA, said, “ChatSUTRA represents the best in Multilingual AI assistance available for the world, not just English speakers. Our mission is to fix the language gap in AI language models, and with the launch of ChatSUTRA, I invite everyone to try AI for themselves in हिंदी, ગુજરાતી, বাংলা, العربية, मराठी, తెలుగు, தமிழ், ಕನ್ನಡ, മലയാളം, ଓଡ଼ିଆ, ਪੰਜਾਬੀ, and over 50 other languages.”

“SUTRA is AI for all, AI that can understand nuances of languages and dialects. AI beyond just English,” he added.

ChatSUTRA is powered by SUTRA by TWO AI, models trained with an innovative LLM architecture that learns new languages independently, making it multilingual, highly scalable, and cost-efficient.

The major advances in AI, primarily in English, have left 80% of the non-English-speaking population on the sidelines. Existing AI models predominantly train and scale in English, limiting access to high-quality LLMs for non-English speakers.

ChatSUTRA aims to address this gap, unlocking AI’s potential in large economies such as India, Korea, Japan, and the MEA region.

ChatGPT, Claude.ai, and Perplexity exhibit some multilingual capabilities but often revert to English and struggle with complex multilingual tasks, especially in lower-resource languages.

Traditional multilingual LLMs train on data biased towards English, leading to confusion and reduced performance across languages.

ChatSUTRA will continue to evolve with new features, advanced SUTRA-Pro models, and mobile versions on iOS and Android.

The post Jio Backed Startup TWO AI Unveils ChatSUTRA, India’s Answer to ChatGPT appeared first on AIM.

The Ultimate Guide to Approach LLMs

The Ultimate Guide to Approach LLMs
Image by Author

Large Language Models (LLMs) are powerful natural language processing models that can understand context and generate human-like text, something never seen before.

With all that prowess, LLMs are in high demand, so let’s see how anyone can learn about them, especially in the post-GPT world.

Back to Basics

Fundamentals are evergreen, so it is best to start from the basic concepts by building an agile mindset to ramp up on any new technology quickly. Asking the right questions early on is crucial, such as:

  • What is new about this technology, and why is it considered a breakthrough development? For example, when talking about Large Language Models, consider breaking them into each component – “Large, Language, and Models”, and analyze the meaning behind each of them. Starting with largeness – understand whether it is about the largeness of the training data or concerns model parameters.
  • What does it mean to build a model?
  • What is the purpose behind modeling a certain process?
  • What was the prior gap that this innovation bridges?
  • Why now? Why did this development not happen before?

Furthermore, learning any new technological advancement also requires discerning the challenges that come with it, if any, and how to mitigate or manage them.

Building such an inquisitive mindset helps connect the dots and understand the evolution: if something exists today, is it in some way building on the challenges or gaps of its predecessors?

What’s Different with the Language?

In general, computers understand numbers; hence, understanding language requires converting sentences into vectors of numbers. This is where knowledge of Natural Language Processing (NLP) techniques comes to the rescue. Further, learning a language is challenging, as it involves identifying intonation, sarcasm, and different sentiments. The same word can have different meanings in different contexts, emphasizing the importance of contextual learning.
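To make the sentence-to-vector idea concrete, here is a toy bag-of-words encoding; the sentences and vocabulary below are made up purely for illustration:

```python
# Toy bag-of-words: turn sentences into vectors of word counts
sentences = ["the cat sat", "the cat ran", "a dog ran"]

# Vocabulary: every distinct word, in sorted order
vocab = sorted({w for s in sentences for w in s.split()})

# Each sentence becomes a count of each vocabulary word
vectors = [[s.split().count(w) for w in vocab] for s in sentences]

print(vocab)       # ['a', 'cat', 'dog', 'ran', 'sat', 'the']
print(vectors[0])  # [0, 1, 0, 0, 1, 1]
```

Real NLP pipelines use far richer representations (embeddings), but the principle is the same: text in, numbers out.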

Then there are further considerations, such as how far back in a sentence the context reaches, and how a model knows its context window. Going a level deeper, isn’t this how humans pick up context, by paying attention to specific words or parts of sentences?

Continue thinking along these lines and you will arrive at the attention mechanism. Building these foundations helps develop a mind map, shaping your approach to a given business problem.
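The core of the attention mechanism can be sketched in a few lines of NumPy: each token scores every other token, the scores are normalized with softmax, and the weights mix the token vectors into context-aware ones. The embeddings below are toy values, and using identity projections for Q, K, and V is a simplification of a real transformer:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Three toy token vectors of dimension 4
tokens = np.array([[1., 0., 0., 0.],
                   [0., 1., 0., 0.],
                   [1., 1., 0., 0.]])

Q = K = V = tokens                      # identity projections, for simplicity
scores = Q @ K.T / np.sqrt(Q.shape[1])  # similarity of every token pair
weights = softmax(scores)               # each row sums to 1
context = weights @ V                   # context-aware token representations
print(weights.shape, context.shape)     # (3, 3) (3, 4)
```

Each output row is a weighted blend of all tokens, which is the mechanical version of "paying attention to specific words."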

No One Course!!!

Unfortunately, everyone looks for one single resource that can make it easier to learn a concept. That, however, is where the problem lies. Try internalizing a concept by studying it from multiple resources. Chances are high that you will understand a concept better if you learn it from multiple viewpoints rather than consuming it as a purely theoretical concept.

LLM courses
Image by author

Following the leading industry experts, such as Jay Alammar, Andrew Ng, and Yann LeCun, is helpful too.

Tips for Business Leaders

As the AI teams get ramped up on learning rapidly evolving developments, businesses are also working on finding the right problems that justify the use of such sophisticated technology.

Notably, LLMs trained on generic datasets can perform well on general tasks. However, if the business case demands domain-specific context, the model must be provided with sufficient context to give a relevant and accurate response. For example, expecting an LLM to answer questions about a company’s annual report requires additional context, which can be supplied by leveraging Retrieval-Augmented Generation (RAG).
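A rough sketch of the RAG idea: retrieve the most relevant passage for a question, then prepend it to the prompt so the model answers with domain context. The documents are made up, and the word-overlap retriever is a deliberately naive stand-in for a real embedding search:

```python
# Made-up passages from a hypothetical annual report
documents = [
    "Q3 revenue grew 12% year over year, driven by cloud services.",
    "The company opened two new offices in Bengaluru and Pune.",
    "Headcount remained flat at roughly 4,000 employees.",
]

def retrieve(question, docs):
    # Toy retriever: score each passage by word overlap with the question
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

question = "How much did revenue grow?"
context = retrieve(question, documents)

# The retrieved passage becomes grounding context for the LLM call
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

In production, the retriever would be a vector database over chunked documents, but the shape of the pipeline (retrieve, then generate) stays the same.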

But before going deep into the trenches of advanced concepts and techniques, it is suggested that businesses first develop trust in the technology by trying low-hanging-fruit projects that let them see results quickly. For example, initiatives that are not directly customer-facing and do not deal with sensitive data are good starting points, so that the downside can be controlled in time if the solution goes rogue.

Building trust with technology
Image by Author

Businesses can start seeing the impact, and thereby reap potential returns, by leveraging AI for creating marketing copy, writing drafts and summaries, or generating insights to augment the analysis.

Such applications give a preview not just of the capabilities and possibilities but also of the limitations and risks that come with these advanced models. Once AI maturity sets in, businesses can accelerate their AI efforts to build a competitive edge and delight their customers.

The Trust Factor

Speaking of trust, business leaders also bear a big responsibility: communicating the right and effective approach to using LLMs to their developer community.

As developers begin learning LLMs, inquisitiveness may quickly lead them to use the models in day-to-day tasks such as writing code. Hence, it is important to consider whether you can rely on such code, as it can contain mistakes: oversimplified logic, missed edge cases, or suggestions that are incomplete or too complex for the use case.

Hence, it is always advised to use the LLM output as a starting point and iterate over it to meet the requirements. Test it on different cases, review it yourself, pass it through peer review, and refer to some established and trusted resources to validate the code. It's crucial to thoroughly analyze the model output to ensure there are no security vulnerabilities and verify that the code aligns with best practices. Testing the code in a safe environment can help identify potential issues.

In short, keep refining till you are confident it is reliable, efficient, complete, robust, and optimal.

Summary

Adapting quickly to new technological advancements takes time, so it is best to draw on the collective knowledge of how peers in the industry are approaching it. This post shares some of those best practices and evergreen principles to help you embrace the technology like a leader.

Vidhi Chugh is an AI strategist and a digital transformation leader working at the intersection of product, sciences, and engineering to build scalable machine learning systems. She is an award-winning innovation leader, an author, and an international speaker. She is on a mission to democratize machine learning and break the jargon for everyone to be a part of this transformation.


Microsoft Introduces AI Weather Forecasting Model ‘Aurora’


In 2023, when Storm Ciarán hit Northwest Europe, it left a trail of destruction. The storm’s intensity caught many off guard, exposing the limitations of current weather prediction models and highlighting the need for more accurate forecasting in the face of climate change.

In response to the limitations of current forecasting systems, a team of Microsoft researchers developed Aurora, an AI foundation model designed to extract valuable insights from extensive atmospheric datasets.

Aurora offers a new approach to weather forecasting, with the potential to revolutionise predictions and mitigate the impact of extreme climate events.

Trained on over a million hours of weather and climate simulations, Aurora has developed a comprehensive understanding of atmospheric dynamics. This allows the model to excel in a variety of prediction tasks, even in regions with limited data or extreme weather conditions.

Operating at a spatial resolution of 0.1°, about 11 km at the equator, Aurora captures intricate details of atmospheric processes, providing unprecedented accuracy in operational forecasts while significantly reducing computational costs compared to traditional systems.
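The ~11 km figure is straightforward arithmetic on the Earth's equatorial circumference (roughly 40,075 km):

```python
# distance spanned by 0.1 degrees of longitude at the equator
EQUATORIAL_CIRCUMFERENCE_KM = 40_075  # approximate value for Earth

km_per_degree = EQUATORIAL_CIRCUMFERENCE_KM / 360   # about 111.3 km
resolution_km = 0.1 * km_per_degree                  # about 11.1 km
```

Away from the equator, the east-west distance per degree shrinks with the cosine of latitude, so a 0.1° grid cell is narrower at higher latitudes.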

Apart from accuracy and efficiency, Aurora stands out for its versatility. The model can forecast a wide array of atmospheric variables, including temperature, wind speed, air pollution levels, and concentrations of greenhouse gases. Its adaptable architecture is designed to handle diverse inputs and generate predictions at varying resolutions and levels of fidelity.

Aurora’s architecture incorporates a flexible 3D Swin Transformer with Perceiver-based encoders and decoders, enabling it to process and predict a range of atmospheric variables across spatial and pressure levels.

Through pretraining on an extensive corpus of diverse data and fine-tuning for specific tasks, Aurora learns to capture intricate patterns and structures in the atmosphere, excelling even when task-specific fine-tuning data is limited.

Meanwhile, Google has announced SEEDS, an AI technology aimed at enhancing weather forecasts through diffusion models. SEEDS reduces the computational cost of producing ensemble forecasts and improves the depiction of rare or severe weather events.

Some of the other recent weather-related developments by Google include MetNet-3 and GraphCast.

MetNet-3 provides high-resolution forecasts up to 24 hours ahead, and GraphCast is a weather model capable of predicting conditions up to ten days in advance.

The post Microsoft Introduces AI Weather Forecasting Model ‘Aurora’ appeared first on AIM.

Meet the Young Indian Founders Building AI Products for the World

Cred founder and CEO Kunal Shah wasn’t exaggerating when he said India was now ready to build products for the world.

“Earlier, I used to think we were not really ready for that, but I feel now there is enough evidence that we might be able to create something interesting for the world,” said Shah in a recent interview.

Recently, Jivi’s purpose-built medical LLM Jivi MedX ranked number 1 on the Open Medical-LLM Leaderboard, outperforming OpenAI GPT-4 and Google’s Med-PaLM 2.

Meanwhile, Indian founders started QX Lab AI, which recently launched the Hybrid GenAI Multimodal Platform ‘Ask QX PRO’. Similarly, Quizizz supports millions of students in over 150 countries.

The list goes on.

Most recently, two Indian engineering students, Rudransh Agnihotri and Manasvi Kapoor, launched Mayakriti, an image generation platform that uses advanced GenAI to create lifelike visuals, from photorealistic portraits and personalised creations to a variety of art styles such as cartoons, anime, and abstract art.

Rudransh, a third-year mechatronics engineering student at Delhi Skill and Entrepreneurship University, and Manasvi, a second-year electronics and communication engineering student specialising in AI and ML at Netaji Subhas University of Technology, founded FuturixAI and Quantum Works, a startup focused on AI research and innovative solutions.

Mayakriti is a ‘Made in India’ product that draws on research, including the Git Re-Basin paper and Arcee’s MergeKit, and on concepts from mathematics and physics to create high-quality images without requiring the resources and computing power needed by other popular models.

“To train something like GPT-4 in India is not possible in the foreseeable future due to the limitations of compute and training parameters. Even models like DALL·E 3, Midjourney, and Imagen require enormous training parameters. Since we are a research lab, we worked on the math and used techniques like SLERP in diffusion modelling to develop Mayakriti,” said the founders in an exclusive interview with AIM.

SLERP, or Spherical Linear Interpolation, is used to interpolate between the parameters of two models in a spherical space, ensuring smooth transitions and effective merging of models.
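As a rough illustration (not the founders' actual implementation), SLERP between two parameter vectors can be sketched with the standard library. The angle is computed between normalised copies, and nearly parallel vectors fall back to plain linear interpolation:

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two parameter vectors.

    t=0 returns v0, t=1 returns v1; intermediate t values follow the
    arc between the two directions instead of the straight chord.
    """
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    u0 = [x / norm0 for x in v0]
    u1 = [x / norm1 for x in v1]
    # clamp the dot product so acos never sees a value outside [-1, 1]
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u0, u1))))
    theta = math.acos(dot)
    if theta < eps:  # vectors nearly parallel: LERP is numerically safer
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# interpolate halfway between two toy "model parameter" vectors
merged = slerp(0.5, [1.0, 0.0], [0.0, 1.0])
```

At t = 0.5 on two orthogonal unit vectors this yields roughly [0.707, 0.707], staying on the sphere rather than cutting through its interior as plain averaging would.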

They highlighted that this technique has been used in video games and graphics rendering for some time, but recent research has shown it can blend one model’s parameters into another’s so that the properties of both models are retained.

“This is how we are able to give image generation qualities that are better than existing models and are not limited by the compute resources in India,” said Agnihotri, who believes that Indians have a higher-than-average aptitude for math and should focus on research to build models that perform as well as the bigger models with fewer parameters.

Another technique used was AdapterFusion, which integrates multiple task-specific adapters into a base model, retaining its general capabilities while improving performance on specific tasks through lightweight modifications.

Anuvadini CEO Chandrasekhar Buddha said that the organisation, a Section 8 non-profit company under the Ministry of Education, recently provided FuturixAI and Quantum Works with NVIDIA A100 GPUs on the Microsoft Azure Cloud network. Currently, Mayakriti is using 8 NVIDIA A100 GPUs (80 GB each) for deployment.

“I want to let people know through Mayakriti that if we are making that possible in just a few A100 GPUs by research and utilising core concepts of physics and math, then other Indians can also develop models as good as the big players by just focusing on the fundamentals,” said Agnihotri.

Image Generation Using Mayakriti

Apart from using existing open-source datasets, the founders also developed their own. “In our colleges, our friends are graphic designers, so we asked them to go over the internet, collect some images and label them according to the prompt. This way, we collected image data from up to 3,000 images, and then multiplied that into the base variant,” they explained.

Mayakriti sets itself apart with its focus on hyper-realistic outputs and a wide range of customisable art styles. It employs several separate models, each with its own speciality, from generating anime to creative arts.

Apart from letting users write their prompts, the platform also lets them choose resolution, environment, lighting, camera settings, composition, and style for the images.

It also comes with an in-built image editor, which the founders plan to improve by adding features, including the option to add text, which can allow users to create instant posters.

Agnihotri was once a JEE aspirant who couldn’t make it to an IIT, but that setback didn’t diminish his love for math. Now, with FuturixAI and Quantum Works, the young founder aims to push AI research forward in India, drawing on math and physics.

“Google has the capability, dataset and compute, but at the same time, we have our own methods that are evolving with time,” he said.

In an attempt to make Mayakriti better than the image generation offered by Google and OpenAI, the founders plan to make the product free in the coming months for people to try globally and provide feedback.

Further, in the near future, the startup has another interesting product in the pipeline. They aim to release an AI platform that will let users train their ML models by uploading just a single dataset.

This is just the beginning; soon, we’ll see more such startups building from India, for the world!

The post Meet the Young Indian Founders Building AI Products for the World appeared first on AIM.

6 AI moves Apple can make at WWDC to leapfrog the competition


As we approach WWDC 2024 on June 10, it's clear that this will be among the most important developer events in Apple's history. The progress made by competitors such as Microsoft and Google — who have incorporated significant generative AI features into their products and services — means Apple must not merely catch up but surpass their advancements.

Recently, Microsoft integrated the GPT-4-based Copilot into its latest Qualcomm Snapdragon X Elite-based PCs, and Google is incorporating its DeepMind Gemini GenAI models across its ecosystem. Apple, as of now, is behind.


Apple's hardware, including the AI-powered M4 chip, has tremendous potential. However, hardware alone is insufficient. Consumers are looking for significant software innovations that enhance their user experience. Apple must demonstrate that its hardware advancements are crucial for the next generation of digital experiences, unlocking new AI capabilities and augmented reality interactions. What we see revealed at next week's WWDC could fundamentally change how users interact with their devices.

What do I believe Apple needs to reveal — or, at least, set in motion — this month? I can think of six things:

1. Develop a clear on-device strategy for generative AI and invest in AI-driven developer tools

Apple needs a robust strategy for integrating gen AI across its devices. Embedding a small language model into macOS, iOS, iPadOS, and visionOS will enable real-time processing, improved responsiveness, and increased privacy by keeping more data on-device. Apple should also provide robust APIs to seamlessly utilize on-device, edge, and cloud processing for natural language understanding and computer vision tasks.


At WWDC, Apple must demonstrate its intention to invest heavily in comprehensive tools, frameworks, and training programs to foster a strong generative AI developer ecosystem, not just the back-end infrastructure. This includes user-friendly gen AI SDKs, detailed documentation, and interactive learning modules such as tutorials, online courses, and coding exercises. Active developer forums and regular Q&A sessions with Apple engineers will be crucial for knowledge sharing and support.

2. Leverage ethical AI and privacy as a competitive advantage

Emphasizing ethical AI development will ensure fairness, transparency, and accountability. Ethical AI involves addressing biases in AI models, ensuring AI decisions are explainable, and adhering to principles that prevent misuse or harm. This approach will help build trust and set a high standard in the AI industry.

Apple's historical commitment to privacy can also give it a significant advantage in the AI race. Technologies like differential privacy and on-device processing would enable Apple to offer powerful AI capabilities while maintaining user trust. Differential privacy ensures that personal information cannot be traced back to individuals, and on-device processing minimizes the need to send sensitive data to cloud servers.


Providing private or family-specific AI instances would further enhance privacy and personalized interactions. For example, HomePod could recognize individual voices and offer personalized responses, while Apple TV+ could recommend shows tailored to each user. AI can coordinate family schedules, manage activities, and send reminders. Robust privacy controls and advanced parental controls ensure secure and healthy digital environments for children.

By focusing on these principles, Apple can lead by example and set new benchmarks in developing and deploying ethical AI.

3. Integrate seamlessly with third-party services and partner with multiple AI providers

To offer the best AI experiences, Apple must integrate its AI services with various third-party platforms and partner with multiple AI and service providers, not just OpenAI and ChatGPT, as the company is expected to do. For example, Siri could provide personalized shopping recommendations by integrating with Amazon and Instacart. It could remind users to reorder items or suggest products based on past behavior.

Collaborating with financial services like Plaid, especially with Apple Card, could offer comprehensive financial insights, including budgeting advice, expense tracking, and alerting users to unusual account activity.


While Watch, Fitness+, and Health are the company's preferred health platforms, excluding third-party health data providers from its AI infrastructure would be inadvisable. Partnering with health and fitness apps like MyFitnessPal and Fitbit would allow Apple's AI to offer tailored workout plans and nutrition advice while seamlessly monitoring health metrics. AI could work with platforms like Khan Academy and Coursera in education to provide personalized learning recommendations and track educational progress.

Apple should also incorporate Retrieval-Augmented Generation (RAG), which combines generative language models with information retrieval techniques to access external knowledge sources and incorporate real-time data into responses.

Partnering with multiple AI providers, including specialists in natural language processing, computer vision, and machine learning, will bring cutting-edge innovations and accelerate the development of advanced features across Apple's ecosystem. This multi-partner approach reduces the risk of over-reliance on a single provider, increases resilience, and allows Apple to tailor AI solutions to different markets and user segments.

4. Deploy AI-accelerated appliances on the edge with dedicated cloud capacity

To meet the growing demand for fast application response times, I believe Apple should consider using AI-accelerated edge devices capable of handling complex AI tasks locally. This would help reduce latency and improve overall performance. Apple's vertically integrated supply chain will likely involve AI servers powered by M2 Ultra and M4 chips, especially within its data centers. This setup would ensure seamless integration with Apple's software and provide greater control over performance and security. Localized processing can be enabled by placing these devices strategically in regional and metropolitan data centers, reducing the reliance on internet bandwidth.


Additionally, Apple could collaborate with cloud-based AI providers to manage complex AI tasks in the cloud when necessary. Combining edge and cloud resources, this hybrid approach would create a robust and scalable AI infrastructure that supports real-time AI applications such as augmented reality, language translation, and advanced data analytics.

5. Enhance proactive assistance and personalization

Apple's AI should proactively anticipate user needs and provide personalized experiences across its ecosystem. AI can analyze calendar events, habitual purchases, and traffic conditions to offer contextual reminders, like leaving early for appointments or suggesting groceries. Personalized briefings on Apple Watch could include weather updates, news summaries, traffic alerts, and schedule highlights.


AI can enhance contextual awareness by integrating with sensors and data sources on Apple devices. For example, starting a workout on Fitness+ could prompt AI to suggest a matching Apple Music playlist, monitor health metrics in real-time with Apple Watch, and provide motivational prompts. AI can analyze user behavior to offer smart recommendations for content, activities, and products, acting as a personal assistant attuned to individual tastes.

Proactive health and wellness features could remind users to take medication via the Health app, suggest wellness tips based on activity levels tracked by Apple Watch, and offer mental health support through mindfulness reminders. Personalized routines on Apple devices, like HomePod adjusting lighting based on daily habits, will enhance user experiences.

6. Ensure AI shines across all products and services

Given Apple's extensive range of consumer products, generative AI capabilities must excel across every product in the ecosystem. I think I can speak for every Apple product user when I say that enhancing Siri to make its responses more relevant and intelligent is crucial, but generative AI must also improve experiences in Apple Music, Apple News, Health, Fitness+, and TV.

In Apple Music, AI could create personalized playlists and provide music recommendations based on user preferences. In Apple News, AI could curate personalized news feeds and summarize articles.


In Health and Fitness+, AI could offer tailored workout routines and personalized wellness tips, while Apple Watch could provide deeper health insights and track fitness goals.

For Apple TV, AI could improve content discovery by recommending shows based on viewing history and offering interactive features like real-time trivia.

Leveraging AI to enhance HomeKit's capabilities is essential, especially since HomeKit isn't a market leader in home automation. AI can offer smarter home automation by predicting user behavior to automate lights, thermostat settings, and security systems.

Integrating AI across all devices ensures a seamless user experience. Preferences and data from one device would then inform recommendations on another, creating a unified ecosystem.

How Apple wins the generative AI race

Apple's success in the AI race hinges on its ability to innovate and outperform competitors. By developing a clear on-device strategy, deploying AI-accelerated devices at the edge, and partnering with multiple AI and service providers, Apple can ensure comprehensive integration and enhanced user experiences. Emphasizing privacy, ethical AI development, and continuous innovation, Apple must leverage its ecosystem to provide seamless, personalized interactions across all products.

The transition to Apple Silicon chips on the Mac at WWDC 2020 was a game-changer. The M1 significantly improved performance, power efficiency, and integration within Apple's ecosystem, giving the company greater control over its supply chain and product development. However, it didn't drastically transform the user experience, and the M4, while impressively powerful three generations on, cannot transform the user experience on the strength of its specifications alone, either.


This year's developer event, however, promises to be transformative. Generative AI could impact every aspect of Apple's ecosystem and applications, enhancing every part of the user experience over the next several years. This is more than just another update; it's about redefining what users can expect from their devices.

For consumers, the AI race is about trust, user experience, and integrating advanced capabilities into daily life. Apple has the opportunity to set new benchmarks and inspire the tech community, starting now with WWDC 2024 — a crucial moment for Apple to demonstrate its vision and commitment to leading the future of AI-driven innovation.


Zendesk Expects $3 Bn in Revenue From AI by 2027 

California-based Zendesk’s generative AI push to reshape customer service began more than a year ago. Zendesk AI, the company’s core CX product, offers various LLM-driven customer service tools and ticketing systems.

Brands across the globe use it to manage customer interactions, enhance call centre efficiency, help agents handle inquiries, prioritise tickets, and deliver a smooth customer support experience.

More recently, the company launched autonomous AI agents, workflow automation, agent copilot, and AI-powered Workforce Engagement Management (WEM) and Quality Assurance (QA) tools in Zendesk AI.

In a detailed interaction with AIM, Vasudeva Rao Munnaluri, RVP for India and SAARC at Zendesk, explained that with generative AI in its bucket and nearly two billion dollars in revenue attributable to the adoption of Zendesk AI and expansion with customers, the SaaS giant plans to reach $3 billion by 2027.

The Role of Data

“We process around eight billion requests daily, with four million agents and admins leveraging our systems. This data fuels our AI missions,” said Munnaluri.

Its proprietary data set, consisting of ticket data accumulated over years from thousands of customers, forms the backbone of its AI models. These models are designed to detect intent, sentiment, and emotions, enabling efficient ticket routing and knowledge base provision.

“This data-driven approach helps automate workflows, enhancing both agent productivity and customer satisfaction,” added Munnaluri. The company aims to automate 50% of customer interactions through autonomous agents.

Another important element of the company’s generative AI strategy is its partnership with AWS and Anthropic. The collaboration was announced last month at its flagship event Relate in San Francisco and Las Vegas.

Through AWS, Zendesk leverages Amazon Bedrock to scale generative AI applications, incorporating Anthropic’s leading model Claude 3. “We are able to offer faster, more efficient, and accurate AI features, thanks to our partnerships with AWS and Anthropic,” Munnaluri said.

Zendesk’s primary competitors include Freshworks, Zoho, and Oracle.

Expanding Footprint in India

With over 80 employees and comprehensive functional representation, the India team continues to contribute significantly to the company’s global operations. Zendesk serves over 160,000 customers in 160 countries; prominent customers in India include Cars24, Dream11, Plum, and Unacademy.

The company began operations in India in 2016, and the country has since become a crucial market within the APAC region. “India is a key market for Zendesk and one of the largest in the APAC region. We have seen continuous growth and plan to expand further,” Munnaluri emphasised.

Although expansion in India is a priority due to the market’s dynamic nature, Munnaluri said that one of the major challenges in the country is the slow adoption of AI: less than 15% of Indian businesses have an enterprise AI strategy, as per the Zendesk CX Trends Report 2024.

“Well, four out of five of my conversations in India are about AI, but I think, it’s still not caught into the overall strategy there. Over 50% still need adequate data at the view level. When you say data, it needs to be organised in a way that you have the taxonomy or the metadata available and ready to be understood by the AI models,” he added.

Institutional inertia or resistance to change towards AI adoption further complicates the matter. Additionally, Indian customers are demanding more personalised and faster services, which requires solid investment in AI and data management.

However, to make its generative AI products more diverse and accessible to a wider population in the country, Zendesk recently acquired Berlin-based AI startup Ultimate, which supports 109 languages globally, including over 13 Indian languages such as Marathi, Hindi, Telugu, Tamil, Malayalam, Kannada, Odia, and Bengali.

The acquisition is an effort to improve Zendesk’s ability to support Indian languages and break down barriers in customer service.

The post Zendesk Expects $3 Bn in Revenue From AI by 2027 appeared first on AIM.

Schools in Bengaluru Offer AI & Robotics Courses to Kids in Grades 3 and Onwards

How Schools in Bengaluru are Embracing AI & Robotics

Schools in Bengaluru have started offering AI and robotics courses to children as young as grade 3.

Chaman Bhartiya School in the city is one of many leaders proactively exploring the potential benefits of AI and robotics while advocating for responsible implementation strategies.

Their curriculum explores AI, robotics, and coding, providing students with hands-on experience and a platform to explore their creativity.

Speaking to AIM, Chaman Bhartiya School director Allan Kjaer Andersen said, “I’m from Northern Europe, but I can say that India is not behind in robotics. In fact, I would argue that what we accomplish in Bhartiya schools surpasses what schools in Copenhagen are achieving.”

He believes that from a young age, their students are exposed to this environment, and they demonstrate remarkable intelligence.

The school offers an innovative robotics kit that students can use to build anything they want. The school has also established a very good name in space technology, allowing students to begin developing satellites.

In 2020, Indus International School deployed AI-powered robots — Eagle, Eagle 1.0, Eagle 2.0 — to teach in classrooms. The AI-enabled robots teach lessons in biology, chemistry, geography, history, and physics to classes 7, 8, and 9.

AI Enters the Classroom

Elaborating on the AI programmes at CBS, MYP design and robotics facilitator Shashank Mane said, “Startle, an AI platform, is designed to help teachers with tasks like creating a lesson or unit plan.”

According to Mane, this platform can generate a complete lesson plan even when given only a little input. It also helps teachers conduct quizzes automatically. This use of AI serves as a great time saver for teachers, allowing them more time for other activities with their students.

Over the years, several initiatives have been explored to help empower teachers. For instance, The Connect Institute’s Ujjwal Shiksha recently unlocked the ability for educators to customise their teaching methodologies. This was done to ensure that each student’s unique learning requirements were addressed, thus optimising their academic and personal growth.

Another interesting initiative is Infinity Learn’s Virtual Intelligent System for Tailored Academics, or VISTA for short, designed to harness AI to address issues encountered by both educators and learners. The objective of the initiative is to “empower educators for tomorrow”.

From conceptualising lesson plans to identifying resources and extracting insights, the tool ensures educators are equipped with the most advanced digital tools, enabling them to recalibrate their teaching strategies for a new era.

IT Parents Want AI Children

Bengaluru is home to almost 47.31% of IT professionals over 30, many of whom are parents eager to equip their children for the tech-centric future.

To assure parents of the benefits of an AI-based education, CBS has established an AI Experience Centre. This facility showcases a humanoid robot powered by AI, enabling parents to interact and understand the inner workings of how AI operates. From electronics to voice and facial recognition systems, every element within the lab embodies AI technology.

Through first-hand experiences, parents can gain insight into both the advantages and drawbacks of AI, affirming their confidence in the education provided to their children.

As Bengaluru continues to evolve as a tech hub, the AI education approach by schools like CBS and Indus International ensures that the next generation is not just prepared for the future but poised to shape it.

The post Schools in Bengaluru Offer AI & Robotics Courses to Kids in Grades 3 and Onwards appeared first on AIM.