NaMo App Introduces AI Chatbot NaMo AI, Offers Quick Answers to Government Schemes

Prime Minister Narendra Modi’s NaMo App has launched a new feature called NaMo AI, using generative AI to share details about the government’s flagship schemes and their impact. This AI-powered tool allows users to ask questions about PM Modi and receive quick summaries.

Last night, I was checking this amazing feature on the @NamoApp , called the NaMo AI, an AI assistant which helps you with information about the various schemes of the Modi Govt.
I got instant search results on number of PM Surya Ghar applicants from Assam.
Must try this out! pic.twitter.com/7qEbi2i7d8

— Himanta Biswa Sarma (Modi Ka Parivar) (@himantabiswa) April 12, 2024

For example, users can inquire about initiatives like “Har Ghar Jal” and get insights. The AI also responds to questions about PM Modi’s popularity and awards received.

NaMo AI aims to enhance public connectivity and provide useful information during elections. It can be accessed on desktop and mobile devices via the Narendra Modi website or the mobile app, offering responses in PDF format for easy sharing and access.

Importantly, NaMo AI’s utility extends beyond information dissemination: it could aid voters during the upcoming elections by providing essential details about the Prime Minister’s constituency and developmental initiatives.

NaMo AI is likely the first chatbot of its kind used by a prime minister to strengthen public engagement, though it is wise to double-check its answers on official platforms for accuracy. The app is convenient for on-the-go use, accessible from anywhere at any time.

As India gears up for the Lok Sabha election this year, a battle between AI-generated and manually created content campaigns is in the air. AI is increasingly being used to help politicians reach voters through phone conversations or chatbots, and to draft ads and messages about political opponents.
In an effort to embrace AI in Indian politics, Hari Balasubramaniam, an angel network investor, shared an AI-generated image of Prime Minister Narendra Modi, created using Midjourney, on LinkedIn. The post reflects a curiosity about how AI is reshaping our perception of political leaders.

The post NaMo App Introduces AI Chatbot NaMo AI, Offers Quick Answers to Government Schemes appeared first on Analytics India Magazine.

Popular Google Certification for All Areas in the Tech Industry

Image by Author

When people say they work in the tech industry, many assume they are software engineers, know 3 different programming languages, and can build applications overnight. But the tech industry is way more than that.

As it continues to grow, we not only need software engineers and data scientists, but also cybersecurity analysts, marketers, design professionals, and more. If you’re looking for a career change and want to keep your options open outside of coding, continue reading…

Data Analytics

Link: Google's Data Analytics Professional Certification

Let’s start with the most technical option, for those interested in working with data: preparing and analysing it to support decision-making.

Google's Data Analytics professional certification allows you to understand the practices and processes used by associate data analysts. You will learn how to clean data, analyze it, and create visualizations using tools such as SQL, R programming, and Tableau.

As organisations come to understand the value of data, the value of data analysts will only continue to grow.

Project Management

Link: Google's Project Management Professional Certification

The tech industry moves quickly, with new projects being released every day. This is where project management comes in, and it is important in any industry. Without project management, many of these new tools would never have been deployed for us to use.

Project management is the application of processes, methods, skills, knowledge, and experience to achieve specific objectives and ensure that a project is successful. In Google's Project Management Professional Certification, you will learn how to effectively document projects, cover the foundations of Agile project management and Scrum, practise strategic communication, and develop your problem-solving skills.

Cybersecurity

Link: Google's Cybersecurity Professional Certification

Data is the new gold, and just like gold, organisations have processes and tools in place to ensure its security.

In this Google Cybersecurity Professional Certification, you will learn about the best cybersecurity practices and their impacts on organisations. You will identify common risks and vulnerabilities and apply techniques on how to mitigate them.

Cybersecurity is all about protection, so dive into protecting networks, devices, data, and people with a variety of tools as well as hands-on experience with Python, Linux, and SQL.

IT Support

Link: Google's IT Support Professional Certification

There is much more to the tech industry than meets the eye. It’s like building a house from scratch: every contractor is responsible for ensuring that the different levels meet the gold standard based on their expertise. This is where IT support comes in.

In this Google IT Support Professional Certification, you will learn about the day-to-day tasks that IT support deals with which include computer assembly, wireless networking, installation, and customer service. You will also learn how to identify problems to troubleshoot and debug using tools such as Linux, Domain Name Systems, Command-Line Interface, and Binary Code.

Marketing & E-commerce

Link: Google's Marketing & E-commerce Professional Certification

You have your software engineers building products. You have your data analysts analyzing the data. You have the project managers ensuring that the product lands in production. You have cybersecurity and IT support ensuring that everything runs smoothly and the organisation stays protected. So everything is in place, what now?

Selling the product. Making sure everybody knows about it. Making the money from the great product! This is where Google's Marketing & E-commerce Professional Certification comes in.

You will learn about the fundamentals of digital marketing and e-commerce, and how to attract and engage customers through a variety of digital marketing channels. You will then learn how to measure the performance of these through analytics and present insights.

Wrapping it Up

One industry, with five possible ways to get your foot in the door. All of these professionals are required, and together they make up an organisation’s success stack.

The tech world will continue to grow and with that, there are more areas and sectors you can transition into.

Start learning today!

Nisha Arya is a data scientist, freelance technical writer, and an editor and community manager for KDnuggets. She is particularly interested in providing data science career advice or tutorials and theory-based knowledge around data science. Nisha covers a wide range of topics and wishes to explore the different ways artificial intelligence can benefit the longevity of human life. A keen learner, Nisha seeks to broaden her tech knowledge and writing skills, while helping guide others.

Meta Releases AI on WhatsApp, Looks Like Perplexity AI

Meta has quietly released its AI-powered chatbot on WhatsApp, Instagram, and Messenger in India and various parts of Africa. The feature is slowly rolling out for both iOS and Android users. It is possibly powered by Llama 2, or the upcoming Llama 3.

It is currently present on the top search bar in the WhatsApp UI.

Meta starts limited testing of Meta AI on WhatsApp in different countries!
Some users in specific countries can now experiment with the Meta AI chatbot, exploring its capabilities and functionalities through different entry points.https://t.co/PrycA4o0LI pic.twitter.com/BB2axOGnEj

— WABetaInfo (@WABetaInfo) April 12, 2024

Funnily enough, Aravind Srinivas, the founder and CEO of Perplexity AI, posted on X a screenshot of the chatbot, whose interface looks a lot like Perplexity’s UI design. “Honored and proud of our designers,” he said.

Honored and proud of our designers! pic.twitter.com/pU3CLbZNiE

— Aravind Srinivas (@AravSrinivas) April 12, 2024

Regardless, the integration operates separately from users’ private conversations on WhatsApp. What users type in the search bar remains confidential and is not shared with Meta AI unless they intentionally send a query to the chatbot.

The topics suggested by Meta AI through the search bar or conversation are randomly generated and do not use user-specific information. The search bar continues to function for its primary purpose, enabling users to find chats, messages, media, and contacts within the app.

Users can still search their conversations for specific content without interacting with Meta AI, maintaining the same level of ease and privacy as before. Additionally, personal messages and calls remain end-to-end encrypted, ensuring neither WhatsApp nor Meta can access them, even with the Meta AI integration.

Meta is increasing its AI initiatives following progress made by major tech companies like OpenAI. Meta has begun piloting its AI chatbot in various markets, including the U.S., and is now extending the testing to India, its largest market with more than 500 million Facebook and WhatsApp users.

Last week, the company also confirmed that it would release the next version of its AI model, Llama 3, within this month.

Patchscopes Could be an Answer to Understanding LLM Hallucinations 

Patchscopes Google Research

Google Research recently released a paper on Patchscopes, a framework that consolidates a range of past techniques for interpreting the internal mechanisms of large language models. It is designed to help understand the behaviour of LLMs and their alignment with human values.

Authored by Google researchers Asma Ghandeharioun, Avi Caciularu, Adam Pearce, Lucas Dixon, and Mor Geva, Patchscopes utilises the inherent language skills of LLMs to offer intuitive, natural language explanations of their concealed internal representations. It even offers explanations of their internal workings, addressing key questions about their operations and overcoming limitations of previous interpretability methods.

An Understanding of Hallucinations?

Though the initial focus of the application of Patchscopes is limited to the natural language domain and the autoregressive Transformer model, the potential applications are broader.

The researchers believe that Patchscopes can be potentially used for detecting and correcting model hallucinations, exploring multimodal representations (including image and text), and investigating how models formulate predictions in intricate scenarios.

A Patchscopes configuration can be explained in four steps: the setup, the target, the patch, and the reveal. A standard prompt is presented to the model, and a secondary prompt is designed to extract specific hidden information. Inference is performed on the source prompt, the hidden representation is injected into the target prompt, and the model then processes the augmented input, revealing insights into its comprehension of the context.

Patchscopes Illustration. Source: Google Research Blog
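The four steps above can be sketched in toy form. Everything here is a stand-in: the "model", its "hidden representations", and the prompts are illustrative placeholders, not the actual Patchscopes implementation, which operates on real transformer activations.

```python
# Toy sketch of the four Patchscopes steps (setup, target, patch, reveal).
# A "hidden representation" here is just a labelled string produced by a
# fake forward pass, standing in for a real layer activation.

def forward(prompt):
    """Fake forward pass: returns a 'hidden representation' per token."""
    return {tok: f"h({tok})" for tok in prompt.split()}

def patchscope(source_prompt, target_prompt, token):
    # 1. Setup: run inference on the source prompt.
    source_hidden = forward(source_prompt)
    # 2. Target: a secondary prompt designed to elicit the hidden info.
    # 3. Patch: inject the source token's hidden representation into it.
    patched_prompt = target_prompt.replace("[X]", source_hidden[token])
    # 4. Reveal: the model processes the augmented input and verbalises
    #    what the representation encodes.
    return f"model_output({patched_prompt})"

out = patchscope("The Eiffel Tower is in Paris", "[X] refers to", "Paris")
```

In a real setup, the reveal step would return the model's natural-language description of what the patched activation encodes.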

The paper only scratches the surface of the opportunities this framework creates, and further studies are needed to understand its applicability across domains.

Enterprise of the Future: Disruptive Technology = Infinite Possibilities

Artificial Intelligence (AI) has been around for many years and is no stranger to the technology world. Yet, the introduction of generative AI (GenAI) in 2022 has sparked new interest in the transformative potential of AI and its application across sectors.

Clearly, GenAI is challenging the way we traditionally operate, spearheading an acceleration of innovation underpinned by increasing investments and interests around AI usage. Everyone is eager to explore GenAI as it reshapes roles and increases efficiency by augmenting people’s capabilities.

Generative AI as capability enabler and augmenter

Areas with scope for augmenting the capabilities and potential of people – call-centre agents, frontline technicians, back-office employees, or software developers – are the ones where GenAI is steadily gaining traction. The common use cases include tasks such as summarisation, sentiment analysis, Retrieval-augmented generation (RAG)-based knowledge searches, and automated email responses, to name a few.

The extension of these use cases is copilots or workbenches that enhance the way people traditionally work. For instance, business analysts, particularly in domains such as finance, are required to browse through various internal and external data and information sources, search through structured and unstructured data in multiple formats to perform analysis, derive meaningful insights, summarize, and prepare a report or presentation for business leaders.

These activities are now being supported by GenAI-powered workbenches. These workbenches leverage GenAI agents to simplify the process of searching across repositories, pulling data from multiple sources with diverse structures and formats. They provide analysts with user-friendly interfaces and facilitate efficient data access and navigation via natural language searches, allowing them to focus on essential aspects of validation and analysis.
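As a rough illustration of the retrieval step such a workbench depends on, the sketch below ranks documents by naive keyword overlap and assembles an LLM prompt from the top matches. The document names, the scoring, and the prompt template are all illustrative assumptions, not any specific product's design; a production system would use embedding-based retrieval.

```python
# Toy retrieval-augmented flow: search heterogeneous repositories,
# then build a grounded prompt for an LLM to summarise or analyse.

documents = {
    "finance_report.pdf": "Q3 revenue grew 12% driven by subscription sales.",
    "hr_policy.docx": "Employees accrue 1.5 days of leave per month.",
    "market_notes.txt": "Competitor pricing fell 5% across the segment.",
}

def retrieve(query, k=2):
    """Rank documents by count of words shared with the query."""
    q = set(query.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda kv: len(q & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query):
    # Concatenate the retrieved passages as context for the model.
    context = "\n".join(text for _, text in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How did revenue grow this quarter?")
```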

We see a similar transformation happening in the software development area, where now we have copilots that are helping speed up the development process with code suggestions and recommendations. While these use cases increase productivity and bring in efficiency improvements, there’s a huge potential for improving and enhancing employee experience in an organisation.

This is where large language models (LLMs) can play a vital role.

Current outlook: The need to simplify enterprise application interfaces

In the current state, employees are required to navigate multiple enterprise application interfaces and browse many systems to get information for their day-to-day activities. It could be a simple activity such as fetching clarification on leave policies, checking for the leave balance or reporting an incident to have a laptop or printer issue fixed.

To accomplish their daily responsibilities, employees often end up searching for standard operating procedures and other relevant data and knowledge across multiple enterprise application interfaces. This results in a steep learning curve: employees must remember where specific data and information reside, understand the functionality of multiple enterprise applications, and master the way each one operates.

This fragmentation of information across different data sources and the labyrinth of applications that need to be navigated to perform daily activities leads to inefficiencies and confusion that ultimately impact productivity. Moreover, changes within an organisation necessitate training on new systems and new ways of working, requiring employees to learn and adapt continuously.

The promise and the potential of LLMs

Large language models (LLMs) hold the potential to revolutionise current enterprise systems by enabling conversational interfaces for employees and software agents that can dynamically decide the next course of action based on what the employee is looking for. LLMs enable the development of applications that can understand and process natural language inputs allowing users to interact more intuitively.

These LLM-based applications will have the ability to understand context and intent and direct the relevant application flow, freeing the user from the necessity to understand and remember complex interfaces and menu systems. This shift will enhance productivity by reducing the time and effort formerly required to navigate through complex systems. This will also simplify and enable organisational changes without impacting the end users.

One of the most compelling aspects of integrating LLMs into the system is their ability to shield employees from the underlying complexities of multiple systems, while enabling them to complete the tasks at hand quickly and efficiently. These LLMs dynamically interpret the requests and perform the relevant tasks based on the context of the interactions.

Future outlook: Microservices integration with multi-agent LLMs

The future state will embrace and evolve microservice-based architectures to the next level where we will have LLM agents mapped to specialised tools doing specific atomic tasks that will be orchestrated on the fly. As users converse with the application, an orchestrator agent understands the context and intent, and hands over control to other agents that will in turn converse with the user and collaborate with the other agents to perform the required functions and finish the task at hand.
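A minimal sketch of that orchestration pattern is below. The agent names, the intents, and the keyword-based routing are all illustrative assumptions; a real system would use an LLM both for intent detection and inside each agent.

```python
# Toy orchestrator: one agent per atomic task, with an orchestrator that
# detects intent and hands the request to the right agent.

class Agent:
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler

    def run(self, request):
        return self.handler(request)

def leave_balance_agent(request):
    return "You have 12 days of leave remaining."   # placeholder HR lookup

def it_ticket_agent(request):
    return "Ticket #101 raised for: " + request     # placeholder ITSM call

class Orchestrator:
    def __init__(self):
        self.agents = {
            "leave": Agent("leave", leave_balance_agent),
            "it": Agent("it", it_ticket_agent),
        }

    def detect_intent(self, request):
        # Stand-in for LLM-based intent classification.
        return "leave" if "leave" in request.lower() else "it"

    def handle(self, request):
        return self.agents[self.detect_intent(request)].run(request)

orchestrator = Orchestrator()
reply = orchestrator.handle("How much leave do I have left?")
```

The point of the pattern is that the user converses once, while the orchestrator decides on the fly which specialised agents collaborate to finish the task.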

In the future, LLMs will play a crucial role in making our workplace more intuitive, flexible, and efficient. The journey toward a user-friendly intelligent enterprise has just begun, where the possibilities are as extensive as the large language models themselves.

As we continue to explore and integrate GenAI into our work, it becomes imperative to leverage this disruptive technology in a controlled and regulated manner. What lies ahead is exciting as enterprises now stand on the brink of significant disruption in working practices.

The views in this article are those of the author and do not necessarily reflect the views of the global EY organisation or its member firms.

A Bunch of 20-year-old Programmers are Driving India’s Open Source AI

There has been a lot of buzz lately around India’s AI moment. Many companies are trying to build their own language models, while others are fine-tuning existing ones for enterprise solutions. Notably, most of the new AI models in Indic languages are being built by engineers who are either still in university or just in their 20s.

“The young students that I meet in India are amazing. They’re all excited and eager to understand the shift from traditional computer science to learning-based approaches for solving all kinds of problems,” said Jeff Dean, chief scientist of Google DeepMind and Google Research when he visited India in February.

Take, for example, Devika, the open source alternative to Devin, created by Mufeed VH, who thought of the idea as a joke. The 21-year-old from Kerala came across Devin while experimenting with different tools, was impressed by its demo, and thought, “what could be the name of an Indian version of Devin, and I got to Devika. I just posted it out like a joke,” he told AIM.

It took him just 20 hours to make the project, taking time from his two startups Lyminal and Stition.AI, which are both focused on delivering AI solutions for companies.

Similarly, Adithya S Kolavi, a 20-year-old AI engineer studying at PES University, created the Indic LLM Leaderboard. This came after he founded CognitiveLab last year and released Ambari, an open source bilingual Kannada model built on top of Llama 2, an addition to the existing multilingual Indic AI models.

The Indic open source renaissance

Speaking of multilingual AI, that has been the biggest focus of the open source community in India. Ever since Sam Altman visited India and spoke about the capabilities of OpenAI’s models, the Indian AI community has taken it as a challenge to build such models themselves.

Initiatives such as Tech Mahindra’s Project Indus, led by Nikhil Malhotra, and the IIT Bombay-led BharatGPT are focused on building models from scratch. Both also intend to open source their AI models as soon as they are up and running, which would help grow the community as well.

But the open source AI developers in their 20s got their hands on models such as Llama, and now Mistral and Gemma to build models for specific use cases in Indic languages.

“It all started when Meta’s Llama dropped, and then we saw all these Indic language models coming up,” said Adarsh Shirawalmath, a second-year BTech student at Vellore Institute of Technology and the creator of Kan-LLaMA, in an exclusive interaction with AIM. He is also the co-founder of Tensoic, along with Raghav Ravishankar.

“The thing is that people have a lot of flexibility,” explained Kolavi from CognitiveLab. “Even though we don’t have a lot of experience like the others, the lean team methodology works for us. We can quickly test out stuff. You can show it to the world and you get feedback so then you can enjoy it,” he added.

Similarly, Abhinand Balachandran, a Kaggle Grandmaster also in his 20s, created Tamil Llama; he aims to build AI models in more Indic languages in the future.

Just getting started

Then there is Satpal Singh Rathore, the creator of Gajendra, who started Bhabha.AI, the company that recently built Indic Chat, a Hugging Chat-style interface for Indic LLMs. The website hosts most of the open source Indic AI models for people to test, including almost all of the fine-tuned models mentioned above.

Apart from building AI models, Arko Chattopadhyay, the co-founder of Xylem AI, is helping developers by offering an inference stack for deploying and scaling LLMs in production.

Beyond Indic LLMs, young Indian talent is also focused on developing self-driving cars. Mankaran Singh, a Bengaluru-based developer, converted his modified Maruti Alto K10 into an autonomous vehicle using a second-hand Redmi Note 9 Pro, made possible by Flowpilot, his open source driver-assistance system.

Apart from students and recent graduates, several other initiatives, such as Ravi Theja Desetty’s Telugu LLM Labs, Shantipriya Parida’s Odia Llama, and the well-known Sarvam AI building models such as OpenHathi on top of Llama, are also accelerating the open source AI race in India.

All of these initiatives are helping the open source community grow and it is just getting started.

WhatsApp trials Meta AI chatbot in India, more markets

By Manish Singh

WhatsApp is testing Meta AI, its large language model-powered chatbot, with users in India and some other markets, signalling its intentions to tap the massive user base to scale its AI offerings.

The company recently began testing the AI chatbot, until now available only in select markets including the U.S., with some users in India, many of whom confirmed the rollout. India, home to more than 500 million WhatsApp users, is the instant messaging service’s largest market.

Meta confirmed the move in a statement. “Our generative AI-powered experiences are under development in varying phases, and we’re testing a range of them publicly in a limited capacity,” a Meta spokesperson told TechCrunch.

Meta unveiled Meta AI, its general-purpose assistant, in late September. The AI chatbot is designed to answer user queries directly within chats as well as offer them the ability to generate photorealistic images from text prompts.

WhatsApp’s massive global user base, boasting over 2 billion monthly active users, presents Meta with a unique opportunity to scale its AI offerings. By integrating Meta AI into WhatsApp, the Facebook parent can expose its advanced language model and image generation capabilities to an enormous audience, potentially dwarfing the reach of its competitors.

The company separately confirmed earlier this week that it will be launching Llama 3, the next version of its open source large language model, within the next month.

Omg Meta just casually dropped it's AI powered by Llama on WhatsApp 😳🤯 pic.twitter.com/6DqnnSJ07l

— Nikhil Kumar (@nikhilkumarks) April 12, 2024

Not All Tokens Are What You Need, Say Microsoft Researchers

Microsoft researchers have challenged the traditional approach to language model (LM) pre-training, which uniformly applies a next-token prediction loss to all tokens in a training corpus. Instead, they propose a new language model called RHO-1, which utilises Selective Language Modeling (SLM).

Click here to check out the GitHub Repository.

This method selectively trains on useful tokens that align with the desired distribution, rather than attempting to predict every next token.

They have introduced the Rho-Math-v0.1 models, Rho-Math-1B and Rho-Math-7B, which achieve 15.6% and 31.0% few-shot accuracy on the MATH dataset, respectively, matching DeepSeekMath with only 3% of the pretraining tokens.

Rho-Math-1B-Interpreter is the first 1B LLM that achieves over 40% accuracy on MATH.

Rho-Math-7B-Interpreter achieves 52% on MATH dataset, using only 69k samples for fine-tuning.

RHO-1’s SLM approach involves scoring pre-training tokens using a reference model and training the language model with a focused loss on tokens with higher excess loss. This selective process allows RHO-1 to improve few-shot accuracy on 9 maths tasks by up to 30% when continually pre-training on a 15B OpenWebMath corpus.

The model also achieves state-of-the-art results on the MATH dataset after fine-tuning and shows an average enhancement of 6.8% across 15 diverse tasks when pre-training on 80B general tokens.

Traditional training methods often filter data at the document level using heuristics and classifiers to improve data quality and model performance. However, even high-quality datasets may contain noisy tokens that negatively impact training.

The SLM approach directly addresses this issue by focusing on the token level and eliminating the loss of undesired tokens during pre-training.

SLM first trains a reference language model on high-quality corpora to establish utility metrics for scoring tokens according to the desired distribution. Tokens with a high excess loss between the reference and training models are selected for training, focusing the language model on those that best benefit downstream applications.
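The token-scoring step described above can be illustrated with a toy example. The per-token losses below are made-up numbers standing in for cross-entropy from a real reference model and the model being trained; only the excess-loss selection logic mirrors the method described in the paper.

```python
# Toy sketch of Selective Language Modeling's token scoring: keep only
# the tokens where the training model lags the reference model most.

tokens = ["solve", "x", "+", "2", "=", "5", "lorem", "ipsum"]
training_loss  = [2.1, 1.8, 0.9, 2.5, 0.7, 2.9, 3.0, 3.1]  # current model
reference_loss = [1.0, 0.9, 0.8, 1.1, 0.6, 1.2, 2.9, 3.0]  # reference model

# Excess loss: how much worse the training model is than the reference.
excess = [t - r for t, r in zip(training_loss, reference_loss)]

# Keep the top-k tokens by excess loss; only these would contribute to
# the next-token prediction loss during pre-training.
k = 4
selected = sorted(range(len(tokens)), key=lambda i: excess[i], reverse=True)[:k]
selected_tokens = [tokens[i] for i in selected]
```

Note how the noisy "lorem ipsum" tokens score low: the reference model finds them just as hard as the training model does, so they carry no useful signal and are filtered out.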

In the study, tokens selected by SLM during pre-training were closely related to mathematics, effectively honing the model on the relevant parts of the original corpus. Investigating token filtering across various checkpoints, the researchers found that tokens selected by later checkpoints tend to have higher perplexity towards the later stages of training and lower perplexity in earlier stages.

The discussion section highlights future work, including potential generalisation of SLM beyond mathematical domains, scalability of the technique to larger models and datasets, and exploration of whether training a reference model is necessary for scoring tokens.

Improvements upon SLM may include reweighting tokens instead of selecting them and using multiple reference models to reduce overfitting.

SLM could be extended to supervised fine-tuning to address noise and distribution mismatches in datasets, and to alignment tasks by training a reference model that emphasises helpfulness, truthfulness, and harmlessness to obtain a natively aligned base model during pre-training.

Building Open Source LLMs is Not for Everyone 

Falcon- TII- UAE

Falcon creators believe that building open source LLMs from scratch is not for everyone. “You need a lot of funding to sustain open source and we believe that not everyone will be able to do it,” said Hakim Hacid, executive director and acting chief researcher at Technology Innovation Institute (TII), in an exclusive interaction with AIM.

Citing Mistral, Hacid said that it is no longer open source: initially “it started with open source, and now it’s not anymore,” as these things require huge investments and a lot of commitment at the policy level.

Interestingly, Mistral AI recently released a large language model Mixtral 8x22B that has 176 billion parameters and a context length of 65,000 tokens.

Falcon For the Kill

The Falcon makers are confident that their open source strategy will remain sustainable, thanks to the commitment of management and leadership who genuinely believe in it.

“It’s not just a matter of building models and having a sort of a final product that is based on the models, but it’s about the impact that this research can have on humanity and then allowing people to access this asset that is the AI,” said Hacid, who had an interesting take on the name of their LLM model.

“For us, the name actually comes from the national bird, the Falcon. It symbolises courage and determination. So, we wanted to associate those characteristics with our Falcon model, and we believe it does so effectively,” said Hacid.

The Falcon 180B model is available on Amazon SageMaker JumpStart for one-click deployment and inference.

Eyes India

“India is our neighbour, and we have a lot of colleagues coming from India,” said Hacid when talking about the UAE-India relationship.

Hacid also pointed out that in India, a number of models are being fine-tuned specifically for Urdu and other languages. “We have internally initiated on multilinguality to integrate Urdu because we have a lot of immigrants in the UAE who are coming from India. The Urdu language is as important as the Arabic language in the UAE.”

“So we are working on that, trying to get a little bit more content to specialise Falcon on the Indian side.”

Among the emerging Indic language initiatives, such as AI4Bharat, Urdu is offered as one of the supported languages.

Making UAE an ‘AI’ Nation

The UAE is clearly moving from being an oil-rich country to a data- and knowledge-driven economy. “There is a vision that we want the country to go from an oil producer to a knowledge economy,” said Hacid.

He also said that the idea is to move from a culture where they were used to buying and consuming the technology, to a situation where they actually produce the technology and eventually export that technology to the external world.

In addition to this, the UAE government has been providing a conducive environment for boosting AI developments in the country. Hacid said that the country is also heavily investing and bringing scientists to the UAE to do their research and provide support in all ways possible.

Challenges Galore

It is easier said than done. Hacid believes that one of the biggest problems that all companies working with generative AI will face is the availability of the right talent. “Modern AI is a sort of young discipline. You cannot find people with 20 years of experience in this field,” he added.

Further, he said that experts with two to three years of experience are the best one can expect to find, and that the abundance of opportunities worldwide adds to the complexity of the situation. “So, getting the right people will always be a challenge,” he said.

Hacid also emphasised data localisation and how regulations can pose a challenge for training models. “Data will always be an issue because nowadays we see some regulations coming around limiting the access to data. So that could actually cause some delays in the development of this technology,” he said.

Compute is another challenge Hacid pointed out. While he believes they have good access to compute, its environmental impact is a critical concern. “We are working hard to make sure that the models we build today don’t negatively impact the environment tomorrow,” said Hacid.

Interestingly, the UAE bought thousands of NVIDIA GPUs last year.

AGI, Next?

TII is now looking at building even smaller models and focusing on multimodality – going the Microsoft way.

“We were asking at some point the question as to how big we should go. I think now the question is how small we could go while keeping a capable model,” said Hacid, adding that they are exploring that path.

Further, he said that they are making models smaller because, again, “if we want the pillar of the deployment to succeed, we need to actually have models that can run in devices, and in infrastructure that is not highly demanding.”

From the perspective of serving these models to end users, TII and ATRC (Advanced Technology Research Council) built a startup, AI71, which specialises in marketing and supporting the Falcon LLMs.

In terms of multimodality, Hacid said that the models will be trained to not only understand text but also images, video, sound, graphs, time series and other types of data.

“We believe if we want to go towards artificial general intelligence, we need to have systems that are able to digest and get a sense out of different data sets. This is how we are learning as humans at the end of the day,” quipped Hacid.

The post Building Open Source LLMs is Not for Everyone appeared first on Analytics India Magazine.

When Two AI Agents Communicate, Beckn Can be the Contracting Infrastructure

Similar to how HTTP powers the internet, facilitating worldwide web connectivity, GSM ensures interoperability in mobile communications, and SMTP governs email exchanges across various platforms, Beckn Protocol enables interoperability in economic transactions.

Beckn’s interoperability allows different sellers and buyers to talk to each other on the Open Network for Digital Commerce (ONDC) platform. Since its genesis, Beckn’s open and interoperable principles have been leveraged by Namma Yatri and other open education, energy, and healthcare networks.

A global first, the Beckn Protocol was conceived by Nandan Nilekani, Pramod Varma, and Sujith Nair. The Foundation for Interoperability in Digital Economy (FIDE) is the genesis author of the Beckn Protocol specification and the angel donor for its evolution.

“Beckn, if I have to put it in a few words, it is the language of transactions. It is essentially a standardised language comprising a concise set of specifications condensed into virtual documents spanning approximately two to two and a half pages.

“When implemented across interfaces, platforms, and systems, it establishes a framework for seamless communication between external systems. This fosters the creation of open networks across various domains,” Sujith Nair, CEO and co-founder at FIDE, told AIM.

Presently, platform aggregators like Uber or Amazon dominate, restricting interactions to users within the same platform. Beckn Protocol aims to change this by providing a common language for decentralised consumer-provider interactions across industries.

This open standard enables communication between consumers and providers regardless of the platform, fostering a more inclusive and interconnected digital ecosystem.
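To make the “common language” idea concrete, here is a minimal sketch of a Beckn-style request envelope. The field names approximate the open Beckn specification’s context-plus-message structure; the identifiers and payload shapes below are illustrative assumptions, not the authoritative schema.

```python
import json
import uuid
from datetime import datetime, timezone

# Actions roughly mirror the Beckn transaction lifecycle (illustrative subset).
BECKN_ACTIONS = {"search", "select", "init", "confirm", "status", "cancel"}

def beckn_envelope(action, domain, bap_id, bpp_id, message):
    """Build a Beckn-style request: a shared `context` block that lets any
    buyer app (BAP) talk to any seller app (BPP), plus the action-specific
    `message` payload."""
    if action not in BECKN_ACTIONS:
        raise ValueError(f"unknown Beckn action: {action}")
    return {
        "context": {
            "domain": domain,                  # e.g. "mobility", "retail"
            "action": action,                  # maps to the API endpoint called
            "bap_id": bap_id,                  # buyer-side platform identity
            "bpp_id": bpp_id,                  # seller-side platform identity
            "transaction_id": str(uuid.uuid4()),
            "message_id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
        "message": message,
    }

# A buyer app searching for a ride; any seller platform on the network can
# parse the same envelope, regardless of who built it.
request = beckn_envelope(
    "search", "mobility", "buyer-app.example", "seller-app.example",
    {"intent": {"fulfillment": {"start": "MG Road", "end": "Airport"}}},
)
print(json.dumps(request, indent=2))
```

Because every participant agrees on the envelope, interoperability falls out of the shared structure rather than any single aggregator’s API.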

Beckn Establishes Legal Ground in the Age of AI

Nair believes Beckn has a very important and powerful role to play in the age of AI. Today, AI is in its copilot phase, actively supporting humans in various tasks, ranging from aiding customer service representatives to assisting developers with coding to augmenting medical diagnoses.

“But pretty soon AI is going to take action on your behalf, it will book a cab for you, book a hotel or place an order online on your behalf. But when that happens, how do you verify that the AI has indeed made the booking?” Nair asked.

A foundational contract structure is needed to validate and formalise the booking as well as establish the liabilities of both parties. “This necessitates a programmable, machine-readable method of contracting in real-time, and achieving this requires an interoperable protocol like Beckn.”

Beckn ensures there’s a verification process for these transactions when two AI agents (Buyer and Seller) are talking to each other.

“You wouldn’t want to find yourself in a situation where you arrive at a hotel only to be told they have no record of your booking. Similarly, if you need to cancel, who agreed to the terms? Did you or the AI agents?” Nair enquired.

A protocol infrastructure clarifies these matters unequivocally, with each transaction accompanied by a digitally signed micro contract detailing the terms explicitly.

“This establishes a record, instilling trust based on the contractual agreements regarding what’s promised, what’s not, and the cancellation terms. This eliminates ambiguity and establishes a definitive ground truth without relying on verbal communication between agents, thereby enhancing the positive aspects of AI as a contracting infrastructure,” Nair said.

Beckn will enhance AI’s efficacy

Moreover, employing Beckn as a contracting infrastructure not only enhances AI’s efficacy but also bolsters its accountability aspects, amplifying both its positive impact and its responsible usage.

It accomplishes these tasks while safeguarding users’ privacy and confidentiality, and ensuring responsible contracting practices.

By developing internal chatbots, FIDE has already shown that AI can actually place an order on platforms like ONDC. While the concept is still in the testing phase, the non-profit has successfully conducted demos to validate its feasibility.

“Importantly, the concept of Beckn is AI-neutral, meaning it can function with any Large Language Model (LLM) due to its adherence to open standards. This allows developers to implement it across various cloud platforms and LLMs.

“While we’ve showcased this approach through demos, it’s a collaborative effort within the wider community to develop further and refine it,” Nair said.

He adds that Beckn also offers the advantage of extending AI capabilities to all nodes of the network. “For instance, consider the case of ONDC, where seller apps may not be AI-enabled. However, if there’s a smart buyer app with AI capabilities, it can still provide an AI experience to users.

“Even though the seller apps may not have AI-ready catalogues, the fact that their data is accessible on the network enables the buyer app to offer an AI-driven experience. This means that non-AI-enabled seller platforms can still benefit from the AI capabilities of the buyer app. Instead of each node attempting AI individually, having AI at one end of the network benefits all nodes,” Nair explained.

Powering open networks, locally and globally

Beckn Protocol is an open-source specification and anybody in any part of the world can leverage it. Given the success of Namma Yatri and Kochi Open Mobility Network, cities in Europe like Paris, Zurich and Amsterdam are already looking to adopt Beckn to make urban mobility services interoperable.

At home, Beckn is also powering the Unified Energy Interface (UEI) initiative, which enables users to locate nearby EV charging stations and conduct payments seamlessly across multiple service providers.

Another open network called VISTAAR, an interoperable and federated public network dedicated to agricultural information and advisory services, is being developed by leveraging Beckn. The initiative is driven by the Ministry of Agriculture; other parties involved include Apurva.ai, the non-profit Wadhwani AI, and the Nandan Nilekani-backed EkStep Foundation.

The post When Two AI Agents Communicate, Beckn Can be the Contracting Infrastructure appeared first on Analytics India Magazine.