The Impact of Lok Sabha Election on India’s AI Progress

As the world’s largest election begins, political parties have confirmed the use of AI tools for campaigning. Political strategists claim that AI has become a significant asset, with the BJP leading in its utilisation for electoral purposes. The Congress, however, is using it minimally, if at all.

“The BJP has notably excelled in technology adoption since its inception, positioning itself as a frontrunner in AI integration. In contrast, other political parties have been slower in embracing this technology,” said MJ Srikant, Political Consultant and Election Strategist, sharing his insights with AIM on the current status of AI integration in the 2024 elections.

Sagar Vishnoi, an Independent Political Campaigner and Strategist, observed that the BJP is at the forefront as they use AI to translate their messaging into multiple languages.

“Congress is leveraging new social media tactics for election campaigning, aiming to compete with BJP’s strategies. However, regional parties appear to have limited budgets and less interest in adopting AI technologies for political communication,” Vishnoi mentioned.

“We use AI tools to fine-tune speeches. And also for translation purposes,” shared Surabhi Hodigere, BJP Spokesperson, on how the party is using AI tools.

Milind Dharmasena, a Congress spokesperson, emphasised the party’s use of AI to enhance communication, particularly to help Rahul Gandhi connect in local dialects and to design campaign posters.

AI BJP vs AI Congress

Interestingly, both parties have differing agendas regarding AI ambitions in the country. The BJP aims to safeguard citizen safety and privacy, leaning towards some form of regulation, while the Congress views AI advancements as an opportunity to create jobs.

Recently, the BJP outlined its ambition to position Bharat as a global leader in AI innovation in its manifesto. The party, which has been in power for the past 10 years, has tasked NITI Aayog with establishing guidelines and policies for the development and use of AI.

In 2018, NITI Aayog introduced the National Strategy for Artificial Intelligence, outlining guidelines for AI research and development across various sectors, including healthcare, agriculture, education, “smart” cities, infrastructure, and smart mobility. In 2021, NITI Aayog published Part 1 – Principles for Responsible AI and Part 2 – Operationalizing Principles for Responsible AI.

In 2023, the government enacted the Digital Personal Data Protection Act, which can be used to address privacy concerns related to AI platforms.

“India has set a benchmark for personal data protection globally with its model legislation. Tejasvi Surya, MP of the Bangalore South Lok Sabha constituency, played a significant role in shaping this law as a member of the Joint Parliamentary Committee on Personal Data Protection Bill,” shared the team from Tejasvi Surya’s office.

“AI can be viewed as a double-edged sword. Problem or not, the advent of AI is inevitable. So, MeitY has introduced initiatives to address some of the privacy issues concerning AI platforms,” they added.

In its 2023 AI report, MeitY has outlined seven pillars for strategic AI development, including Centers of Excellence, Dataset Platform, and future skills initiatives.

Further, the report recommends that India leverage its demographic dividend and play to its strengths as an IT superpower to deepen the penetration of AI skills in the country, and that it strengthen its AI compute infrastructure to support AI innovation through public-private partnerships (PPPs).

On the other end, the Congress party, in their 2024 manifesto, pledges to encourage the adoption of AI, robotics, and similar technologies to generate new employment opportunities.

“We firmly believe AI is the future. Our priority will be to stimulate greater use of AI and robotics, fostering the creation of additional job opportunities,” said Congress spokesperson Milind Dharmasena, expanding on this commitment and expressing confidence in India’s AI advancement.

Who will lead the AI revolution in India?

With the BJP’s longstanding commitment to technological innovation, it is well-positioned to harness AI as a progressive force for the country. At the same time, Congress also fits perfectly into the AI narrative, promoting job opportunities in the country.

“Rahul Gandhi and Narendra Modi’s Views on AI: Whose perspective on AI aligns more with your own?”

— The Source Insight (@DSourceInsight) March 29, 2024

Today, the BJP-led government has introduced initiatives and guidelines for the responsible development of AI technologies, but there are currently no specific laws regulating AI in the country. The ongoing elections could also be one of the reasons why it has taken a backseat.

Generative AI, along with other AI advancements, has already penetrated numerous industries, including education, healthcare, and the workforce. The political parties that come to power can significantly influence the progress of AI in the country.

As AI increasingly becomes the future, the decisions made by those in power can either propel or hinder its development. Therefore, choose your vote wisely.

The post The Impact of Lok Sabha Election on India’s AI Progress appeared first on Analytics India Magazine.

Gartner: IT Spending Expected to Grow 8% in 2024

Worldwide IT spending is likely to grow 8% to $5.06 trillion this year, according to Gartner’s 2024 IT spending forecast. Data center systems and software are the segments expected to drive that growth the most.

Gartner made its predictions by analyzing sales of IT products and services across hardware, software, IT services and telecommunications segments. The analyst firm researches over 1,000 vendors and maintains a quarterly database of market size data.

Generative AI plans push IT to beef up data center services

Spending on data center systems is predicted to jump 10% year-over-year, from $236 billion in 2023 to $260 billion in 2024 (Figure A). Many enterprises Gartner has spoken to say they plan to roll out generative AI products, services and use cases in 2025, which requires greater spending on data centers.

Figure A

Gartner predicts worldwide IT spending on data center systems will accelerate the most compared to all of the segments the firm tracks. Image: Gartner

“There is also gold rush level spending by service providers in markets supporting large scale GenAI projects, such as servers and semiconductors,” John-David Lovelock, distinguished vice president analyst at Gartner, stated in the press release.

Mobile device spending recovers within short generation cycles

While data center spending increased, device spending fell in 2023 and is recovering slightly in 2024, from $664 billion to $688 billion, or 3.6% growth. The cause might instead be an accelerating refresh cycle, with enterprises and consumers replacing phones more quickly.
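As a quick check, the growth percentages cited in this forecast follow directly from the segment totals:

```python
def yoy_growth(previous, current):
    """Year-over-year growth as a percentage of the prior-year figure."""
    return (current - previous) / previous * 100

# Data center systems: $236B (2023) -> $260B (2024)
print(round(yoy_growth(236, 260), 1))  # ~10.2%, in line with the cited 10%

# Devices: $664B (2023) -> $688B (2024)
print(round(yoy_growth(664, 688), 1))  # 3.6%, matching the cited figure
```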

“Business and consumers were happy to keep their devices for longer – reducing the number of devices bought every year simply as a replacement to the devices currently owned,” Lovelock told TechRepublic by email. “2024 marks a turnaround year, and device sales are increasing over the prior year.”

The inclusion of generative AI on some phone brands isn’t behind this change, but “sustains” it, Gartner pointed out.

“The AI features (NPU and TPU chips) built into smartphones and PCs are insufficient to cause business and consumers to refresh their devices,” Lovelock told TechRepublic. “Without a breakthrough application that runs natively on the device and requires AI capability, AI capability is a ‘nice to have’ that will help sustain replacements and upgrades without requiring them.”

Hiring money pivoting toward IT service firms

For the first time Gartner has observed, more money is being spent on IT consulting firms than on internal IT staff, said Lovelock.

That’s because “Enterprises are quickly falling behind IT service firms in terms of attracting talent with key IT skill sets,” Lovelock is quoted as saying in the press release. “This creates a greater need for investment in consulting spend compared to internal staff.”

SEE: In Australia and New Zealand, IT spending growth will be driven by cybersecurity, cloud and data, Gartner found.

“With spending on IT services on track to grow by 9.7% to eclipse $1.52 trillion, this category is on pace to become the largest market that Gartner tracks,” said Lovelock in the press release.

Spending trends in software and communications services

The other segments in Gartner’s IT spending forecast, and their predicted growth for 2024, are:

  • Software, 13.9%.
  • Communications services, 4.3%.

Breakthrough in Quantum Cloud Computing Ensures its Security and Privacy

Businesses are one step closer to quantum cloud computing, thanks to a breakthrough made in its security and privacy by scientists at Oxford University.

The researchers used an approach dubbed ‘blind quantum computing’ to connect two quantum computing entities (Figure A); this simulates the situation where an employee at home or in an office remotely connects to a quantum server via the cloud. With this method, the quantum server provider does not need to know any details of the computation for it to be carried out, keeping the user’s proprietary work secure. The user can also easily verify the authenticity of their result, confirming it is neither erroneous nor corrupted.

Figure A

The researchers used an approach dubbed “blind quantum computing” to connect two quantum computing entities in a way that is completely secure. Image: David Nadlinger/Oxford University

Ensuring the security and privacy of quantum computations is one of the most significant roadblocks that has held the powerful technology back so far, so this work could lead to it finally entering the mainstream.

Despite only being tested on a small scale, the researchers say their experiment has the potential to be scaled up to large quantum computations. Plug-in devices could be developed that safeguard a worker’s data while they access quantum cloud computing services.

Professor David Lucas, the co-head of the Oxford University Physics research team, said in a press release: “We have shown for the first time that quantum computing in the cloud can be accessed in a scalable, practical way which will also give people complete security and privacy of data, plus the ability to verify its authenticity.”

What is quantum cloud computing?

Classical computers process information as binary bits represented as 1s and 0s, but quantum computers do so using quantum bits, or qubits. Qubits exist as both a 1 and a 0 at the same time, but with a probability of being one or the other that is determined by their quantum state. This property enables quantum computers to tackle certain calculations much faster than classical computers, as they can explore many possible outcomes simultaneously.
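In standard notation, a qubit’s state is a superposition of the two basis states, with the squared amplitudes giving the measurement probabilities:

```latex
\[
  \lvert\psi\rangle = \alpha\lvert 0\rangle + \beta\lvert 1\rangle,
  \qquad \lvert\alpha\rvert^{2} + \lvert\beta\rvert^{2} = 1
\]
```

Measuring the qubit yields 0 with probability |α|² and 1 with probability |β|².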

Quantum cloud computing is where quantum resources are provided to users remotely over the internet; this allows anyone to utilise quantum computing without the need for specialised hardware or expertise.

FREE DOWNLOAD: Quantum computing: An insider’s guide

Why is ‘blind quantum computing’ more secure?

With typical quantum cloud computing, the user must divulge the problem they are trying to solve to the cloud provider; this is because the provider’s infrastructure needs to understand the specifics of the problem so it can allocate the appropriate resources and execution parameters. Naturally, in the case of proprietary work, this presents a security concern.

This security risk is minimised with the blind quantum computing method because the user remotely controls the quantum processor of the server themselves during a computation. The information required to keep the data secure — like the input, output and algorithmic details — only needs to be known by the client because the server does not make any decisions with it.

“Never in history have the issues surrounding privacy of data and code been more urgently debated than in the present era of cloud computing and artificial intelligence,” said Professor Lucas in the press release.

“As quantum computers become more capable, people will seek to use them with complete security and privacy over networks, and our new results mark a step change in capability in this respect.”

How could quantum computing impact business?

Quantum computing is vastly more powerful than conventional computing, and could revolutionise how we work if it is successfully scaled out of the research phase. Examples include solving supply chain problems, optimising routes and securing communications.

In February, the U.K. government announced a £45 million ($57 million) investment into quantum computing; the money goes toward finding practical uses for quantum computing and creating a “quantum-enabled economy” by 2033. In March, quantum computing was singled out in the Ministerial Declaration, with G7 countries agreeing to work together to promote the development of quantum technologies and foster collaboration between academia and industry. Just this month, the U.K.’s second commercial quantum computer came online.

Due to the extensive power and refrigeration requirements, very few quantum computers are currently commercially available. However, several leading cloud providers do offer so-called quantum-as-a-service to corporate clients and researchers. Google’s Cirq, for example, is an open source quantum computing platform, while Amazon Braket allows users to test their algorithms on a local quantum simulator. IBM, Microsoft and Alibaba also have quantum-as-a-service offerings.

WATCH: What classic software developers need to know about quantum computing

But before quantum computing can be scaled up and used for business applications, it is imperative to ensure it can be achieved while safeguarding the privacy and security of customer data. This is what the Oxford University researchers hoped to achieve in their new study, published in Physical Review Letters.

Dr. Peter Drmota, study lead, told TechRepublic in an email: “Strong security guarantees will lower the barrier to using powerful quantum cloud computing services, once available, to speed up the development of new technologies, such as batteries and drugs, and for applications that involve highly confidential data, such as private medical information, intellectual property, and defence. Those applications exist also without added security, but would be less likely to be used as widely.

“Quantum computing has the potential to drastically improve machine learning. This would supercharge the development of better and more adapted artificial intelligence, which we are already seeing impacting businesses across all sectors.

“It is conceivable that quantum computing will have an impact on our lives in the next five to ten years, but it is difficult to forecast the exact nature of the innovations to come. I expect a continuous adaptation process as users start to learn how to use this new tool and how to apply it to their jobs — similar to how AI is slowly becoming more relevant at the mainstream workplace right now.

“Our research is currently driven by quite general assumptions, but as businesses start to explore the potential of quantum computing for them, more specific requirements will emerge and drive research into new directions.”

How does blind quantum cloud computing work?

Blind quantum cloud computing requires connecting a client computer that can detect photons, or particles of light, to a quantum computing server with a fibre optic cable (Figure B). The server generates single photons, which are sent through the fibre network and received by the client.

Figure B

The researchers connected a client computer that could detect photons, or particles of light, to a quantum computing server with a fibre optic cable. Image: David Nadlinger/Oxford University

The client then measures the polarisation, or orientation, of the photons, which tells it how to remotely manipulate the server in a way that will produce the desired computation. This can be done without the server needing access to any information about the computation, making it secure.

To provide additional assurance that the results of the computation are not erroneous or have been tampered with, additional tests can be undertaken. While tampering would not harm the security of the data in a blind quantum computation, it could still corrupt the result and leave the client unaware.

“The laws of quantum mechanics don’t allow copying of information and any attempt to observe the state of the memory by the server or an eavesdropper would corrupt the computation,” Dr Drmota explained to TechRepublic in an email. “In that case, the user would notice that the server isn’t operating faithfully, using a feature called ‘verification’, and abort using their service if there are any doubts.

“Since the server is ‘blind’ to the computation — ie, is not able to distinguish different computations — the client can evaluate the reliability of the server by running simple tests whose results can be easily checked.

“These tests can be interleaved with the actual computation until there is enough evidence that the server is operating correctly and the results of the actual computation can be trusted to be correct. This way, honest errors as well as malicious attempts to tamper with the computation can be detected by the client.”
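The verification logic Drmota describes can be illustrated with a toy classical simulation. This sketch is entirely hypothetical (the function names and probabilities are invented for illustration, and no quantum physics is modelled): it captures only the idea that because a blind server cannot distinguish test rounds from computation rounds, any tampering eventually corrupts a test round and is noticed by the client.

```python
import random

def run_protocol(server_tamper_prob, n_rounds=200, test_fraction=0.5, seed=0):
    """Toy model of verification: the client interleaves hidden test rounds
    (whose correct outcome it already knows) with real computation rounds.
    The server cannot tell them apart, so tampering hits test rounds too."""
    rng = random.Random(seed)
    failed_tests = 0
    tests_run = 0
    for _ in range(n_rounds):
        is_test = rng.random() < test_fraction      # client's secret choice
        tampered = rng.random() < server_tamper_prob
        if is_test:
            tests_run += 1
            if tampered:
                # A tampered test round produces a wrong, checkable answer.
                failed_tests += 1
    return failed_tests, tests_run

# An honest server (tamper probability 0) never fails a test round,
# while any tampering shows up as failed test rounds the client can count.
```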

Figure C

Dr Peter Drmota (pictured) said that the research is “a big step forward in both quantum computing and keeping our information safe online.” Image: Martin Small/Oxford University

What did the researchers discover through their blind quantum cloud computing experiment?

The researchers found the computations their method produced “could be verified robustly and reliably”, as per the paper. This means that the client can trust the results have not been tampered with. It is also scalable, as the number of quantum elements being manipulated for performing calculations can be increased “without increasing the number of physical qubits in the server and without modifications to the client hardware,” the scientists wrote.

Dr. Drmota said in the press release, “Using blind quantum computing, clients can access remote quantum computers to process confidential data with secret algorithms and even verify the results are correct, without revealing any useful information. Realising this concept is a big step forward in both quantum computing and keeping our information safe online.”

The research was funded by the UK Quantum Computing and Simulation Hub — a collaboration of 17 universities supported by commercial and government organisations. It is one of four quantum technology hubs in the UK National Quantum Technologies Programme.

Language AI Pioneer DeepL Targets APAC Businesses With Pro Translation Options

Tech employees in APAC know working in the region can involve struggles with language. While most cross-border business is conducted in English, there can still be difficulties communicating, which can lead workers to turn to offerings like Google Translate or ChatGPT for help.

Jarek Kutylowski, founder and chief executive officer of DeepL.

The same goes for enterprises looking to win business in the languages of the region. Jarek Kutylowski, founder and chief executive officer of DeepL, said the firm’s neural machine translation model offers natural-sounding translations in 32 languages, thanks to years of development and fine-tuning since its launch in Europe in 2017.

With additional APAC languages on its roadmap for 2024, DeepL is expanding its footprint into Australia and Singapore, with key business use cases including translation for cross-border business growth. Its Pro subscription (starting at US$8.74 per user per month, rising to US$57.49 for an Ultimate package) and API Pro (beginning at $5.49 per month) allow businesses to translate documents at scale or integrate translations within their workflows.

DeepL is expanding its APAC languages and markets

Founded in Germany, DeepL’s globalisation drive saw it choose Japan and South Korea as its first markets in Asia. This was due to the countries’ strong economies and business connections with the rest of the world, as well as significant language barriers, which supported high volumes of translation use cases.

DeepL’s APAC push will see this market presence expand to Australia and Singapore. With a number of local languages under its belt, including Simplified Chinese and Indonesian, in addition to English, Japanese and Korean, Kutylowski said the firm was looking at adding “some of the bigger Asian languages where we don’t have coverage yet” soon.

SEE: How Australia is adapting fast to the world of generative AI

With over 900 employees globally, DeepL is currently used by 100,000 businesses and organisations worldwide, in addition to millions of individuals. With a growing business user base in Japan and South Korea, DeepL hopes its regional expansion will add to its revenue base of 1 million paid licenses.

DeepL’s model is trained for natural language translations

DeepL’s focus on providing natural language translations comes down to the difference between translation “accuracy” and “fluency” in language communication, according to the company.

“In a business setting, one main aspect is whether something I have written is correct. But it is not only correctness businesses want in communication; they want to persuade, motivate, communicate clearly and influence in the languages they communicate,” Kutylowski said. “How native it feels, how natural, is super important.”

How DeepL achieves natural language translations

DeepL achieves a high level of natural fluency in target languages in two main ways:

  • DeepL was one of the first companies to bring AI-based neural machine translation to the market in 2017. Since then, it has been active in academic research to ensure its models were not just translating but expressing themselves naturally in a target language.
  • DeepL fine-tunes its AI translations by engaging more than 1,000 native-speaking trainers globally. They review the outputs that are being created by the model, and are able to train the model on how it can express itself more naturally in their language.

DeepL has Pro and API subscriptions for business translation

DeepL is available as a free web application on its website (Figure A) and can also be used as a browser extension. However, for businesses, DeepL’s growing presence in APAC is ultimately aimed at driving interest in DeepL Pro and API Pro subscriptions, which both offer advanced features designed to help local businesses scale and integrate translation securely.

Figure A

DeepL can be used on the web or upgraded with paid subscriptions.

In addition to natural language translations, DeepL’s Pro subscription offers:

  • Data security: DeepL deletes all the text it processes from DeepL-operated servers after a translation is completed. The company promises that no customer text is ever passed to third parties or used as AI training data.
  • Unlimited text translations: The DeepL Pro offering includes unlimited text translations. This includes the ability to translate more documents with larger file sizes, while preserving the original document formatting.
  • Customisation control: Businesses can tailor AI outputs to maintain brand consistency and standardise messaging through customisation, which includes having control over things such as specified brand terminology.

DeepL’s API Pro subscription offers access to DeepL’s REST API along with data security and no volume restrictions, though users pay a usage charge of US$25 per 1 million characters. The subscription allows businesses to integrate DeepL’s translation functionality with things like websites, apps and internal communication tools with a few clicks.
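As an illustration, a minimal integration against DeepL’s v2 REST endpoint might look like the following standard-library sketch. The endpoint URL, header and field names follow DeepL’s published API documentation, but treat the details as assumptions to verify against the current docs before use:

```python
import json
import urllib.parse
import urllib.request

API_URL = "https://api.deepl.com/v2/translate"  # Pro endpoint

def build_request(text, target_lang, auth_key):
    """Assemble the headers and form-encoded body for a translation call."""
    headers = {
        "Authorization": f"DeepL-Auth-Key {auth_key}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = urllib.parse.urlencode({"text": text, "target_lang": target_lang})
    return headers, body

def translate(text, target_lang, auth_key):
    """Send one text to DeepL and return the translated string."""
    headers, body = build_request(text, target_lang, auth_key)
    req = urllib.request.Request(API_URL, data=body.encode(), headers=headers)
    with urllib.request.urlopen(req) as resp:
        # Documented response shape: {"translations": [{"text": "..."}]}
        return json.load(resp)["translations"][0]["text"]
```

A production integration would add error handling, request batching and the glossary and formality options the API exposes.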

DeepL has options for integrating within customer and user workflows

There are a lot of possible use cases for DeepL. “Potentially, there are so many different jobs that are being done by our users. They might be writing in Gmail, writing a note in Salesforce, or creating a document in Microsoft Office — the use cases are very broad,” Kutylowski explained.

The translator can be used on the website itself. However, it can also be accessed through browser extensions, which can immediately translate anything entered into a web browser and work with common software tools like Google Docs, Microsoft Office or Salesforce.

DeepL also works with customers to integrate translations into their own systems and workflows. Kutylowski gave the example of companies where DeepL is integrated and works in the background of customer service centres, providing customer service with instant translations.

Two typical business use cases for DeepL’s translation products

DeepL is seeing two main groups of business use cases for its subscription offerings.

1: Businesses seeking to enter new markets in new languages

Companies that want to expand into new markets in non-native languages are able to use DeepL to support their activities. This can mean not having to immediately hire translators, or onboard agencies, representatives and customer support agents who are fully fluent in the native language.

“You can take your existing team and equip them with a tool that helps them to talk to customers or potential customers, and use this to enter new markets,” Kutylowski said. “For example, it can go as far as pre-translating whitepapers or materials for customers to get onboarded on their own solutions.”

2: Internally within international organisations with language barriers

Some multi-jurisdictional businesses find language barriers a problem internally. Kutylowski said the tool can make sure daily communications like emails are written quickly in a native language and translated immediately, with less effort and time necessary for users.

Elon Musk Set to Meet Indian Spacetech Startups During Upcoming Visit

Tesla CEO Elon Musk will arrive in India on April 21 to discuss his plans for a substantial investment in the country’s electric vehicle market. On April 22, he will also meet with the founders of pioneering Indian space tech startups and companies at Bharat Mandapam.

The meeting, organised by the Indian National Space Promotion and Authorisation Centre (IN-SPACe), will showcase the technologies and products of various private spacetech players to the American entrepreneur.

Prime Minister Narendra Modi is also expected to be present at the event.

Among the invitees are Hyderabad-based space tech companies such as Skyroot Aerospace and Dhruva Space, Bengaluru-based startups like Pixxel, Digantara, and SatSure, and Chennai-based AgniKul Cosmos and GalaxEye Space.

Companies that manufacture critical systems and subsystems for the space sector, including Ananth Technologies, Astra Microwave, Astrome Technologies, and Centum Electronics, have also been invited.

Elon Musk’s upcoming visit to India holds immense significance for the country’s burgeoning space sector. This engagement is expected to be exploratory in nature, allowing Musk to gain insights into the innovations and offerings of these startups.

Musk’s visit comes at a time when India’s space economy is poised for significant growth, with estimates suggesting it could reach $44 billion by 2033.

The government has been actively promoting private sector participation in the space industry through policy reforms, such as the introduction of the Indian Space Policy in 2023 and the relaxation of foreign direct investment (FDI) norms.

These policy changes have sparked a surge in entrepreneurial activity, with startups like Skyroot Aerospace and Agnikul Cosmos preparing for their launches.

The potential collaboration between Musk’s companies, particularly SpaceX and Starlink, and Indian startups could further boost this trend and drive innovation in the sector.

Moreover, the visit gained significance as SpaceX’s Starlink, a satellite communication internet service, sought to enter the Indian market.

Reports indicate that the Indian government has expedited granting licenses to Starlink, which could pave the way for its launch in the country.

The entry of SpaceX into the Indian space tech ecosystem is expected to intensify competition in the launch services sector, transforming it into a highly competitive market.

With SpaceX’s cost-effective rocket launches and the presence of affordable launch vehicle startups like Skyroot and Agnikul, ISRO’s dominance in the sector may face challenges.

Top 10 Takeaways from Stanford’s AI Index Report 2024

The Stanford Institute for Human-Centered AI recently released the 2024 edition of its highly influential AI Index report. This comprehensive study offers an in-depth look at the current state of artificial intelligence, analyzing key trends, advancements, and challenges across various domains. As AI continues to reshape our world at an unprecedented pace, the 2024 AI Index provides a timely and invaluable resource for understanding the complex landscape of this transformative technology.

This year's report is particularly noteworthy for its expanded scope and depth of analysis. With a wealth of original data and insights, the 2024 edition delves into crucial topics such as the soaring costs of training state-of-the-art AI models, the lack of standardization in responsible AI reporting, and the growing impact of AI on scientific discovery and the workforce. Moreover, the report features a dedicated chapter exploring AI's influence on science and medicine, highlighting the technology's potential to revolutionize these critical fields.

As we navigate the rapid evolution of AI, the 2024 AI Index serves as an essential guide, empowering policymakers, researchers, industry leaders, and the general public to make informed decisions and engage in constructive discussions about the future of this powerful technology.

1. AI's Performance vs. Humans

The report highlights AI's impressive strides in surpassing human performance across various benchmarks, such as image classification, visual reasoning, and English understanding. However, it also acknowledges that AI still lags behind humans in more complex tasks, including competition-level mathematics, visual commonsense reasoning, and planning. This nuanced assessment underscores the importance of recognizing AI's strengths and limitations as the technology continues to evolve.

2. Industry Dominance in AI Research

In 2023, the AI industry firmly established its dominance in cutting-edge AI research. The report reveals that industry players produced a staggering 51 notable machine learning models, dwarfing academia's contribution of just 15. Interestingly, the year also witnessed a record high of 21 models resulting from industry-academia collaborations, signaling a growing trend of cross-sector partnerships in AI development.

Image: Stanford AI Index report

3. Rising Costs of Training State-of-the-Art Models

The AI Index report sheds light on the soaring costs associated with training state-of-the-art AI models. According to their estimates, OpenAI's GPT-4 required a staggering $78 million worth of compute resources for training, while Google's Gemini Ultra model demanded an even more astronomical $191 million. These figures underscore the immense financial investments required to push the boundaries of AI capabilities and raise important questions about the accessibility and sustainability of frontier AI research.

4. U.S. Leadership in Top AI Models

The United States has solidified its position as the global leader in cutting-edge AI development, according to the 2024 AI Index report. U.S.-based institutions were responsible for originating an impressive 61 notable AI models in 2023, far outpacing the European Union's 21 and China's 15. This disparity highlights the U.S.'s continued dominance in AI innovation and its ability to attract top talent and resources in the field.

5. Lack of Standardization in Responsible AI Reporting

As AI models become increasingly powerful and influential, the need for responsible development and deployment practices has never been more critical. However, the AI Index report exposes a significant lack of standardization in how leading developers report on their models' risks and limitations. Companies like OpenAI, Google, and Anthropic primarily test their models against different responsible AI benchmarks, making it difficult to systematically compare and assess the potential hazards associated with these technologies. This finding underscores the urgent need for industry-wide standards and collaboration to ensure the safe and ethical development of AI.

Image: Stanford AI Index

6. Surge in Generative AI Investment

While overall AI private investment experienced a decline in 2023, the generative AI sector defied this trend, witnessing a remarkable surge in funding. The report reveals that investment in generative AI nearly octupled from 2022, reaching an astounding $25.2 billion. Major players in the space, such as OpenAI, Anthropic, Hugging Face, and Inflection, reported substantial fundraising rounds, reflecting the growing excitement and potential surrounding generative AI technologies. This influx of capital is expected to fuel further innovation and competition in the field, as companies race to develop more sophisticated and powerful generative models.

7. AI's Positive Impact on Worker Productivity and Quality

The 2024 AI Index report delves into the growing body of research examining AI's impact on the workforce. Several studies conducted in 2023 suggest that AI technologies are enabling workers to complete tasks more efficiently and to a higher standard. These findings indicate that AI has the potential to augment human capabilities and bridge skill gaps between low- and high-skilled workers. However, the report also cautions that utilizing AI without proper oversight and guidance can lead to diminished performance, emphasizing the importance of responsible implementation and human-AI collaboration in the workplace.

8. AI Accelerating Scientific Progress

The past year has witnessed a remarkable acceleration in the application of AI to scientific discovery, as highlighted by the AI Index report. Building upon the groundbreaking AI-driven scientific advancements of 2022, 2023 saw the launch of even more transformative applications. Notable examples include AlphaDev, which optimizes algorithmic sorting efficiency, and GNoME, which streamlines the materials discovery process. These cutting-edge AI tools are revolutionizing the way scientists approach complex problems, paving the way for unprecedented breakthroughs across various scientific disciplines.

9. Increase in U.S. AI Regulations

As AI technologies become more ubiquitous and influential, governments are grappling with the challenge of regulating their development and deployment. The AI Index report reveals a sharp increase in the number of AI-related regulations in the United States, both over the past year and across the last five years. In 2023 alone, 25 AI-related regulations were introduced, a remarkable increase from just one in 2016. Moreover, the total number of AI regulations grew by 56.3% from 2022 to 2023, reflecting the growing recognition among policymakers of the need to establish clear guidelines and oversight mechanisms for AI technologies.

Image: Stanford AI Index Report

10. Growing Public Awareness and Concern About AI

The 2024 AI Index report also sheds light on the evolving public perception of AI and its potential impact on society. A global survey conducted by Ipsos reveals that the proportion of people who believe AI will dramatically affect their lives within the next three to five years has increased from 60% to 66% over the past year. Additionally, 52% of respondents express nervousness toward AI products and services, a significant 13 percentage point rise from 2022. In the United States, Pew Research Center data indicates that 52% of Americans report feeling more concerned than excited about AI, up from 38% in 2022. These findings underscore the growing public awareness of AI's transformative potential and the need for open, transparent dialogue to address the concerns and aspirations of individuals and communities worldwide.

Assessing the State of AI

The 2024 AI Index report offers a comprehensive and nuanced assessment of the state of AI, highlighting the rapid advancements, challenges, and societal implications of this transformative technology. From the soaring costs of training state-of-the-art models to the lack of standardization in responsible AI reporting, the report underscores the need for collaboration, innovation, and responsible development practices to ensure that AI benefits humanity as a whole. As public awareness and concern about AI continue to grow, it is crucial that policymakers, researchers, industry leaders, and the general public engage in informed, inclusive discussions to shape the future of this powerful technology. The insights provided by the AI Index report serve as a valuable resource in navigating the complex landscape of AI and charting a course toward a more equitable, sustainable, and beneficial AI-driven future.

GPT-5 Likely to be Released After the US Elections


With the US presidential elections approaching in November 2024, major players are hesitant to release advanced AI systems and models. The reluctance stems primarily from fears of spreading misinformation or influencing election outcomes, which could invite stringent regulations if these systems disrupt the process in any way.

OpenAI CTO Mira Murati recently confirmed that the elections were a major factor in the timing of GPT-5’s release. “We will not be releasing anything that we don’t feel confident on when it comes to how it might affect the global elections or other issues,” she said last month.

And it doesn’t help that the company’s Voice Engine has been flagged as another potential vector for voter misinformation. “We recognise that generating speech that resembles people’s voices has serious risks, which are especially top of mind in an election year,” the company said.

Recently, Elon Musk’s AI chatbot Grok falsely reported that ‘PM Modi was ejected from the Indian government’, sparking controversy over the spread of misinformation. Such examples abound.

In the US, a deepfake video falsely showed President Joe Biden claiming that Russia had occupied Kyiv for ten years, apparently confusing it with Crimea. Similarly, an audio deepfake circulated in which the US president falsely urged voters not to participate in the primaries, claiming they were rigged.

AI Regulations also Take a Back Seat

While misuse runs rampant, regulation is moving at a snail’s pace, likely because it is an election year. Movement is expected early next year, shaped by which party comes into power.

Recently, Stanford Institute for Human-Centered AI senior fellow Erik Brynjolfsson warned that over-regulation of AI could become a serious problem.

“Smart regulation can be helpful, it can even speed up the adoption of technologies and protect people from harm. But, at the same time, over-regulating can be harmful and slow down adoption,” he said while speaking to CNBC.

However, the reality might be that we’re rushing toward over-regulation at this point, which could make compliance a pipe dream for industry players.

AI Regulations So Far

Last week, Representative Adam Schiff introduced a bill in the US House of Representatives that would require companies to disclose any use of copyrighted material in training AI systems.

“AI has the disruptive potential of changing our economy, our political system, and our day-to-day lives. We must balance the immense potential of AI with the crucial need for ethical guidelines and protections,” said Schiff on its introduction.

Schiff’s bill isn’t the first piece of AI regulation to be introduced, and it certainly won’t be the last. Last year alone saw a flurry of AI-related regulatory activity within the US government.

While an overarching federal act is not yet in place, state legislatures have been quick to introduce and pass AI bills, with over 18 states enacting legislation and over 400 AI bills introduced in 2023.

Meanwhile, several federal measures have also been introduced, including the Blueprint for an AI Bill of Rights and an executive order on the use of AI in the country.

The sentiment that AI might be difficult to regulate seems, if anything, to have spurred everyone to at least take a shot at it.

However, the goal has always seemed to be a federal law rather than a patchwork of state laws. California state Senator Scott Wiener said, “I would love to have one unified, federal law that effectively addresses AI safety. Congress has not passed such a law. Congress has not even come close to passing such a law.”

The lack of a federal law could have drastic effects on the industry as a whole, as companies scramble to comply with multiple state laws, not to mention international regulations. The demand for such a law therefore comes from the states as well as from the companies themselves.

Future AI Regulations

A federal law seems likely, but whether and when it will materialise is hard to tell. With the elections coming up, the Democratic and Republican parties hold broadly similar stances on AI.

The Democrats have promised to “mobilise public and private actors to ensure that new products and new discoveries are bound by law, ethics and civil liberties protections”.

Similarly, the Republican Party has shown its support for AI regulation, though the two parties differ on the grounds for it. According to studies, while both parties support AI regulation, Democrats have shown a marked concern over ethics, while Republicans focus on AI capabilities and data rights.

Both 2024 presidential candidates have also responded actively to the industry’s growing importance over the last few years. During his time in office, Republican candidate Donald Trump signed a 2020 executive order pushing for innovation within the industry.

This was shortly followed by the launch of the National Artificial Intelligence Initiative Office.

Likewise, the Biden administration introduced an executive order last year on ensuring “safe, secure and trustworthy” AI, which outlines safety standards for the use of AI.

However, both of these remain executive orders, not legislation. Neither party has committed to a policy that would be easy to implement and navigate, nor has there been talk of an overarching policy so far.

While, understandably, the consensus is that AI is hard to regulate, continued inaction could have drastic effects on the ecosystem as a whole.

With major updates coming to the industry, like OpenAI’s GPT-5, the new administration could make or break the industry through how it chooses to regulate.

The post GPT-5 Likely to be Released After the US Elections appeared first on Analytics India Magazine.

India is Making its Own AI Servers 

Until the government of India introduced the production-linked incentive (PLI) scheme for IT hardware, servers, which are the building blocks of high-performance computing (HPC), were mostly imported and then assembled in India.

However, with the government’s increased focus on promoting local manufacturing through PLI schemes, the focus has shifted to developing servers locally.

“We were granted PLI on November 18, 2023, and by December 31, we successfully introduced the latest generation of Intel’s 4th Generation Intel Xeon Scalable Processors tailored for HPC, data centres and generative AI.

“We take pride in being the first company to achieve this milestone. As of February 2024, no other entity – multinational or local OEM – has managed to manufacture servers on the latest Intel generation in India,” Amrish Pipada, founder & CEO, Mega Networks told AIM.

Notably, Hewlett Packard Enterprise (HPE) recently announced that its ‘Made in India’ servers are now being deployed at large scale to serve the growing demands of Indian customers.

HPE unveiled its Make in India plan in July 2023 and committed to manufacturing approximately USD 1 billion worth of high-volume servers in the first five years of production.

Similarly, Lenovo, earlier this year, also announced its plans to capitalise on the PLI scheme and manufacture servers locally to bolster its data centre operations.

Made in India Servers

However, despite the success of the PLI scheme, it is important to understand that some components of these servers are still imported from other countries.

Pipada believes the PLI scheme marks the beginning of India’s manufacturing journey. “Initially, we import the Printed Circuit Board (PCB) and other components, conduct local Surface-Mount Technology (SMT) assembly, and integrate software stacks developed over the past year.”

Moreover, to encourage local manufacturing of components such as PCBs, the government plans to extend further incentives under the PLI scheme. “While the mechanical aspect has begun locally, it will progressively expand. With time, the entire ecosystem will develop further,” Pipada said.

Altos Computing, a subsidiary of Acer, is actively engaged in local server manufacturing. Sanjay Virnave, country head and general manager at Altos Computing, shared with AIM that 50% of their servers and components are currently developed in India. As the ecosystem continues to mature, they anticipate this percentage will rise even higher.

Homegrown companies at the forefront

Besides big names like HPE, Dell, Foxconn and Lenovo, the PLI scheme has also been granted to domestic companies such as VVDN, Optiemus, Padget Electronics, SOJO Manufacturing Services, Goodworth, Neolync, Syrma SGS, Panache Digilife, ITI Ltd, Netweb Technologies and MegaNet.

MegaNet already caters to a host of public sector clients, including Bharat Sanchar Nigam Limited (BSNL), the National Remote Sensing Centre (NRSC), the IITs and the Indian Railways.

“Our work with Indian Railways involves developing systems for visitor management and face recognition for their smart coaches. Their requirement was to enhance security and streamline visitor management. The problem we are solving for the railways is to provide a secure and efficient way to manage visitors and implement advanced face recognition technology for enhanced security,” Pipada said.

On the private sector side, MegaNet serves the likes of Reliance, Tata, NTT, and Yotta. “We are providing them tailor-made storage systems to meet their dynamic business needs. Our solutions are customised to ensure seamless storage and efficient operations,” he added.

Last year, Netweb Technologies announced its role as a manufacturing partner for the NVIDIA Grace CPU Superchip and GH200 Grace Hopper Superchip MGX server designs.

Indeed, local manufacturers have an important role to play in advancing India’s manufacturing ambitions. With strong support from the government, India could eventually manufacture every component of these servers, including silicon chips, locally.

Catering to AI Demand

The wider adoption of AI since the generative AI explosion has also fueled the demand for servers in India. “With the increasing adoption of AI technologies, the need for powerful, scalable, and efficient server infrastructure has grown exponentially,” Pipada said.

“What’s notable is the increasing number of users who previously weren’t utilising servers but are now doing so. This trend reflects the growing demand for digital infrastructure. Moreover, the existing physical infrastructure is also robust and supportive of this development,” Virnave said.

Interestingly, it is not just AI: the data centre landscape is also growing in India, presenting a significant business opportunity for server companies.

For example, Hiranandani Group-backed Yotta has announced its AI Shakti Cloud and plans to have 32,768 NVIDIA GPUs by the end of 2025. NeevCloud, a startup with similar ambitions, plans to reach a 40,000-GPU capacity by 2026. Tata Communications, too, will provide GPU-as-a-service for AI training and inferencing.

Moreover, the Indian government has also announced its plans to build a 25,000 GPU cluster to make these processors more accessible to Indian startups.


OpenAI Hires Pragya Misra As Its First Employee in India

OpenAI, the developer of ChatGPT, has hired its first employee in India, appointing Pragya Misra as its government relations head amid the country’s ongoing elections that will shape AI regulations, reported Bloomberg.

The Microsoft-backed company has brought Misra on board to oversee public policy affairs and partnerships in India, the Bloomberg report added. Misra brings experience from her previous roles at Truecaller AB and Meta Platforms Inc., and is expected to commence her role at OpenAI by the end of this month.

The hire signals that OpenAI is working to secure favourable rules for AI as countries figure out how to regulate it. India is a big market with considerable growth potential, but a tricky one, as local laws protect local companies.

Misra graduated with an MBA from the International Management Institute in 2012, according to her LinkedIn profile. She also earned a commerce degree from Delhi University and completed a Diploma in Bargaining and Negotiations from the London School of Economics and Political Science.

Last year, reports surfaced that OpenAI was working with former Twitter India head Rishi Jaitly as a senior advisor to facilitate talks with the government about AI policy. The reports added that Jaitly had been helping OpenAI navigate the Indian policy and regulatory landscape.

OpenAI is also looking to set up a local team in India.

OpenAI doesn’t have an official base in India yet, apart from a trademark registered this month. When CEO Sam Altman visited New Delhi in June last year during his global tour, he met with Prime Minister Narendra Modi and reportedly had a positive conversation. However, there were no announcements from Altman or OpenAI during the two-day visit.


Meta Llama 3 Now Available on Databricks For Enterprise

After the release of Meta’s Llama 3, Databricks has announced its partnership with Meta to make Llama 3 available to enterprises of all sizes through a fully managed API on the Databricks platform.

Llama 3 comes in two initial versions – an 8-billion-parameter model and a 70-billion-parameter model. These models outperform existing open-source LLMs such as Gemma and Mistral, and Meta claims they approach the performance of top proprietary models like GPT-4 and Gemini on several benchmarks.

Databricks Model Serving will provide instant access to Meta Llama 3 via Foundation Model APIs, allowing users to experiment with, switch between, and deploy foundation models across all cloud providers easily. Customers can try Meta Llama 3 directly from the Databricks AI Playground in the coming days.

Naveen Rao, VP of generative AI at Databricks, highlighted the significance of this release. “We are leveraging the awesome innovations Meta baked into their latest models with Databricks. The capabilities are really game changing, and we provide the ability to customize, manage, and deploy at scale securely. This is also a great way to enable generative AI on your data with our RAG capabilities built on top of these models,” he said.

“Meta Llama 3, which will be rolling out regionally in the next few days, can be accessed through the same unified API on Databricks Model Serving that thousands of enterprises are already using to access other open and external models,” the company said through its blog post.
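For readers curious what calling such a unified API might look like, here is a minimal sketch. It assumes Databricks Model Serving exposes an OpenAI-style chat-completions route on the workspace; the model name (“databricks-meta-llama-3-70b-instruct”), the workspace host, and the token handling are illustrative assumptions, not details confirmed in the announcement.

```python
# A minimal sketch of querying Llama 3 through Databricks Model Serving's
# OpenAI-style chat-completions route. The model name, workspace host, and
# token handling here are illustrative assumptions, not confirmed details.
import json


def build_chat_request(prompt: str,
                       model: str = "databricks-meta-llama-3-70b-instruct") -> dict:
    """Assemble the JSON body for a chat-completions call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }


def chat_url(workspace_host: str) -> str:
    """Serving endpoints expose a single, unified chat-completions route."""
    return f"https://{workspace_host}/serving-endpoints/chat/completions"


if __name__ == "__main__":
    body = build_chat_request("Summarise the Llama 3 release in one sentence.")
    print(json.dumps(body, indent=2))
    # Actually sending the request needs a workspace and a personal access
    # token, e.g. with the `requests` package:
    #   requests.post(chat_url("my-workspace.cloud.databricks.com"),
    #                 headers={"Authorization": f"Bearer {token}"},
    #                 json=body)
```

Because the route follows the familiar chat-completions shape, swapping between Llama 3 and other hosted models is, in principle, a one-line change to the `model` field.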

Apart from Databricks, Llama 3 models are now also rolling out on Amazon SageMaker, Google Cloud, Hugging Face, Kaggle, IBM WatsonX, Microsoft Azure, NVIDIA NIM, and Snowflake. Additionally, the models will be compatible with hardware platforms provided by AMD, AWS, Dell, Intel, NVIDIA, and Qualcomm.

Last month, Databricks also announced the launch of DBRX. The new model outperforms SOTA open-source models like Llama 2 70B, Mixtral-8x7B and Grok-1 across various benchmarks, including language understanding (MMLU), programming (HumanEval) and math (GSM8K).
