After PAN & Aadhaar, Protean Now Leverages AI for Citizen-Centric Solutions

Protean eGov Technologies, a provider of e-governance solutions, was instrumental in rolling out critical platforms such as the PAN card, the Tax Information Network (TIN), the Permanent Retirement Account Number (PRAN) for the Pension Fund Regulatory and Development Authority (PFRDA), and Aadhaar, which enabled use cases like eSign and eKYC.

The company now sees immense potential for AI-powered infrastructure across domains like healthcare, education, agriculture, and governance. It is developing AI-powered chatbots that can communicate in local languages, besides promoting inclusion and accessibility to government services. It is also building AI models for fraud detection.

Targeting Startups, SMEs, and Government Agencies

Protean is targeting startups, SMEs, and government agencies with its “very sovereign data cloud offering”.

While the company refrained from naming specific clients, Metesh Bhati, CDO of Protean eGov, mentioned in a conversation with AIM that a few government agencies are already using Protean’s cloud services, with some of their existing data centres migrating to the Protean cloud.

The clientele includes both public and private sector organisations seeking reliable and secure cloud solutions.

“Our vision is to align with People+AI to provide advanced machine learning tools and support innovative projects in smart cities and predictive analytics within the public sector,” said Bhati.

Protean has given in-principle approval to partner with Open Cloud Compute (OCC), an offering by People+AI, to design an open network from the supply side. “That’s where we are saying we’ll be offering the AI infrastructure,” Bhati added.

Protean eGov’s first contribution to the AI mission is its in-principle approval to be part of the OCC initiative, which aims to scale up India’s AI capabilities.

The company recognises the need for a distributed cloud and compute infrastructure rather than a centralised one, drawing inspiration from successful examples like Aadhaar and the Account Aggregator framework.

“Instead of building big techs that need data centres powered by nuclear fuel, etc., we can have a distributed cloud and compute rather than centralised,” Bhati explained.

Protean eGov is also exploring the concept of making AI agents available on open networks, allowing anyone to consume and contribute to them.

ProKisan App: Reimagining Farmers’ Lives

Protean eGov recently built a proof-of-concept app called ProKisan, reimagining farmers’ lives through the combinatorial power of Digital Public Infrastructure (DPI): UPI for payments, Bhashini for localisation, and ONDC for buying, selling, and procurement.

The company partnered with Google to provide data on weather, MSP rates, and the best crops to sow, making these decision points available in local languages.

ProKisan offers a range of features to assist farmers at every stage of the agricultural process. The app provides weather predictions and decision support for crop selection based on the season. It also enables farmers to procure fertilisers and seeds through the Open Network for Digital Commerce (ONDC).

One of the key highlights of ProKisan is the integration of Honest, an AI-powered chatbot that offers bite-sized learning modules on various topics relevant to farmers.

“Today, as individuals, we use a lot of these LLMs. If we have a question, we just prompt it and get an answer. We thought, why not use these Honest trails and have a quick small bite-sized learning offering through Honest,” Bhati explained.

ProKisan also leverages AI to detect crops and create catalogues, allowing farmers to easily publish their yield on the network for potential buyers. The app supports UPI transactions and is localised in multiple languages using Bhashini, with plans to expand the language reach through Google’s technology stack.

“There is immense scope here. Maybe we can have multiple learning modules; kids can use it to ask questions in local languages; and there can be healthcare offering of IDs available,” Bhati added.

Protean developed ProKisan as a proof-of-concept in collaboration with Google and other partners, demonstrating the potential of combining AI, DPIs, and open digital ecosystems to create transformative solutions for citizens.

Leveraging AI Across Public Sector Projects

Protean eGov is seeing growing demand for AI-powered solutions from existing customers and in new opportunities.

“Most of our existing customers and the future opportunities are talking about AI. And when we say they’re talking about AI, it is about how AI can come in to make things autonomous, agile, and all-inclusive,” Bhati said.

The company has worked with over eight ministries in the past and is now focusing on leveraging AI to enhance personalisation, engagement, and inclusion in its offerings. It is also exploring how AI can help prevent fraud and improve the safety and speed of its products.

“Whether it’s our existing customers of PAN, where it is a customer, or PFRDA as a regulator and other products, there is a unanimous ask on how AI can intervene and do things more, how our products can be safer, faster, and inclusive,” Bhati explained.

Protean eGov’s roadmap includes developing AI-powered solutions for fraud detection and localised language chatbots for external customers. Internally, the company is experimenting with AI to accelerate code development and reviews, optimise infrastructure management, and strengthen security posture.

“Security is something that we are paranoid about as an organisation. Can AI come in to support us on our security posturing are few of the experiments we are running ourselves as well as with the collaboration partnership we have with the big techs,” Bhati added.

Advocating for Verifiable Credentials and Data Minimisation

While GenAI continues to transform industries and raise concerns about data security, Protean eGov advocates for the adoption of verifiable credentials and data minimisation to safeguard personal information.

“Data is sacrosanct, specifically PII (personally identifiable information), and with the new DPDP about to be rolled out, I guess the control is more about the data not owned by the organisation, but of the individual,” Bhati said, pointing to the upcoming Digital Personal Data Protection (DPDP) law and the shift of data control from organisations to individuals.

Bhati highlighted the growing awareness among urban populations about data ownership and privacy.

“Individuals, at least the urban population like you and me, are more aware of our data. We are very particular about it, be it on social media or while talking to our banks, investment, etc. It’s eventually us and our data as individuals,” he added.

Protean eGov believes that technologies like AI and blockchain can play a crucial role in enabling verifiable credentials, allowing for authentication and verification without the need to share sensitive information like Aadhaar or PAN card numbers.

Bhati cited the example of DigiYatra, a contactless passenger processing system that uses facial recognition technology backed by Aadhaar, as a successful use case of verifiable credentials.

“Today, there are technologies, I wouldn’t say specifically AI, where verifiable credentials can play a role without sharing the information, your authentication verification. So, you need not even share your Aadhaar number or PAN card number per se,” Bhati explained.

Protean eGov believes that the adoption of verifiable credentials and data minimisation can help organisations lower their costs associated with securing and encrypting personally identifiable information (PII).

“I guess in the next couple of years, the verifiable credential where I don’t think any organisation would need to save any of this PII data because it will work on more on a verification basis,” Bhati added.
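The verification-without-storage idea Bhati describes can be made concrete with a deliberately simplified sketch. Real verifiable-credential systems use public-key signatures and standards such as the W3C Verifiable Credentials data model; the stdlib HMAC below, with a hypothetical shared issuer key, merely stands in for the signature. The point it illustrates is that the verifier checks a derived claim (“PAN verified”) without ever receiving the PAN or Aadhaar number itself.

```python
import hmac, hashlib, json

# Hypothetical issuer key for this toy; a real issuer would sign with a
# private key and publish the matching public key.
ISSUER_KEY = b"demo-issuer-key"

def issue_credential(claim: dict) -> dict:
    # The credential carries only the derived attribute, never the raw ID number.
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "proof": tag}

def verify_credential(cred: dict) -> bool:
    # The verifier recomputes the proof over the claim; any tampering breaks it.
    payload = json.dumps(cred["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(cred["proof"], expected)

cred = issue_credential({"subject": "user-123", "pan_verified": True})
print(verify_credential(cred))  # the claim checks out without exposing any PII

# An altered claim reusing the old proof fails verification.
tampered = {"claim": {"subject": "user-123", "pan_verified": True,
                      "age_over_18": True},
            "proof": cred["proof"]}
print(verify_credential(tampered))
```

In this model, a relying party stores only claims and proofs, which is the cost saving Bhati alludes to: there is no PII vault to secure and encrypt.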

Partnerships with Microsoft and Google

Protean eGov is actively collaborating with tech giants Microsoft and Google to develop innovative product offerings and establish Centers of Excellence focused on emerging technologies like AI and DPI.

One of the key areas of collaboration is leveraging DPIs available in India, productising them, and using large tech companies to further scale and develop common go-to-market strategies.

Protean is also working with organisations like MOSIP (Modular Open Source Identity Platform) and OpenCRVS (Civil Registration and Vital Statistics) to extend its reach internationally.

On the security front, Protean has been experimenting with Microsoft’s Security Copilot and Google’s security solutions. The company is also utilising Google’s Vertex AI to create predictive modelling for fraud detection, drawing inspiration from how Google alerts users when logging in from an unfamiliar IP or device.

The post After PAN & Aadhaar, Protean Now Leverages AI for Citizen-Centric Solutions appeared first on AIM.

Meta’s GenAI moves from simple predictions to a chess game of consequences


A schematic of Meta's approach to what's called multi-token prediction. During training of the AI model, the inputs are fed in as usual, but instead of the AI model being trained to produce a single token as a response — the next most likely word, say — the model is trained to simultaneously generate four or more likely tokens.

Generative AI models such as GPT-4 have astounded us all with the ability to produce textual output that resembles thought, such as answers to multiple-choice questions. Reaching the "right" thought, however, such as answering the question, remains a deeper problem, as evidenced by the phenomenon of "hallucinations," where AI models will assert — with apparent confidence — false statements.

In a new work, scientists at Meta have tweaked large language models (LLMs) to produce output that could be more correct in a given situation, by introducing the notion of penalties for wrong answers.

Also: Meta's 'pruning' of Llama 2 model shows path to slimmer AI

The approach, known as "multi-token prediction," seeks to instill in an AI model a cost for less desirable answers. In that sense, it is analogous to popular approaches for establishing guardrails in AI such as "reinforcement learning from human feedback," or RLHF, a method OpenAI popularized to curb ChatGPT's most outrageous outputs.

(An "AI model" is part of an AI program containing numerous neural net parameters and activation functions that are the key elements for an AI program's functions.)

"Gains are especially pronounced on generative benchmarks like coding, where our models consistently outperform strong baselines by several percentage points," write the authors of "Better & Faster Large Language Models via Multi-token Prediction." Lead author Fabian Gloeckle, joined by colleagues at Facebook AI Research and collaborating institutions CERMICS École des Ponts ParisTech and LISN Université Paris-Saclay, posted the paper last month on the arXiv pre-print server.

The authors' principal concern is that LLMs — despite their impressive accomplishments — don't achieve things such as reasoning or planning. The conventional approach of ChatGPT and the rest, called "next-token prediction," they write, "remains an inefficient way of acquiring language, world knowledge, and reasoning capabilities."

Instead of simple next-token prediction, where the AI model is trained to predict a single "token," such as a word or character in a string of tokens — say, the next word in a sentence — the Meta team's multi-token version is trained to predict multiple tokens of text simultaneously, each of which could be the correct completion of the sequence.

Technically, Gloeckle's team alter the basic structure of the LLM, known as a Transformer, so that it has four output "heads" that each produce a word or character or other symbol, rather than the standard single head.

The approach's immediate benefit is that it can be more memory-efficient when the AI model is live, making predictions for users, known as the inference stage of AI. Because multiple output heads can be working behind the scenes to try possibilities, a high degree of parallelism can happen. This form of "speculative decoding" means the multi-token approach "can speed up inference by a factor of 3×" versus predicting one thing at a time.

Also: Meta unveils second-gen AI training and inference chip

There's also a more profound insight. Normal AI models picking one token at a time are — in a sense — flat: They don't view any single prediction as more important than the last, as long as the current prediction is a good one.

In fact, the team notes there is a big difference between certain tokens in a phrase. In the oft-cited punctuation meme — "stop clubbing, baby seals" — the presence or absence of a comma in the middle phrase is the difference between an urgent plea for animal rights and an amusing image. The humor in the utterance plays in the mind because the comma alters the semantics of the phrase.

The point, as others have observed, is that "not all token decisions are equally important for generating useful texts from language models," Gloeckle's team wrote. "While some tokens allow stylistic variations that do not constrain the remainder of the text, others represent choice points that are linked with higher-level semantic properties of the text and may decide whether an answer is perceived as useful or derailing."

Also: Rote automation is so last year: AI pushes more intelligence into software development

The multi-head, multi-token approach, the team wrote, assigns fitness to each prediction based on the other simultaneous predictions. "Generally, we believe that the quality of text generations depends on picking the right decisions at choice points, and that n-token prediction losses promote those," the team wrote.

The "choice point" involves those moments where one prediction entails others down the road that can make or break the total phrase. "Multi-token prediction implicitly assigns weights to training tokens depending on how closely they are correlated with their successors," the team wrote.

By analogy, Gloeckle's team liken choosing the next word to moving through a maze: Each choice can be a route to the reward, or a route to some terrible fate.

They use the image of a maze to illustrate the "sequential prediction task" (as they refer to predicting the next word). The next right step could be a pivotal one that sends the AI model on the right path or the wrong path — a "consequential choice," as they term it.

Choosing the next right token is like walking through a maze, write the authors: at certain moments, the choice is a "consequential" one that will send the program to success (the trophy) or defeat (skull and crossbones).

In a striking fusion of technologies, the authors link the multi-token approach to the RLHF approach, trying to predict a reward far down the line: "Assume that the language model is deployed in a reinforcement learning setting like in reinforcement learning from human feedback … [where] actions are single tokens […] to generate."

Linking text prediction to reward functions in that way brings into play all the areas where reward functions have made great strides in gaming. Reward functions are used in all sorts of AI problems referred to as reinforcement learning, not just RLHF.

For example, Google's DeepMind unit used reinforcement learning to develop AlphaZero, the program that can beat humans at chess and Go. It was also used in the program AlphaStar to compete in video game skill competitions against humans in the real-time strategy game StarCraft II.

Also: Snowflake says its new LLM outperforms Meta's Llama 3 on half the training

This gamification has the immediate result of producing a more "optimal" answer from the multi-token approach. The authors provide a variety of benchmark results. One, for example, compares how an AI model with 7 billion neural parameters, or weights, improves performance as it moves from single to multi-token prediction.

On a test called "Mostly Basic Programming Problems," or MBPP, developed at Google in 2021, an AI model has to produce code such as lines of Python for a given function. On that benchmark, the program always achieves greater accuracy with multi-token prediction.

There's also a sweet spot. The AI model seems to perform best at four simultaneous tokens, while predicting more than that — six or eight — leads to results that are not as good.

On standardized tests such as "Mostly Basic Programming Problems," where an LLM has to generate programming code, the same-sized AI model, one with 7 billion neural parameters, or weights, achieves greater accuracy when more tokens are produced, as indicated by "n," the number of tokens simultaneously generated.

As with many things in neural networks, it's not immediately certain why multi-token prediction should be better than single-token prediction. The hunch the authors offer is that by training a model for multi-token prediction, the resulting model avoids a disconnect that happens when the AI model makes live predictions with real prompting from users. That's what's called a "distribution mismatch between teacher-forced training and autoregressive generation."

Also: You can make big money from AI — but only if people trust your data

There are still many things to figure out, Gloeckle and his colleagues wrote. One goal is to develop a method of automating the sweet spot, the optimal number of simultaneous tokens that leads to the greatest accuracy. Another is how to automatically determine the right amount of data needed to train the AI model, given that "optimal vocabulary sizes for multi-token prediction are likely different from those for next-token prediction, and tuning them could lead to better results."

A larger takeaway is that traditional reinforcement learning may have much more to offer generative AI than many have suspected to date, suggesting there will be more fusion of the two methodologies down the road.

Artificial Intelligence

Ready to upskill? Look to the edge (where it’s not all about AI)


Developments with edge and internet of things-based initiatives may not ride the top of today's news cycles, but there's been a huge surge of activity around computing at the edges. IoT and edge may even be reshaping or creating more technology opportunities than artificial intelligence is — despite AI currently enjoying the lion's share of attention.

The pervasiveness of edge and IoT computing was borne out in a survey of 1,037 IT executives and professionals, which found that control logic, or embedded automation, surpassed AI as the most common edge computing workload (40% to 37%).

Also: AI at the edge: 5G and the Internet of Things see fast times ahead

"Does this imply a renewed focus on the practical aspects of delivering real-world solutions? Only time will tell," the survey's authors mused.

The Eclipse survey found development increasing across all IoT sectors, including industrial automation (33%, up from 22% a year before), followed by agriculture (29%, up from 23%), building automation, energy management, and smart cities (all at 24%). Java ranked as the top language for IoT gateways and edge nodes, while C, C++, and Java are the most widely used languages for constrained devices.

When it comes to skill requirements, everyone seems to be worrying about AI design and development — however, edge and IoT bring their own skill demands.

"Key skills in designing and building edge systems involve shifting focus from traditional centralized data center approaches to understanding and optimizing the edge of networks and infrastructure," George Maddaloni, chief technology officer for operations at Mastercard, told ZDNET. "We need to process data where it's generated, improving data flow efficiency, and reducing the need to send large amounts of raw data to process centrally."

Designing and constructing edge and IoT systems "requires a unique set of skills," Tony Mariotti, CEO of RubyHome, told ZDNET. "Unlike traditional IT which often focuses on centralized data processing, edge computing demands expertise in decentralized architectures and real-time data processing. Professionals need to be adept in IoT integration, network security, and data analytics. These skills focus on rapid, secure data handling at the point of collection, crucial for applications requiring immediate insights."

Also: What is AI? Everything to know about artificial intelligence

And yes, AI and machine learning also figure into edge and IoT initiatives. This is driven by demand for "more intelligent and autonomous systems capable of making decisions in real-time, directly at the point of data collection," Harshul Asnani, president of Tech Mahindra's technology, media, and entertainment business, told ZDNET. "By processing data on the device itself rather than relying on cloud-based systems, these AI-enabled edge devices reduce latency, decrease bandwidth usage, and improve response times. This is crucial for applications requiring immediate action, such as autonomous vehicles, real-time analytics in manufacturing, and smart city technologies."

The insights technology managers and professionals require to move forward with edge and IoT "include the necessity of scalable solutions to manage large data volumes and the importance of enhanced security measures," said Mariotti. "Professionals have learned to deploy complex IoT networks that maintain integrity and confidentiality while handling sensitive data, a crucial advancement for all technology-driven businesses."

This requires "understanding the nuances of data governance and real-time analytics," Asnani agreed. "As data processing moves closer to the edge, managing the sheer volume, variety, and velocity of data generated by IoT devices becomes a complex task. It necessitates robust data governance frameworks to ensure data quality, privacy, and compliance with regulatory standards."

Also: Bank CIO: We don't need AI whizzes, we need critical thinkers to challenge AI

As edge and IoT are more likely to require real-time capabilities, "real-time or near-real-time data analytics become crucial for extracting actionable insights instantaneously, demanding more sophisticated analytical tools and techniques," Asnani added. "Embracing edge analytics requires technological adaptation and a shift in mindset, prioritizing agility, and the ability to make decentralized decisions. Understanding these aspects will be critical for data managers and analysts to leverage the full potential of edge computing and IoT."

Leveraging the edge and IoT has proven to be critical for Mastercard, which maintains far-flung data processing centers. The edge footprint "has shifted to something that can now use both private and public cloud," said Maddaloni. "In public cloud, there is now a series of 'edge cloud' regions that we can use for containers, or for a simplified approach in our private cloud. From a resiliency perspective, we can now include both a single consolidated stack with a power distribution unit for energy backup in the case of failure as well as a cloud backup platform if needed."

MasterCard's edge systems also include sensors to "monitor the performance of motors, pumps, and emergency power generators," Maddaloni added. "The ability of these sensors to automate responses to certain conditions, like adjusting cooling systems or power distribution, minimizes the need for human intervention. This automation not only enhances efficiency but also allows personnel to focus on more strategic tasks."
There are sustainability abilities as well, said Maddaloni. "IoT provides insights that lead to energy savings, water conservation, and overall sustainability in operations. By optimizing resource usage, IoT helps in achieving greener data centers."

Also: 5G and edge computing: What they are and why you should care

The move towards decentralized data processing "means that professionals need to understand how to leverage edge computing to enhance operational efficiency and decision-making processes," said RubyHome's Mariotti. "This is especially critical in sectors that rely on real-time analytics, such as healthcare, finance, and smart real estate operations."

That brings us to the question of whether "edge" is the future for which tech and business pros need to prepare. "With the exponential growth of data at the edge and in IoT environments, a company's edge compute capabilities could become a decisive advantage," said Maddaloni. "The escalating volume of raw data necessitates a shift from centralized processing to edge processing to mitigate bandwidth constraints, reduce costs, and address issues like network latency and congestion."

Featured

Acceleration technologies that will boost HPC and AI efforts

Experts believe we are entering the 5th epoch of distributed computing, in which the heterogeneous design of modern systems has been driven by numerous technology advances in accelerators, next-generation lithography, chiplets, and packaging. This 5th and final article in the series discusses the impact that current and future acceleration technology will have on HPC and AI.

The most visible AI workloads at the moment are large language models (LLMs). Less visible, but foundational, are the accelerators used to ensure the security of our cloud and on-premises datacenters, and those that perform more mundane activities such as data movement. This move to accelerators to reduce or eliminate bottlenecks for common operations is the day of reckoning foreseen by Gordon Moore (of Moore's law): a time when we need to build larger systems out of smaller functions, combining heterogeneous and customized solutions.

Software addresses exponential support issues and avoids vendor lock-in

Software is the key to utilizing these rapidly evolving accelerator technologies, many of which are understandably based on building blocks that accelerate AI-based workloads given their commercial viability.

The size and capabilities of these accelerators vary widely, from dedicated on-package accelerators focused on security to general-purpose accelerators like GPUs. Competition is our performance friend, given the cornucopia of vendor-specific accelerators now available. It has also forced the HPC community to come together, driven by a common need to address the exponentially hard problem of application support for ubiquitous multiarchitecture, multivendor accelerator deployments in datacenters and the cloud. The breadth of HPC deployments and the diversity of workloads are simply too great: no single company, however large, can meet all customer needs, nor can bespoke software customizations performed by humans. Instead, community development efforts create software ecosystems that support platform portability. Extensive, multi-year efforts such as the DOE-funded Exascale Computing Project (ECP) and the oneAPI software ecosystem are two current practical solutions that support existing (and likely future) heterogeneous devices through standards-based libraries and languages. The efficacy of these efforts can be assessed by looking at what works (and does not) for the leaders in relevant workload domains. This is the way to stay on top of the application performance curve and avoid vendor lock-in.

Bigger is better at the moment in AI, as domain leaders are using both NVIDIA and Intel hardware to train trillion-parameter LLMs. These efforts mark a high-water mark for large AI models. The monumental Argonne National Lab ScienceGPT effort, for example, is backed by Intel and the US government. It also reflects the power of exascale supercomputing: training currently uses only a small subset of the Intel Data Center CPU and GPU Max Series powered Aurora supercomputer nodes (testing started with 64 nodes and continues on just 256 of the more than 9,000 nodes that will eventually be installed). The ScienceGPT project combines text, code, specific scientific results, and papers into a model that will be used to speed scientific research.

HBM and reduced-precision arithmetic can make CPUs the preferred platform for both HPC and AI workloads

Such large runs make headlines, but in practice, massive investments are not necessary to train many LLMs.

It is important to recognize that AI workloads, particularly LLM workloads, tend to be memory-bandwidth limited. Results show that High Bandwidth Memory (HBM) can make CPUs the platform of choice for many AI and HPC workloads. HBM is not an "accelerator" device per se, but it can be an important workload accelerator because it helps keep the accelerators and processor cores busy, significantly speeding many workloads.[1] [2] [3] Similarly, hardware-accelerated reduced-precision arithmetic can increase both computational performance and effective memory bandwidth. Examples include the Intel® Advanced Matrix Extensions (Intel AMX) instructions in the latest 4th generation Intel Xeon processors and the Intel Xᵉ Matrix Extensions (Intel XMX) on Intel Data Center GPU Max Series and Flex Series GPUs.
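One reason reduced precision raises effective memory bandwidth is simply that each value occupies half the bytes, so twice as many values move per transfer. The sketch below is plain Python, not AMX/XMX code: it shows the storage idea behind the bfloat16 format, keeping only a float32's top 16 bits (sign, full exponent, truncated mantissa) and trading precision for a 2× smaller footprint.

```python
import struct

def to_bfloat16_bits(x: float) -> int:
    # Pack as IEEE-754 float32, then keep the top 16 bits
    # (bfloat16 by truncation, i.e. round-toward-zero).
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def from_bfloat16_bits(b: int) -> float:
    # Re-expand by padding the lost mantissa bits with zeros.
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

x = 3.14159
xr = from_bfloat16_bits(to_bfloat16_bits(x))
print(x, xr)  # the round-trip keeps roughly 3 significant decimal digits
```

Because bfloat16 keeps the full float32 exponent range, values rarely overflow when converted, which is why it is a popular storage and matrix-math format for deep learning despite the coarse mantissa.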

Use cases and representative workload results show CPUs can be fast enough for many workloads — including LLMs. A single two-socket Intel Xeon Platinum 8480+ node, for example, can train a Bidirectional Encoder Representations from Transformers (BERT) language model in 20 minutes, and it outperforms an NVIDIA A100 GPU on some fine-tuning workloads. In part 3 of “Tuning and Inference for Generative AI with 4th Generation Intel Xeon Processors”, published results showed that AWS customers can use Intel Xeon processors to tune small to medium-sized LLMs for their specific use cases, with the 7-billion-parameter Falcon-7B model as the example. The result is echoed by other PyTorch transformer workloads as well.

Accelerating inference

Many are discovering that large parameter inference workloads can also be challenging. This is where the big memory and reduced precision capabilities of CPUs can help cloud and on-premises AI users meet their desired latency goals, even when using models containing billions of parameters. Unified interfaces are important in supporting these workloads as illustrated by the Hugging Face use of the Intel Neural Compressor Architecture. Of course, the benefits of these AI building blocks, along with HBM, can speed traditional, non-AI HPC workloads.

New, power-efficient accelerators such as the Intel NPU (Neural Processing Unit) in the Intel Core Ultra (aka “Meteor Lake”) CPUs can help bring many of the benefits of these AI-assisted simulations to researchers' desktops and laptops. Time will tell, but local processing offers many advantages, including lower inference latency and fat-client, thin-server Internet AI capabilities. Local processing can also provide better privacy and security.

Specialized accelerators enrich general-purpose devices

Specialized accelerators such as the Intel Gaudi 2 AI accelerator also provide AI-specific training and inference performance. One example is the use of eight of these accelerator cards to run inference workloads on the 176-billion-parameter Hugging Face BLOOM model, among others.

Intel is working to roll the capabilities of specialized AI accelerators like Gaudi 2 into general-purpose accelerators. For example, Intel has announced that the Intel Data Center GPU Max Series and Gaudi AI chip road maps will converge, starting with a next-generation product code-named Falcon Shores.

Looking beyond CPUs and GPUs

All the accelerators discussed thus far utilize conventional Von Neumann hardware and neural network models. Research is proceeding on remarkable new technologies such as neuromorphic, quantum and other devices to understand how they might impact the future of HPC and AI.

Neuromorphic computing

New non-von Neumann approaches such as neuromorphic computing promise to deliver high-accuracy AI solutions while consuming orders of magnitude less power. Examples in the literature show that spiking approaches can match the accuracy of traditional neural networks on vision problems with far greater power efficiency than current CPUs and GPUs. The SpikeGPT project reflects a current effort to apply these spiking neural network models to large language models.
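To show what "spiking" means in practice, here is a minimal leaky integrate-and-fire neuron in plain Python, the basic building block of such networks. This is an illustrative sketch with arbitrary parameter values, not SpikeGPT or Loihi code:

```python
# A minimal leaky integrate-and-fire (LIF) neuron, the basic unit of the
# spiking neural networks used in neuromorphic computing.
# Illustrative sketch only; parameter values are arbitrary.

def lif_neuron(input_current, leak=0.9, threshold=1.0):
    """Simulate one LIF neuron over a sequence of input currents.

    Returns the spike train: 1 when the membrane potential crosses the
    threshold (after which it resets), else 0. Between inputs the
    potential decays by the leak factor, so the neuron fires -- and
    consumes energy -- only when input is strong or sustained.
    """
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# Weak, sustained input accumulates until a single spike is emitted.
print(lif_neuron([0.3, 0.3, 0.3, 0.3, 0.3]))
```

Because computation happens only on spike events rather than on every dense multiply-accumulate, hardware built around this model can idle most of the time, which is the source of the power savings described above.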

Neuromorphic hardware continues to advance. Intel's Loihi project provides one example of a neuromorphic research processor that is being used to advance the state of the art in accelerated AI performance and power efficiency. Loihi supports a broad range of spiking neural networks and can run at sufficient scale, with the performance and features needed to deliver competitive results compared to state-of-the-art contemporary computing architectures. As AI-augmented science and commercial applications advance, such extraordinarily power-efficient devices become ever more attractive from a cost, performance, and global climate perspective, both for local and datacenter processing.

The recently announced Intel Hala Point neuromorphic system, which utilizes Intel Loihi 2 processors, is a concrete instantiation of this progress. Hala Point is Intel's first large-scale neuromorphic system. It is being used to demonstrate state-of-the-art computational efficiencies on mainstream AI workloads. Characterization shows Hala Point can support up to 20 quadrillion operations per second, or 20 petaops, with an efficiency exceeding 15 trillion 8-bit operations per second per watt (TOPS/W) when executing conventional deep neural networks. This rivals and exceeds levels achieved by architectures built on graphics processing units (GPUs) and central processing units (CPUs). Hala Point's unique capabilities could enable future real-time continuous learning for AI applications such as scientific and engineering problem-solving, logistics, smart city infrastructure management, large language models (LLMs), AI agents and more.
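As a back-of-envelope check using only the figures quoted above, 20 petaops at an efficiency of at least 15 TOPS/W implies a system power draw of roughly 1.3 kW or less for those workloads:

```python
# Sanity-check the quoted Hala Point figures: peak throughput divided by
# efficiency gives the implied power draw at that operating point.

peak_ops = 20e15    # 20 quadrillion 8-bit ops per second (20 petaops)
efficiency = 15e12  # 15 trillion ops per second per watt (15 TOPS/W)

implied_watts = peak_ops / efficiency
print(f"Implied power: ~{implied_watts:.0f} W")
```

That is on the order of a single high-end GPU's power budget for a full system's worth of throughput, which is the efficiency argument being made.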

Quantum computing

Quantum computing promises a game-changing new computing capability. The paper “Local minima in quantum systems”, for example, discusses why finding local minima is easy for quantum computers and hard for conventional ones.

This is a field where researchers continue to reach foundational milestones. Intel Labs, for example, is involved in research collaborations to demonstrate practical solutions using quantum technology. In collaboration with industry and academic partners, a team successfully demonstrated the supervised training of very small 2-to-1-bit neural networks using nonlinear activation functions on actual quantum hardware. Such milestones represent significant progress, but while the field of quantum computing is rapidly advancing, practical solutions still remain tantalizingly out of reach.

Summary

The 5th epoch of computing identified by industry experts and foretold by Gordon Moore long ago clearly provides many benefits for HPC and AI workloads, but only when the data security and accessibility infrastructure is able to support user needs. Accelerators clearly are the future, which makes it easy to predict that standards-based, community software development ecosystems like oneAPI and the ECP Extreme Scale Software Stack (E4S) will become the portable infrastructure through which the scientific computing community accesses accelerated capabilities. Otherwise, the combinatorial support problem becomes intractable unless one is willing to accept vendor lock-in. Such community-developed infrastructure is necessary given the breadth of new computing models and hardware that are in the works and approaching widespread use. [4] [5] [6] [7] [8] [9] [10]

Learn more about how AI accelerated HPC will impact the future of supercomputing through the previous articles in this series. (Technology investment guidelines are provided in article 3.)

  • Article 1 – “Ushering in the 5th epoch of distributed computing with accelerated AI technologies”
  • Article 2 – “Addressing the Challenge of Software Support for Multiarchitecture AI Accelerated HPC”
  • Article 3 – “AI-Accelerated Technology Investment Guidelines for HPC”
  • Article 4 – “Use cases show that on-package accelerators benefit HPC/AI workloads from computation to data movement and security”
  • Article 5 – “Acceleration Technologies That Will Boost HPC and AI Efforts”

For workload specific information, look to the leaders in your area(s) of interest to see how community software development efforts and the use of standards-based libraries and languages can meet current and future computational needs. The most general-purpose accelerated software ecosystems at this time are oneAPI and E4S.

  • The Argonne Leadership Computing Facility AI testbed is a good information resource about the capabilities of the next-generation AI accelerators.
  • Work on the current generation of Department of Energy exascale supercomputers provides information about the leading edge and exploration into what is possible in both HPC and AI.
  • For hands-on testing, download and start working with the E4S software and oneAPI ecosystem.
    • Many cloud providers offer access to new accelerated platforms. Look to your preferred cloud service providers.
    • HPC groups can contact E4S to gain access to the Frank cluster. This cluster is used for verification of the E4S software and can provide test access to recent accelerator hardware not covered under NDA.

Rob Farber is a global technology consultant and author with an extensive background in HPC and machine learning technology.

[1] https://www.intel.com/content/www/us/en/products/details/processors/xeon/max-series.html

[2] https://www.intel.com/content/www/us/en/products/docs/processors/max-series/overview.html

[3] https://www.datasciencecentral.com/internal-cpu-accelerators-and-hbm-enable-faster-and-smarter-hpc-and-ai-applications/

[4] https://arxiv.org/abs/2201.00967

[5] https://cs.lbl.gov/news-media/news/2023/amrex-a-performance-portable-framework-for-block-structured-adaptive-mesh-refinement-applications/

[6] https://www.exascaleproject.org/collaborative-community-impacts-high-performance-computing-programming-environments/

[7] https://www.exascaleproject.org/highlight/exaworks-provides-access-to-community-sustained-hardened-and-tested-components-to-create-award-winning-hpc-workflows/

[8] https://www.exascaleproject.org/e4s-deployments-boost-industrys-acceptance-and-use-of-accelerators/

[9] https://www.exascaleproject.org/harnessing-the-power-of-exascale-software-for-faster-and-more-accurate-warnings-of-dangerous-weather-conditions/

[10] https://journals.ametsoc.org/view/journals/bams/102/10/BAMS-D-21-0030.1.xml

This article was produced as part of Intel’s editorial program, with the goal of highlighting cutting-edge science, research and innovation driven by the HPC and AI communities through advanced technology. The publisher of the content has final editing rights and determines what articles are published.

OpenAI Adds PwC as Its First Resale Partner for the ChatGPT Enterprise Tier

Major accounting and professional services firm PwC announced on May 29 a deal with OpenAI to become a reseller of ChatGPT Enterprise and purchase more than 100,000 licenses, marking a large new revenue stream for the AI maker's enterprise product.

The US and UK firms of PwC are now “OpenAI’s first reseller for ChatGPT Enterprise and the largest user of the product,” PwC stated in a press release. Specifically, PwC will complement its audit, tax and consulting services with ChatGPT Enterprise’s generative AI capabilities.

PwC Deal Shows Confidence in OpenAI’s Enterprise Offerings

PwC has not disclosed the financial terms or duration of its deal or its ChatGPT Enterprise subscriptions. The deal “builds upon” PwC's commitment to invest $1 billion in AI capabilities over three years, starting in April 2023.

PwC can now use OpenAI’s latest large language model, GPT-4o, for its ChatGPT interactions. PwC will use GPTs, custom AI agents, for:

  • Reviewing tax returns.
  • Generating proposal responses.
  • Assisting with the software lifecycle.
  • Generating dashboards and reports.

PwC’s messaging to its clients includes “emphasizing the near universal demand across industries for the transformative power of this technology,” at a time when some critics of AI argue that the technology is still in search of use cases.

PwC holds more than 100,000 ChatGPT Enterprise licenses: 75,000 for US employees and 26,000 for UK employees.

Meanwhile, PwC says it has found 3,000 internal use cases for generative AI.

“We are a knowledge based business and generative AI is only getting more effective at making knowledge accessible and scalable,” Bret Greenstein, PwC partner and generative AI initiative leader, told TechRepublic in an email. “We tracked the growth of GenAI capabilities (10x a year on average) which showed that the types of things GenAI can do will only accelerate.”

The deal is a major win for OpenAI, particularly because artificial intelligence technology is very expensive to run.

“At a time when business leaders across industries demand outcomes and business impact – and not just potential – PwC’s expanded relationship with OpenAI provides a playbook for companies looking to scale their AI infrastructure, apps, and services,” PwC wrote in the press release.


“We have strong relationships with all the major AI players and use their tools across our business,” said Greenstein. “However, we have been working with OpenAI since GPT-2 and even longer for AI/ML work, so we have built up our learning and understanding of these trends which helped us pave the way for this agreement.”

OpenAI Forms Safety Committee, Signs Media Deals

OpenAI made a few other major announcements this week: the formation of a Safety and Security Committee and content deals, including with The Wall Street Journal parent News Corp.

The Safety and Security Committee

The Safety and Security Committee, announced on May 28, has 90 days to develop the processes and safeguards that OpenAI will use while developing its upcoming cutting-edge, so-called “frontier” models. The committee includes CEO Sam Altman and a panel of team leaders and security experts.

Content and media deals

On May 22, OpenAI announced a content deal to bring News Corp articles to ChatGPT. News Corp owns The Wall Street Journal, Barron's, MarketWatch, New York Post and more. The deal will allow OpenAI to draw on content from News Corp publications when answering questions asked of ChatGPT.

Deals with Vox Media and The Atlantic followed. On May 29, OpenAI announced an agreement in which ChatGPT will use Vox Media content, and Vox Media will build consumer and advertising products with ChatGPT. On the same day, The Atlantic joined OpenAI “as a premium news source,” which “will be discoverable within OpenAI’s products” and “will help to shape how news is surfaced and presented in future real-time discovery products.” Meanwhile, The Atlantic will explore ways to use OpenAI tech on “an experimental microsite.”

Fujitsu and Mizuho Deploy GenAI to Track Trends in Humpback Whale Migration

Fujitsu and Mizuho Financial Group have partnered to track trends in humpback whale migration and promote sustainable tourism for Hachijo Island, a popular tourist destination. The island features historical remnants from different periods of Japanese history and stunning views of the dormant volcano Hachijō-Fuji.

The primary goal of the project is to conserve the island's natural beauty and ecosystem while allowing residents and visitors to enjoy the island’s precious resources. The project will proceed per the Tokyo Metropolitan Government's “Tokyo Treasure Island Sustainable Island Creation Project”, which aims to promote sustainable development and enhance the quality of life on Tokyo Islands.

As the only financial institution on the island, Mizuho is leveraging its expertise in finance and digital technology to tackle regional issues and help realize a “smart island”. Mizuho is promoting various projects aimed at improving digitalization, tourism, and government processes on the islands.

As part of the collaboration, Fujitsu’s state-of-the-art AI-based image recognition technology will be deployed in a pilot program to better understand humpback whale migration trends in the area. The data from the program will also be used to promote ethical tourism on Hachijo Island.

Fujitsu's AI model for the program has been trained on humpback whale migration data. Fujitsu and its partners will analyze footage from multiple fixed-point cameras on the island to verify the feasibility of humpback whale detection. The project is scheduled to run from May 2024 to October 2024 and may be extended for verification of results.

The humpback whale population suffered a devastating decline before commercial whaling was banned in 1985. The species was extensively hunted for oil, meat, baleen, and other whale products.

Japanese waters are renowned for humpback whale encounters, but sightings have dropped dramatically. Traditional ecological surveys were done manually, which imposed several limitations, including logistical constraints, observer bias, and interpretation errors.

Some estimates suggest that the humpback whale population declined by as much as 95% from pre-whaling levels due to overexploitation. While the species has made a slight comeback in terms of population, it remains endangered.


Having a deeper understanding of their migration patterns can boost conservation efforts and allow authorities to make more informed decisions regarding maritime traffic and fishing regulations. Additionally, the migration data can enable scientists to better grasp whale biology and ecology, and the impact of climate change on the health of the marine ecosystem.

Now, with Fujitsu's AI technology, researchers are equipped with more powerful tools for biodiversity and ecological surveys. Not only does this improve the efficiency and scalability of the process, but it also increases the consistency of the data. We hope that with more precise and reliable data, researchers and authorities can make more informed decisions to protect and preserve our planet and its inhabitants.


I tested Opera’s new Gemini-powered AI capabilities and came away impressed


Aria's take on a cat and mouse playing chess.

My favorite browser, Opera, has had an AI feature called Aria for a while now. On the rare occasion that I need AI assistance (for research or search purposes), I always turn to Aria. To that end, Opera's AI has been pretty fantastic.

Recently, however, Opera announced it will begin adding Google's Gemini AI models to help power Aria. That doesn't mean Opera intends to replace the LLM (large language model) Aria currently uses. In fact, Aria draws on multiple AI models to respond to queries (choosing the model it deems best for the query at hand). Aria will now also have access to Google Gemini, which comprises several models (from Gemini Nano to Gemini Ultra).


This new integration isn't just about being able to respond more quickly and accurately to queries. Users will also find Opera's Aria AI now includes new features, such as the ability to read responses out loud. It's also capable of rendering images based on queries, thanks to the Imagen 2 model on Vertex AI.

Opera has also introduced an AI Feature Drops program. According to Krystian Kolondra, EVP at Opera, "AI is moving fast and so are we. We've started the AI Feature Drops Program to allow people to test our newest AI explorations that either will or won't make it to the official version of Opera One. We are excited to let our most engaged users test and share their feedback and suggestions with us."


I downloaded the Opera Developer edition some time ago and, soon after the announcement, the update was made available. I applied the update and kicked the tires of the new Aria AI and came away impressed.

One thing to keep in mind is that both the speech and image features have been available in Opera's developer desktop version since late April. The difference is that both features are now more reliable and considerably faster, thanks to the addition of Google's LLMs. On top of that, before adopting Google's AI models, the text-to-speech in Aria was not exactly conversation-like.

Let's dig in.

Text to speech

The first feature I tested was text-to-speech. To use it, run a query in Aria. When the query completes, hover your cursor near the top right corner of the response to reveal a menu that includes a small speaker icon. Click that icon and the AI voice will start reading the response. To my surprise, the voice sounded fairly realistic. Yes, I could tell it was AI at times (especially with less common names), but overall the voice had a natural pitch, timbre, and cadence (far better than Google Assistant's voice).

I asked Aria to explain Linux.

You can't change the voice or the rate at which it speaks, but you can pause it (by hitting the pause button). This feature is available on both the desktop and mobile versions of Opera (Developer on the desktop and Beta on Android).

Image generation

The only changes to Aria's image generation (since the Gemini adoption) are in its speed and reliability. Prior to Gemini, I tested the image capability and found that it sometimes couldn't handle the query and would respond with an error. Try again and it might succeed. With the help of Imagen 2 on Vertex AI, image generation never fails.

Did I fail to mention that image generation is also free with Aria?

At the moment, the image generation feature is available only in the desktop version (Developer), not the mobile version.

If you're keen on AI, I would highly recommend you give Opera Developer and Aria a try. From my experience, Opera's take on AI is the best of all web browsers (and it's not even close).


India is Likely to Develop its Foundational Model This Year

Why Isn’t There an Alibaba of AI in India Yet

The US currently boasts major tech companies such as Microsoft, Apple, NVIDIA, and Google, along with startups like OpenAI, spearheading AI advancements. Similarly, China is close behind, just a year away in the AI race, with giants like Alibaba and Tencent, as well as emerging players such as 01.AI, leading the charge. And India?

“We need someone to engage as a frontline player in this space actively. Someone who has the resources to start from scratch; not relying on existing solutions but creating foundational models,” Stition.ai founder and Devika creator Mufeed VH told AIM in the latest episode of Tech Talks.

Further, he said that Indian companies can either sponsor or utilise their resources for these initiatives, yet no one has taken the initiative to start. “However, I am optimistic that India will develop a foundational model within this year,” he said.

Is India’s ‘Jio Moment in AI’ Coming Soon?

Recently, Reliance Jio partnered with NVIDIA to use GH200 GPUs to build AI models in India. During his visit to India last year, NVIDIA chief Jensen Huang was optimistic about Reliance building its own LLMs to power generative AI applications made in India.

For now, Reliance Jio is keeping its AI developments under wraps, with no public disclosures to date. “Reliance wants to revolutionise the enterprise space with the use of AI… There is a centre of excellence with 100 experts working on AI solutions. Mukesh believes it is going to be transformative,” said Reliance New Energy Council chairman R A Mashelkar, in a recent interaction with Fortune India.

Meanwhile, Jio recently launched Jio Brain, positioned as the industry’s first 5G-integrated ML platform. It aims to empower telecom networks, enterprise networks, and industry-specific IT environments to incorporate ML tools into their day-to-day operations seamlessly.

TWO, a startup backed by Reliance Jio, also recently launched a family of models called SUTRA. These cost-efficient, multilingual GenAI models excel in 50+ languages, offering speech, search, and visual processing capabilities.

Renowned startup accelerator JioGenNext introduced its latest cohort, MAP’ 24, consisting of ten dynamic, generative AI startups spanning diverse sectors such as healthcare, banking, legal services, entertainment, and agriculture.

Earlier this year, Jio also partnered with IIT Bombay to bring about initiatives like BharatGPT, which focuses on developing AI solutions for several sectors, including the telecom and retail sectors. However, there have not been any significant revelations yet.

Adani AI Labs, an initiative by the Adani Group aimed at leveraging AI to tackle large-scale industrial problems, is also working on exciting AI projects and bringing them to the masses. One notable example is ‘Train PNR Prediction’, which predicts the confirmation probability of waitlisted train tickets for end users.

“We have achieved 95% accuracy in predicting it,” said Adani Digital Labs senior manager Gaurav Jain to AIM.

India is not left behind. Other giants like TCS, Infosys, Wipro, HCLTech, and LTIMindtree seem focused on enterprise solutions, upskilling and reskilling, alongside experimenting with real use cases.

Last year, Tech Mahindra became the first IT giant to launch something like a Generative AI Studio. The IT solution provider introduced Tech Mahindra amplifAI0->∞, a comprehensive suite of AI offerings and solutions aimed at democratising and responsibly scaling AI deployment.

It is also working on an indigenous LLM in 40 different Indic languages, most notably Hindi. Called Project Indus, the model will be able to speak in many Indic languages.

One prominent initiative taking flight in India is AI4Bharat, which started as a collaboration between IIT Madras and Nandan Nilekani's EkStep Foundation. It is sponsored by Bhashini, Microsoft, Google, and NVIDIA, and its contribution to the Indic open-source AI community has been tremendous. The problem is that it is the only prominent initiative of its kind in the country so far. That's why India needs more ‘AI4Bharats’.

The time is ripe, and Indian conglomerates and IT giants can do a lot more. Recent earnings from big tech companies show growth driven by advancements in generative AI, and it’s unlike anything seen before.

China Is a Year Away From the US. And India?

“People in China cannot access ChatGPT, OpenAI blocked China from accessing it,” revealed 01.AI founder Kai-Fu Lee, saying that his country shouldn’t be left out of this revolution.

“I strongly believe that the US will lead in breakthrough innovations, but China is better at execution,” said Lee.

01.AI is a Chinese AI startup that only emerged about a year ago and is already at a billion-dollar valuation. It takes pride in calling itself open source, giving away its AI models to cultivate a loyal developer community that can contribute to the creation of groundbreaking AI applications. The startup also raised $200 million in investment from Chinese e-commerce giant Alibaba.

Lee believes that Chinese companies have closed the AI gap between the US and China greatly by executing better.

“Taking my company as an example, we were eight years behind a year ago. Now we’re probably less than one year behind the top American company,” he said.

Ex-Google CEO Eric Schmidt says otherwise. He had earlier said that China is focused on dominating several industries, but as of now, the US still maintains a significant lead in AI.

“In the case of AI, we are well ahead two or three years, probably, of China, which in my world is an eternity,” he added.

The rapid AI advancements coming from China call this claim into question. Brands such as Baidu, Tencent, Alibaba, and Huawei have become household names in China, and these are the very companies investing heavily in generative AI and releasing AI models like there is no tomorrow.

Last year, Alibaba developed Qwen-72B, Tencent released ‘Hunyuan’, and Lee’s AI startup, 01.AI, also open-sourced its foundational LLM called Yi-34B.

China has also released a Sora rival named ‘Vidu’, and recently, French luxury group LVMH extended its partnership with the Alibaba Group to integrate Alibaba Cloud's generative AI capabilities, including Qwen and Model Studio (Bailian), to enhance customer experience in China.

Lately, the internet has been abuzz with new AI developments coming out of China daily, from posts about teachers in China using AI to grade exams to China developing military robot dogs.

Chinese scientists also developed the world’s first AI child entity called Tong Tong.

It’s high time Indian conglomerates and IT giants took the lead in disrupting the AI space.

The post India is Likely to Develop its Foundational Model This Year appeared first on AIM.

8 Online AI Tools for Creating PPTs In Seconds

Microsoft introduced Copilot, integrating it into Microsoft 365 to provide users with more agency and enhance accessibility using natural language processing. Copilot is now available within familiar Office suite apps like Word, Excel, PowerPoint, Outlook, and Teams.

With PowerPoint, in particular, Copilot acts as an AI assistant, simplifying presentation creation by effortlessly transforming ideas into polished slides. It streamlines tasks such as generating drafts, distilling complex content, organising slides, and applying brand styles. Users can explore different ideas and formats, refining their presentation skills along the way.

By leveraging Copilot’s intuitive features and feedback, presenters can save time. Whether for work, school, or personal use, Copilot empowers users to deliver impactful presentations that captivate audiences and effectively convey messages.

Given that they are now for sale, here are brief reviews of the Microsoft Copilot Pro Apps I have tried:
Outlook: This is the slickest of the Copilots in terms of deep integration into the core application, and in many ways is the most obvious use case. It basically lets GPT-4… pic.twitter.com/F81sINsBek

— Ethan Mollick (@emollick) January 16, 2024

However, Copilot isn't the only AI tool that can build PowerPoint presentations. Here are 8 alternative online AI tools to create your PowerPoint presentations in seconds.

  1. PopAi
  2. Beautiful.ai
  3. Decktopus
  4. Tome
  5. SlideSpeak
  6. Gamma
  7. Plus.ai
  8. SlidesAI
| Tool | Feature | Price |
| --- | --- | --- |
| PopAi | Chat with Document & Image Chat | $49.99/year |
| Beautiful.ai | Generate new feature | $40/month |
| Decktopus | Auto-created deck for presentations | $34.99/month |
| Tome | Dynamic text editing & image generation on text | $16/month |
| SlideSpeak | One-click polish tool | $19/month |
| Gamma | Design lock-in | $15/month |
| Plus.ai | Generates slideshows from scratch with versatile templates | $20/month |
| SlidesAI | Design Intelligence | ₹832.83/month |

PopAi

PopAi is an AI tool that offers versatile conversational experiences, supporting over 200 languages. It caters to both personal and professional needs, adapting to educational queries, technical support, and creative idea generation.

PopAi introduces innovative features like “Chat with Document” for instant insights from documents, “AI Presentation” for efficient presentation creation, and “Image Chat” for visual understanding.

By leveraging AI presentation tools, users can save valuable time and increase productivity. These tools automate various aspects of presentation creation, such as design suggestions and content generation. With AI chat, users no longer need extensive design knowledge or expertise to create visually appealing slides.

2. Pop Ai
Your Personal AI Workspace
You can:
– Create presentations
– Craft CVs and resumes
– Write academic essays
– Design flowcharts
– Write blogs
– Debug code, and much more. pic.twitter.com/8XELh9Oy5W

— D-Coder (@Damn_coder) May 28, 2024

Beautiful.ai

Beautiful.ai is a web-based tool that helps one create stunning presentations in minutes using AI to design slides based on content and preferences. It handles fonts, colours, layouts, and animations.

One can collaborate with a team in real time and convert presentations to .pdf or .pptx formats for sharing online or offline. Beautiful.ai works on any device and browser, allowing one to create and present from anywhere. It also integrates with Slack.

With DesignerBot, one can quickly design slides and benefit from helpful brainstorming, instant text, and image generation.

Decktopus

Decktopus is a simple and intuitive tool for creating presentations online. It automatically adjusts a presentation, aligning text, images, icons, and colours harmoniously for an appealing design.

Decktopus is a smart assistant that helps to create effective and engaging presentations. It enables one to create visually stunning and professional presentations by offering different themes and styles to suit your purpose and audience.

One of the standout features of Decktopus is its AI-enriched content. The tool provides image and icon suggestions, slide notes, and more content ideas based on topic and audience. One can also add voice recordings, videos, URLs, and other multimedia elements to enhance the presentation.

Tome

Tome is an innovative tool that helps users quickly and easily create presentations and other narrative content. By using prompts or existing documents as input, Tome generates visuals, layouts, and text suggestions to build professional-looking presentations.

Users can generate presentations simply by providing a prompt, and the output is organized by a table of contents, complete with text, introduction slides, and AI-generated images. Tome’s presentations have a distinctive style, typically featuring a black background, white text, and AI illustrations, setting them apart from traditional PowerPoint or Google Slides presentations.

Tome AI
Tired of spending hours making slides?
Generate your presentations in seconds. pic.twitter.com/rynggzMiSe

— Paul Couvert (@itsPaulAi) May 9, 2023

SlideSpeak

SlideSpeak revolutionises presentation creation by allowing users to upload PDFs or Word documents and automatically generate presentations based on the content.

With just one click, one can transform the document into a presentation without needing to interact extensively with an AI bot. The generated presentation can then be downloaded as a PowerPoint file (.pptx), where one can fix any misaligned text or images and make further edits as needed.

Additionally, SlideSpeak enables one to share and download the presentation in both formats, providing a seamless and efficient way to create and distribute professional presentations.

Gamma

Gamma is an AI-powered tool designed for creating professional presentations, websites, and documents. With Gamma, design lock-in is no longer an issue. It accelerates content creation by generating templates that automatically align with the brand.

Gamma supports various media formats, making it easy to include GIFs, videos, charts, or websites. This enhances presentations and helps convey complex ideas more effectively. The media integration feature ensures that your content is dynamic and engaging.

Ideal for large teams, Gamma allows for real-time collaboration, enabling instant feedback and collective efforts, all within a single platform. This feature caters to the needs of big teams and ensures seamless teamwork.

Business owners and students will love this.
This AI tool can save you hours of work by designing and customizing an entire slide deck.
Here's how to use Gamma- ChatGPT for presentations:
(It's free👇)pic.twitter.com/4tZhseJxFh

— Rowan Cheung (@rowancheung) April 8, 2023

Plus.ai

Plus.ai is an AI tool that can be integrated into Google Docs and Slides to generate custom content quickly and effortlessly. It prioritises professional designs, ensuring that presentations are suitable for both professional and academic contexts.

Additionally, the AI copilot functionality facilitates collaborative presentation creation by seamlessly integrating AI throughout the process. The Live Snapshots feature, powered by Plus’s Snapshot technology, automates data updates, ensuring that information remains current.

Moreover, Plus AI focuses on content quality, generating an appropriate amount of text for each slide and demonstrating an advanced understanding of various slide layouts. Furthermore, the rewrite feature allows users to quickly rectify inaccuracies in content with AI assistance.

SlidesAI

SlidesAI is seamlessly integrated into Google Slides, offering users the ability to utilise generative AI directly within the platform. It was initially launched with the capability to generate presentations from lengthy text documents.

SlidesAI has recently expanded its functionality to include the creation of presentations using shorter prompts as well. Alongside these primary features, SlidesAI also provides users with additional tools such as image suggestions tailored to specific slides, text paraphrasing options for refining content, and a text-to-slides feature that enables users to effortlessly convert existing text into presentation slides through simple copy-paste actions.

The post 8 Online AI Tools for Creating PPTs In Seconds appeared first on AIM.