Free Google Cloud Learning Path for Gemini


Introduction to Gemini

It's the era of language models, and Gemini is Google's latest and most capable model to date.

Gemini is the result of large-scale collaborative efforts by teams across Google, including those at Google Research. It was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across, and combine different types of information, including text, code, audio, image, and video.

If you are interested in learning about Gemini, language models, and using them to your benefit, Google has launched a new intermediate language model-focused learning path, the Gemini for Google Cloud Learning Path. Find out about the learning path below.

The Learning Path

The Google Cloud Gemini Learning Path demonstrates how Gemini can be a force multiplier for a number of different roles. With its conversational natural-language chat interface, Gemini enables quick interactions for cloud-related questions and offers advice on best practices. It helps with coding tasks by providing code completions or generating code as you type, or based on comments you write. The learning path serves a variety of roles, including developers, data analysts, cloud engineers, architects, and security engineers.


In the first course, Gemini for Application Developers, learn how Gemini can help you build applications. Learn all about prompting, getting code explained, and even generating code.

In the second course, Gemini for Cloud Architects, find out how Gemini helps to provision infrastructure. See how Gemini can explain infrastructure, update infrastructure, and deploy Google Kubernetes Engine clusters. The course uses a hands-on lab to help cement learning.

In the third course, Gemini for Data Scientists and Analysts, discover how Gemini can help you analyze data and make predictions. With a focus on customer data, learn how to identify, categorize, and develop new customers with the help of Google's BigQuery.


The next course, Gemini for Network Engineers, demonstrates how Gemini helps network engineers create and manage virtual private cloud networks. Learn prompting strategies to have Gemini assist with your networking tasks.

The fifth course is titled Gemini for Security Engineers, and is designed to show you how to treat Gemini as a collaborator for securing your cloud environment and resources. You will see how Gemini can help deploy example workloads into a Google Cloud environment and identify security misconfigurations.

Course number 6, Gemini for DevOps Engineers, covers how Gemini can help engineers manage their infrastructure. Use Gemini to understand and manage application logs, create Google Kubernetes Engine clusters, and more.


The seventh course, Gemini for end-to-end SDLC, demonstrates using Gemini alongside additional Google products and services to develop, test, deploy, and manage your own applications, from inception to deployment.

In the final course in the learning path, Develop GenAI Apps with Gemini and Streamlit, learn all about text generation, using function calls, and creating and deploying a Streamlit application with Cloud Run.

Summary

Learn how to leverage Gemini for a whole host of engineering tasks with Google Cloud's latest learning path, the Gemini for Google Cloud Learning Path. Check out the more detailed information course-by-course to see if this is something that you could benefit from in your professional life.

Matthew Mayo (@mattmayo13) holds a Master's degree in computer science and a graduate diploma in data mining. As Managing Editor, Matthew aims to make complex data science concepts accessible. His professional interests include natural language processing, machine learning algorithms, and exploring emerging AI. He is driven by a mission to democratize knowledge in the data science community. Matthew has been coding since he was 6 years old.


10 DeepFake AI Tools to Help You Create Content within Minutes

Recently, LinkedIn co-founder Reid Hoffman introduced ‘Reid AI’, a twin edition of himself which has been making the rounds on the internet.

He said that it is an experiment to explore whether engaging with an AI-generated version of himself could prompt self-reflection, unveil new insights into his thought patterns, and reveal deep truths.

Why did I deepfake myself? To see if conversing with an AI-generated version of myself can lead to self-reflection, new insights into my thought patterns, and deep truths. pic.twitter.com/DWODoZ9lXL

— Reid Hoffman (@reidhoffman) April 24, 2024

Deepfakes, synthetic media manipulated using artificial intelligence (AI), have become a powerful tool for generating realistic and engaging video and audio content. Hoffman's AI twin has flooded social media, as this is the first time he has created something like it.

A user on X suggested, “Absolutely worth viewing, regardless of your interest in deepfake AI or not. Reid’s interview with his AI twin provides genuinely insightful moments and valuable insights.”

In a Reddit discussion, a user said, “I totally get what you’re looking for, and I’ve had a fantastic experience with Hipclip.ai. It’s my go-to for AI-generated videos based on transcripts. The tool allows you to maintain control over the visuals and closed captions, ensuring they align perfectly with your provided transcript. I’ve used it for YouTube Shorts, and the results were impressive. Try this tool, and I’m sure you’ll be delighted with the outcome.”

Here are 10 deepfake AI tools for content creation that make the job easier.

DeepFaceLab

DeepFaceLab, the premier software for crafting deepfakes, commands over 95% of the market. Open-source and compatible with Windows, it leverages machine learning algorithms and GPU acceleration to generate top-tier fake videos.

Offering a plethora of features such as face replacement, de-aging, and head swapping, DeepFaceLab caters to both novice and expert users. While mastering it may demand some technical know-how and tutorial guidance, its expansive array of features and customisation options ensures the creation of professional-grade deepfakes.

Reface AI

ReFace AI is an advanced technology that combines artificial intelligence and machine learning to enable users to swap faces in videos and images, creating realistic and entertaining content.

Maintaining fine details like texture, wrinkles, and shadows is crucial for effective results. ReFace AI employs feature extraction techniques to preserve these intricacies, intelligently blending the original and target faces for seamless, high-quality output.

Well, that’s insanely cool🤌🏻
High key obsessed with the idea and the final result! https://t.co/oJ0JSQ3TkI

— Reface (@reface_app) September 21, 2023

Faceapp

Faceapp generates highly realistic transformations of human faces in photographs by using neural networks based on artificial intelligence. The app can transform a face to make it smile, look younger or older, or change its gender.

There are multiple options for manipulating the uploaded photo, such as editor options for adding impressions, make-up, smiles, hair colors, hairstyles, glasses, age effects, or beards. Filters, lens blur, and backgrounds, along with overlays, tattoos, and vignettes, are also part of the app.

Wombo

CEO Ben-Zion Benkhin developed Wombo. The application allows users to take a new or existing selfie and then select a song from a curated list to create a video that artificially moves the selfie’s head and lips in synchrony with the song.

The app works for any and all images resembling a face, though it performs best with three-dimensional characters facing the camera straight on. It’s perfect for sharing funny videos on platforms like TikTok and WhatsApp.

Facemagic

FaceMagic is an AI-driven face-swapping application available on both Android and iOS platforms. Featuring a user-friendly interface, it simplifies the process of replacing faces in both videos and images.

This app offers a fresh twist to traditional selfies and pictures, empowering users to unleash their creativity. Whether crafting memes, animating friends, or inserting oneself into beloved TV shows and movies, FaceMagic opens up a world of imaginative possibilities.

Sigma Tom Cruise Rule: No Mission is Impossible 😏#facemagic #faceswap #MissionImpossible #tomcruise #sigma #sigmarule #funny #memes pic.twitter.com/qicLZsLdF4

— facemagic (@facemagic_app) May 21, 2023

DeepNostalgia

DeepNostalgia, crafted by MyHeritage, is a remarkable app delving into deepfake technology, breathing life into old photographs and resurrecting ancestors from the past. Through a blend of computer vision and deep learning, it offers a poignant means for users to engage with their family’s heritage.

Accessible on both Android and iOS platforms, this app employs cutting-edge deep learning algorithms to deliver remarkably realistic facial animations, fostering profound connections with our familial past.

Faceplay

FacePlay is another great deepfake app on our list, very similar to the Reface app. The app offers several video, picture, and GIF templates, some of which can be used for free. To use the app, all you need to do is upload your picture and select a template; the app will then generate magic avatars of you without any complicated steps, saving you a lot of waiting time.

It provides a wide range of features, such as animating live photographs, over 3,000 costume video templates, and much more.

DeepBrain

DeepBrain is deepfake software that stands out for its ability to create highly authentic and visually stunning fake videos. Utilizing advanced AI techniques, DeepBrain has become a popular choice for users who want to create professional-quality deepfake video content.

It also has an effortless and reliable face-swapping ability. DeepBrain offers an intuitive interface permitting novice and expert users to discover its countless capabilities for making stunning fake videos.

Watch as AI Avatar Olivia reports a TLDR version of this breaking news article which was created using AI Studios' Article to Video feature!📝
🔗Read More: https://t.co/knIJFGR2L3
🚀Try it for yourself: https://t.co/ZICeH2vRhS pic.twitter.com/ULdRBEjXEQ

— DeepBrain AI (@DeepBrain_ai) March 29, 2024

Jiggy

Jiggy is a deepfake app that allows users to make people in photos dance by swapping their faces on animated bodies. The app uses AI technology to create photorealistic body deepfake videos, enabling users to instantly transform static images into dancing GIFs.

Its features include swapping faces from photos onto animated dancing bodies to create funny, dancing GIFs and accessing an always-up-to-date catalog of clips and movies to swap faces with top actors and VIPs.

New Orleans Are You Ready For…#zionwilliamson #nba #pelicans #pels #neworleans #la #bigeasy #lonzoball #zo pic.twitter.com/MPeWzGScc9

— Jiggy (@Jiggy_app) November 3, 2019

Zao

Zao, a Chinese application, is a face-swapping tool that employs deepfake technology, enabling users to seamlessly integrate their faces into scenes from various movies and TV shows. Among its notable features, Zao offers a diverse library of clips from popular media for users to select from.

The app walks users through a simple process of capturing a series of photos to accurately map their faces onto the chosen clip. Originally designed to spark playful interactions on social platforms, Zao has brought the capability of crafting deepfake videos to the fingertips of millions.

We tried Zao, the trending Deepfake app in China!
This is @akshay_gangwar as @LeoDiCaprio 'and @rihanna!#Zao #ZaoApp #DeepFake #China pic.twitter.com/m0C5Xjjpex

— Beebom (@beebomco) September 7, 2019

The post 10 DeepFake AI Tools to Help You Create Content within Minutes appeared first on Analytics India Magazine.

Ready or Not, AI Agents Are Coming


Recently, Bland AI put up a billboard advertisement promoting its AI agent, which can handle all sorts of phone calls for businesses in any voice, and it's creating a buzz.

This is one cool billboard advertising promoting AI agent. Calling that number will connect u to a live conversation with an AI bot powered by @usebland. pic.twitter.com/A9tCLFU5dP

— Alvin Foo (@alvinfoo) April 25, 2024

However, this and others like Devin and Devika are just a glimpse of what’s to come.

“A lot of people talk about the ‘ChatGPT moment’, where you’re like ‘Wow, never seen anything like this’. I think, if you have not used planning algorithms, many people will have a kind of a ‘Wow, I couldn’t imagine an AI agent doing this’ moment,” said Andrew Ng, founder of DeepLearning.AI and AI Fund, at Sequoia Capital’s AI Ascent.

Further, he said he ran live demos in which something failed, and the AI agent rerouted around the failures. “I’ve actually had quite a few of those ‘Wow, you can’t believe my AI system just did that autonomously’ moments,” he added.

Ng said that today, most of us use LLMs with a non-agentic workflow, where we type a prompt and the LLM generates an answer. However, an agentic workflow is more iterative, where you can have the LLM write an essay outline, do the research, write the first draft, analyse what parts need revision, and then revise the draft.

“In such a workflow you may have the LLM do some thinking, revise the article, then do some more thinking, and iterate this through a number of times. And, what not many people appreciate is that this delivers remarkably better results,” he said.
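The iterative workflow Ng describes can be sketched as a simple loop: outline, draft, critique, revise, repeat. The sketch below is a generic illustration of that pattern, not Ng's actual code; `call_llm` is a hypothetical stand-in, stubbed here, that you would replace with a real chat-completion client.

```python
# A minimal sketch of an agentic outline -> draft -> critique -> revise loop.
# call_llm is a placeholder stub; swap in a real chat-completion API call.

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a tagged echo of the prompt."""
    return f"[model output for: {prompt.splitlines()[0]}]"

def agentic_draft(topic: str, rounds: int = 2) -> str:
    """Draft an essay, then critique and revise it `rounds` times."""
    outline = call_llm(f"Write an essay outline about {topic}.")
    draft = call_llm(f"Write a first draft from this outline:\n{outline}")
    for _ in range(rounds):
        # The model critiques its own draft, then revises using that critique.
        critique = call_llm(f"List the weaknesses of this draft:\n{draft}")
        draft = call_llm(f"Revise the draft to address:\n{critique}\n{draft}")
    return draft
```

Each extra round trades more API calls (and cost) for a chance at a better draft, which is exactly the quality-versus-compute trade-off Ng highlights.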

He also shared results from his team’s analysis using the ‘HumanEval’ coding benchmark. When they used GPT-3.5 with zero-shot prompting, it got 48% right. GPT-4 delivered a much better performance, with 67%.

However, GPT-3.5, wrapped in an agentic workflow, performed better than the zero-shot GPT-4.

“This has significant consequences for how we all approach building applications. If you’re looking forward to running GPT-5/ Claude 4/ Gemini 2.0 (zero-shot) on your application, you might already be able to get similar performance with agentic reasoning on an earlier model,” he emphasised.

Everybody Seems to be Bullish on ‘AI Agents’

Recently, venture capitalist Vinod Khosla envisioned a future in which internet access will be mostly through agents. He predicted that most consumer access to the internet will be via agents acting for consumers, doing tasks and fending off marketers and bots. “Tens of billions of agents on the internet will be normal,” he wrote.

Meta CEO Mark Zuckerberg also spoke about how, if a business is trying to interact with a customer, the interaction is no longer limited to “the person sends you a message and you just reply”. It’s a multi-step interaction where the business would want to think through how it can accomplish the person’s goals. So, the job of the AI is no longer just to respond to the question.

“If someone else solves reasoning and we’re sitting here with a basic chatbot, then our product is lame,” he said, envisioning a kind of Meta AI general assistant product that will shift from something that feels more like a chatbot to things where you’re giving it more complicated tasks and then it goes away and does them.

“I think a big part of what we’re going to do is interacting with other agents for other people whether it’s for businesses or creators. A big part of my theory on this is that there’s not going to be just one singular AI that you interact with because every business is going to want an AI that represents their interests,” he added.

He further took the example of the 200 million creators on Meta platforms who want to engage with their community but are limited by the hours in the day. He explains that if you could create something where a creator can basically own the AI, train it the way they want, and use it to engage their community, then that’s going to be super powerful.

Agents, agents everywhere!

Recently, Google introduced Vertex AI Agent Builder, a platform that enables the easy creation of autonomous agents with little to no coding required.

NVIDIA has also teamed up with the AI healthcare company Hippocratic AI to develop GenAI agents that not only outperform human nurses on video calls but also cost a lot less per hour.

Tech giants like Microsoft, OpenAI, and Google also seem to be racing to build more agent capabilities to position their technologies as essential tools.

Source: LinkedIn

Despite scepticism, Devin, for instance, resolved nearly 14 out of every 100 issues. This advancement marked notable progress in AI’s capability to autonomously understand and address software development issues, enhancing its potential to support developers. Devin can even do real jobs on Upwork!

It recently raised $175 million at a $2 billion valuation from Founders Fund.

Then there is Devika, an Indian open source AI software engineer capable of understanding human instructions, breaking them down into tasks, conducting research, and autonomously writing code to achieve set objectives.

All these developments further strengthen the belief held by many that the future of AI is going to be agentic. “Honestly, the path to AGI feels like a journey rather than a destination, but I think agent workflows could help us take a small step forward on this very long journey,” said Andrew Ng.

Are You Ready?

“I think it’s very likely, but perhaps not in a nice way,” Kailash Nadh, CTO at Zerodha, told AIM when asked for his opinion on agents running the internet.

Further, he said that agents that take instructions from us and execute them on the internet already exist, and with LLMs it is only getting worse.

“I’ve seen bots… I’ve seen agents being used by people to order pizza. So, are we headed towards a future where this will be the case? I think absolutely!” said Nadh, adding that it is only a matter of time. “Is it going to be a nice one? I don’t know, I don’t think so. There are people who ruin everything.”

Even Ng said that the agents today don’t work fully reliably and that “they are kind of finicky”.

However, since we can iterate agents and they can recover from their failures, it makes them a lot more powerful. With continuously evolving agents, better agentic models, advanced tools and frameworks, the finicky aspects of agents might start to get reduced, painting an optimistic picture for the future.

Recent advancements like Anon building the identity backbone for the AI-powered Internet to enable billions of AI agents to securely access user accounts and transform our digital lives, also seem promising.

AI Agents are coming fast, but a major missing infrastructure piece has been “how do you get agents to do things on your behalf on the internet securely” until today… https://t.co/oQI4fNguiL

— Amjad Masad (@amasad) April 24, 2024

All of this could enable developers to build next-generation consumer and enterprise agent workflows, transforming how people interact with AI. Also, when done with security and proper frameworks in mind, the future of AI agents could be truly exciting!

The post Ready or Not, AI Agents Are Coming appeared first on Analytics India Magazine.

Commvault’s Arlie Teams Up with Microsoft to Elevate Cyber Resilience Globally

In November last year, Commvault, one of the founding members of the Microsoft Security Copilot Partner Ecosystem, introduced Arlie and Threat Scan Predict to simplify software management, enhance threat detection, and bolster cyber resilience for organisations worldwide.

“Earlier clients who used our software had to go through painful steps, unlike any other software. But now, if they want to check which backup jobs failed last week, they can simply ask Arlie,” Balaji Rao, area VP for India & SAARC at Commvault, explained.

Arlie, short for ‘Autonomous Resilience,’ is a generative AI tool built using Azure OpenAI. It is designed to simplify and automate data management tasks. However, its capabilities extend beyond simple queries, as it can also generate code for integrations with security tools.

“Do you want to integrate with an SIEM or a SOAR solution for cyber resiliency? For example, suppose you want to integrate with Palo Alto. In that case, you can ask Arlie, ‘Can you give me the code for Palo Alto integration?’ and Arlie will generate that integration code for you,” explained Rao.

Further, he said that you don’t need to know the software anymore — “All you need to do is ask Arlie, and it will walk you through each one of those steps.”

Commvault’s AI applications extend to threat detection and prediction, with Threat Scan Predict identifying potential risks. “We actually use it in protection, gathering signals in advance and looking for anomalies, and we use a lot of AI in what we call Threat Scan Predict,” Rao explained.

Additionally, Arlie can automatically perform various data management tasks, such as data backups, restores, and policy configurations, based on user requests or predefined schedules. This automation reduces manual effort and ensures consistent data protection.

Commvault also uses its long-standing partnership with Microsoft, which began in 1998, to deliver advanced data governance, risk mitigation, and recovery capabilities to its customers, including Adidas, Sony, and AstraZeneca.

Commvault Cloud+Arlie = Ultimate Cybersecurity

Commvault Cloud, the company’s SaaS-based data management solution, enables organisations to protect, manage, and recover their data across on-premises, cloud, and hybrid environments.

The integration of Commvault Arlie with Commvault Cloud brings several benefits to users. Arlie’s natural language interface and intelligent recommendations make it easier for users to manage and protect their data in the cloud.

It also provides cloud-specific insights and recommendations to optimise data protection strategies for cloud environments.

With Arlie’s integration, users can automate data protection tasks for various cloud workloads, such as virtual machines, databases, and applications, reducing manual effort and ensuring consistent protection.

Additionally, Arlie can help users optimise their cloud storage costs by recommending data tiering, archiving, and deletion strategies based on data usage patterns and retention policies.

Arlie’s integration with Commvault Cloud supports multiple cloud platforms, such as AWS, Microsoft Azure, and Google Cloud Platform. It enables users to manage and protect their data across these platforms from a single interface, simplifying multi-cloud data management.

Customer Story: Persistent Systems

AIM caught up with Persistent Systems’ CIO, Debashis Singh, who shared insights into their transition to Commvault Cloud with all its capabilities and how Microsoft’s assistance has influenced their overall strategy and the challenges they faced before the implementation.

Before adopting Commvault Cloud, Persistent Systems grappled with the complexities of managing data across hybrid environments. “We have a solution on-prem, where our cloud workload was every single day. But the ease of use was not that simple,” said Singh.

To ensure the solution’s effectiveness, Persistent Systems went beyond mere documentation.

“As part of the exercise, we even tried simulating the environment internally, creating a completely isolated environment, putting the backup solution on top of it, injecting malware into it, and finding out whether it can detect it and give us a clean count,” Singh revealed.

With Commvault, Persistent Systems also set ambitious targets for their recovery time objective (RTO) and recovery point objective (RPO). “We set a target of 24 hours as the RTO and one hour as the RPO. RPO is essentially a recovery point of safety, which refers to the point from where you can recover,” shared Singh.

During a simulated disaster recovery drill, Persistent Systems achieved an impressive RTO of just 8 hours and 20 minutes, surpassing their 24-hour target.

With generative AI tools such as Arlie, Commvault is upping the ante, and its commitment to democratising AI usage in cybersecurity is evident in all of its offerings. “Our platform is not the future; it is today. It is there today and in different aspects,” said Rao.

The post Commvault’s Arlie Teams Up with Microsoft to Elevate Cyber Resilience Globally appeared first on Analytics India Magazine.

OpenAI’s GPT-4 Can Autonomously Exploit 87% of One-Day Vulnerabilities, Study Finds

The GPT-4 large language model from OpenAI can exploit real-world vulnerabilities without human intervention, a new study by University of Illinois Urbana-Champaign researchers has found. Other models tested, including GPT-3.5, open-source LLMs and vulnerability scanners, are not able to do this.

A large language model agent — an advanced system based on an LLM that can take actions via tools, reason, self-reflect and more — running on GPT-4 successfully exploited 87% of “one-day” vulnerabilities when provided with their National Institute of Standards and Technology description. One-day vulnerabilities are those that have been publicly disclosed but have yet to be patched, so they are still open to exploitation.

“As LLMs have become increasingly powerful, so have the capabilities of LLM agents,” the researchers wrote in the arXiv preprint. They also speculated that the comparative failure of the other models is because they are “much worse at tool use” than GPT-4.

The findings show that GPT-4 has an “emergent capability” of autonomously detecting and exploiting one-day vulnerabilities that scanners might overlook.

Daniel Kang, assistant professor at UIUC and study author, hopes that the results of his research will be used in the defensive setting; however, he is aware that the capability could present an emerging mode of attack for cybercriminals.

He told TechRepublic in an email, “I would suspect that this would lower the barriers to exploiting one-day vulnerabilities when LLM costs go down. Previously, this was a manual process. If LLMs become cheap enough, this process will likely become more automated.”

How successful is GPT-4 at autonomously detecting and exploiting vulnerabilities?

GPT-4 can autonomously exploit one-day vulnerabilities

The GPT-4 agent was able to autonomously exploit web and non-web one-day vulnerabilities, even those that were published on the Common Vulnerabilities and Exposures database after the model’s knowledge cutoff date of November 26, 2023, demonstrating its impressive capabilities.

“In our previous experiments, we found that GPT-4 is excellent at planning and following a plan, so we were not surprised,” Kang told TechRepublic.


Kang’s GPT-4 agent did have access to the internet and, therefore, any publicly available information about how it could be exploited. However, he explained that, without advanced AI, the information would not be enough to direct an agent through a successful exploitation.

“We use ‘autonomous’ in the sense that GPT-4 is capable of making a plan to exploit a vulnerability,” he told TechRepublic. “Many real-world vulnerabilities, such as ACIDRain — which caused over $50 million in real-world losses — have information online. Yet exploiting them is non-trivial and, for a human, requires some knowledge of computer science.”

Out of the 15 one-day vulnerabilities the GPT-4 agent was presented with, only two could not be exploited: Iris XSS and Hertzbeat RCE. The authors speculated that this was because the Iris web app is particularly difficult to navigate and the description of Hertzbeat RCE is in Chinese, which could be harder to interpret when the prompt is in English.

GPT-4 cannot autonomously exploit zero-day vulnerabilities

While the GPT-4 agent had a phenomenal success rate of 87% with access to the vulnerability descriptions, the figure dropped down to just 7% when it did not, showing it is not currently capable of exploiting ‘zero-day’ vulnerabilities. The researchers wrote that this result demonstrates how the LLM is “much more capable of exploiting vulnerabilities than finding vulnerabilities.”

It’s cheaper to use GPT-4 to exploit vulnerabilities than a human hacker

The researchers determined the average cost of a successful GPT-4 exploitation to be $8.80 per vulnerability, while employing a human penetration tester would be about $25 per vulnerability if it took them half an hour.

While the LLM agent is already 2.8 times cheaper than human labour, the researchers expect the associated running costs of GPT-4 to drop further, as GPT-3.5 has become over three times cheaper in just a year. “LLM agents are also trivially scalable, in contrast to human labour,” the researchers wrote.

GPT-4 takes many actions to autonomously exploit a vulnerability

Other findings included that a significant number of the vulnerabilities took many actions to exploit, some up to 100. Surprisingly, the average number of actions taken when the agent had access to the descriptions and when it didn’t only differed marginally, and GPT-4 actually took fewer steps in the latter zero-day setting.

Kang speculated to TechRepublic, “I think without the CVE description, GPT-4 gives up more easily since it doesn’t know which path to take.”

How were the vulnerability exploitation capabilities of LLMs tested?

The researchers first collected a benchmark dataset of 15 real-world, one-day vulnerabilities in software from the CVE database and academic papers. These reproducible, open-source vulnerabilities consisted of website vulnerabilities, container vulnerabilities and vulnerable Python packages, and over half were categorised as either “high” or “critical” severity.

List of the 15 vulnerabilities provided to the LLM agent and their descriptions. Image: Fang R et al.

Next, they developed an LLM agent based on the ReAct automation framework, meaning it could reason over its next action, construct an action command, execute it with the appropriate tool and repeat in an interactive loop. The developers only needed to write 91 lines of code to create their agent, showing how simple it is to implement.
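The loop the researchers describe — reason over the next action, execute it with a tool, observe the result, repeat — can itself be sketched in a few lines. The sketch below is a generic illustration of a ReAct-style loop, not the authors' actual 91-line agent; the `llm` and `tools` arguments and the "Action:/Final:" reply format are assumptions made for the example.

```python
def react_agent(task, llm, tools, max_steps=10):
    """Reason -> act -> observe loop that stops when the model says 'Final:'."""
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        reply = llm("\n".join(history))       # model reasons and names an action
        if reply.startswith("Final:"):        # model declares it is finished
            return reply[len("Final:"):].strip()
        # Assumed action format: "Action: <tool>: <argument>"
        _, _, rest = reply.partition("Action:")
        tool, _, arg = rest.strip().partition(":")
        observation = tools[tool.strip()](arg.strip())
        history += [reply, f"Observation: {observation}"]
    return None                               # step budget exhausted
```

In the paper's setting, `tools` would map to a terminal, a browser, a code interpreter, and file editing; here it is simply whatever callables you pass in.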

System diagram of the LLM agent. Image: Fang R et al.

The base language model could be alternated between GPT-4 and these other LLMs, most of them open source:

  • GPT-3.5.
  • OpenHermes-2.5-Mistral-7B.
  • Llama-2 Chat (70B).
  • LLaMA-2 Chat (13B).
  • LLaMA-2 Chat (7B).
  • Mixtral-8x7B Instruct.
  • Mistral (7B) Instruct v0.2.
  • Nous Hermes-2 Yi 34B.
  • OpenChat 3.5.

The agent was equipped with the tools necessary to autonomously exploit vulnerabilities in target systems, like web browsing elements, a terminal, web search results, file creation and editing capabilities and a code interpreter. It could also access the descriptions of vulnerabilities from the CVE database to emulate the one-day setting.

Then, the researchers provided each agent with a detailed prompt that encouraged it to be creative, persistent and explore different approaches to exploiting the 15 vulnerabilities. This prompt consisted of 1,056 “tokens,” or individual units of text like words and punctuation marks.

The performance of each agent was measured based on whether it successfully exploited the vulnerabilities, the complexity of the vulnerability and the dollar cost of the endeavour, based on the number of tokens inputted and outputted and OpenAI API costs.
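That cost metric is essentially token counts multiplied by per-token API prices. The sketch below illustrates the arithmetic only; the default prices are placeholder assumptions for the example, not the rates used in the study.

```python
def run_cost(input_tokens: int, output_tokens: int,
             price_in_per_1k: float = 0.01,    # assumed $/1K input tokens
             price_out_per_1k: float = 0.03):  # assumed $/1K output tokens
    """Dollar cost of one agent run from token counts and per-1K-token prices."""
    return (input_tokens / 1000) * price_in_per_1k \
         + (output_tokens / 1000) * price_out_per_1k

# e.g. run_cost(100_000, 20_000) -> 1.0 + 0.6 = 1.6 dollars
```

Averaging this figure over successful runs is how a per-vulnerability number like the study's $8.80 would be derived.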

SEE: OpenAI’s GPT Store is Now Open for Chatbot Builders

The experiment was also repeated without providing the agent with descriptions of the vulnerabilities, emulating a more difficult zero-day setting. In this instance, the agent had to both discover a vulnerability and then successfully exploit it.

Alongside the agent, the same vulnerabilities were provided to the vulnerability scanners ZAP and Metasploit, both commonly used by penetration testers. The researchers wanted to compare the scanners’ effectiveness in identifying and exploiting vulnerabilities with that of the LLM agents.

Ultimately, it was found that only an LLM agent based on GPT-4 could find and exploit one-day vulnerabilities — i.e., when it had access to their CVE descriptions. All other LLMs and the two scanners had a 0% success rate and therefore were not tested with zero-day vulnerabilities.

Why did the researchers test the vulnerability exploitation capabilities of LLMs?

This study was conducted to address the gap in knowledge regarding the ability of LLMs to successfully exploit one-day vulnerabilities in computer systems without human intervention.

When vulnerabilities are disclosed in the CVE database, the entry does not always describe how it can be exploited; therefore, threat actors or penetration testers looking to exploit them must work it out themselves. The researchers sought to determine the feasibility of automating this process with existing LLMs.

SEE: Learn how to Use AI for Your Business

The Illinois team has previously demonstrated the autonomous hacking capabilities of LLMs through “capture the flag” exercises, but not in real-world deployments. Other work has mostly focused on AI in the context of “human-uplift” in cybersecurity, for example, where hackers are assisted by a GenAI-powered chatbot.

Kang told TechRepublic, “Our lab is focused on the academic question of what are the capabilities of frontier AI methods, including agents. We have focused on cybersecurity due to its importance recently.”

OpenAI has been approached for comment.

90% of Indian Internet Users are already using AI, says Report

With India’s internet user base at over 900 million, 9 out of 10 internet users are adopting AI, according to a study by marketing data and analytics firm Kantar. The current AI user base, at 724 million, is expected to grow 6% annually, driven by advances in internet access, smartphones, connectivity, and cloud infrastructure.

While AI usage is projected to be higher among young adults aged 19-24, individuals aged 45 and above are also embracing these technologies, with approximately 81% of this demographic utilizing AI-driven services.

Kantar anticipates a surge in AI integration among digital commerce and entertainment apps to elevate customer experiences and align with evolving trends. However, adoption lags in segments such as BFSI, job search, and short video apps, as revealed by the survey.

According to the ‘IBM Global AI Adoption Index 2023’, around 6 in 10 IT professionals at enterprises reported that their company is actively implementing generative AI, making India among the countries with the most extensive AI adoption.

In the education sector, CBSE has incorporated AI as a skill module for Grades 6–8 and as a skill subject for Grades 9–12. Currently, several organisations are developing virtual assistants for educators, parents, and students to enhance learning experiences.

Prime Minister Narendra Modi has also acknowledged the significance of AI. In an interview with ANI news agency, PM Modi said, “Used AI to prepare India’s plan for the next 25 years.”

Even in smaller cities, the impact of AI is palpable, prompting government efforts to address related challenges. For instance, in Sharawasti, Uttar Pradesh, which ranks as India’s fourth most underdeveloped district with a voter count of 2.1 million, initiatives have been launched to combat deepfake technology.

The post 90% of Indian Internet Users are already using AI, says Report appeared first on Analytics India Magazine.

GitHub Secures Millions of Developers Through Two-Factor Authentication

GitHub has released the early results of its two-factor authentication (2FA) requirements for code contributors on GitHub.com–which was first announced in 2022 and rolled out across 2023–in efforts to secure developer accounts and prevent the next supply chain attack.

GitHub found that there has been a dramatic increase in 2FA adoption on GitHub.com, focused on the users who have the most critical impact on the software supply chain. Moreover, users are adopting more secure means of 2FA, including passkeys.

They also recorded a net reduction in 2FA-related support ticket volume, credited to heavy up-front user research and design, as well as Support process improvements.

Additionally, other organisations like RubyGems, PyPI, and AWS joined in raising the bar for the entire software supply chain, proving that large increases in 2FA adoption aren’t an insurmountable challenge.

“In May 2022, we introduced an initiative to raise the bar for supply chain security by addressing the first link in that chain–the security of developers. Because strong multi-factor authentication remains one of the best defenses against account takeover and subsequent supply chain compromise, we set an ambitious goal to require users who contribute code on GitHub.com to enable one or more forms of 2FA by the end of 2023,” Mike Hanley, Chief Security Officer at GitHub, said.

“What followed was a year’s worth of investments in research and design around the implementation of these requirements, to optimize for a seamless experience for developers, followed by a gradual rollout to ensure successful user onboarding as we continued to scale our requirements. While our efforts to ensure developers can be as secure as possible on GitHub.com don’t end here, today we’re sharing the results of the first phase of our 2FA enrollment, with a call for more organizations to implement similar requirements across their own platforms,” Hanley added.

The post GitHub Secures Millions of Developers Through Two-Factor Authentication appeared first on Analytics India Magazine.

Smartphones Will Soon be Dead

Smartphones are indispensable and considered the ultimate solution for our daily needs. Now, with AI lurking around the corner ready to take over things, a new question arises: Are AI-powered devices poised to take their place? While it may seem a tad far-fetched right now, the reality is inching closer.

Despite our reliance on smartphones for WhatsApp, Instagram, and even ordering meals from platforms like Zomato or Swiggy, recent developments suggest a shifting landscape. Case in point: Apple’s acquisition of Paris-based Datakalab, signaling a push towards bolstering on-device AI capabilities for the future iPhone 16.

In another update, Microsoft announced Phi-3, a compact yet powerful language model boasting an impressive 3.8 billion parameters. What sets Phi-3-Mini apart is its ability to operate directly on your smartphone, marking a significant leap forward in accessibility and convenience.

In 2022, Nokia CEO Pekka Lundmark predicted that smartphones may not stay relevant in 2030. During the World Economic Forum, Lundmark said, “By then, definitely the smartphone as we know it today will no longer be the most common interface. Many of these things will be built directly into our bodies.”

How soon until we shift to AI wearables?

From the introduction of the Nokia 5120 in 1998 to the foldable touchscreen phones of 2023, mobile phones have certainly come a long way. Currently, AI and machine learning operate behind the scenes on our phones, powering various functions, including enhancing photos, translating languages, identifying music, and aiding gaming.

Now smartphone makers see a chance to pivot towards AI. In recent announcements, Qualcomm and MediaTek have introduced smartphone chipsets that provide the processing power required for AI applications.

In 2023, Samsung unveiled its groundbreaking generative AI model, Samsung Gauss, marking a significant leap forward in artificial intelligence technology. Google introduced its AI-powered Google Workspace suite, showcasing the increasing integration of AI into everyday tools and services.

Not stopping there, Google teased a new generative search experience, hinting at its potential inclusion in future flagship Pixel phones. Meanwhile, Apple has been diligently incorporating AI and generative AI capabilities into its products.

Meanwhile, Pete Lau, founder of OnePlus, has shared that he is optimistic about AI and believes, “AI is a vessel and smartphones fit the bill perfectly.”

Source: LinkedIn

As technology evolves, innovations like the Limitless AI Pendant and WIZPR Ring are launched to harness the power of AI and interact with large language models.

Next in line is Elon Musk’s Neuralink, which is working on electronic devices that can be implanted in the brain and used to communicate with machines and other individuals. The company hopes to make the implant as common as a smartphone, opening up a world of possibilities for both medical and technological advancements.

The Future is AI

Given the poor reception of the Humane Ai Pin in reviews, the prospect of users trusting and embracing such devices seems slim. However, this is just the beginning.

Adopting a nascent technology product requires time to gain acceptance. The debut of the MacBook in 2006 was marred by technical issues like unexpected shutdowns and palm-rest discolouration, drawing a considerable backlash.

Similarly, the iPad was initially derided as nothing more than a ‘big iPod touch’, casting doubt on its purpose and potential success. Even the revolutionary iPhone faced skepticism upon its unveiling, with critics highlighting the absence of features like a physical keyboard and replaceable batteries, which were standard in contemporary phones.

As technology continues to advance, it’s becoming clear that simply upgrading our smartphones may no longer suffice. Embracing AI gadgets alongside our trusty phones and a pair of earbuds seems like a convenient approach.

The post Smartphones Will Soon be Dead appeared first on Analytics India Magazine.

Healthtech AI startup Endimension Technology raises INR 6 Crore in Pre-Series A Round

Endimension funding

Healthtech AI startup Endimension Technology has recently raised INR 6 Crore in a Pre-Series A round led by Inflection Point Ventures.

The funds will be used to fuel AI research and development, team expansion, and software enhancement. These strategic investments aim to bolster Endimension’s market position, accelerate growth, and establish Endimension as an industry leader.

Other investors in this round include Sucseed Indovation, SINE IIT Bombay and individual angel investors. Endimension Technology, incubated at IIT Bombay, is driven by the vision to harness AI technology in radiology, ensuring early and precise diagnosis for patients globally.

“The Indian radio-diagnosis market, growing at a CAGR of 15%+ over the last decade, has got a lot of focus on equipment & infrastructure. The under-stated need is that of qualified professionals, i.e. radiologists, to manage this burgeoning demand. There has been growth across tier 1, 2 & 3 for equipment, but the availability and prohibitive costs of trained radiologists exacerbate the problem of demand outstripping supply situation.

“Endimension focuses on leveraging AI to facilitate faster assessment and diagnosis, employing generative AI to streamline report generation and reduce the time required by radiologists. IPV is confident that this investment will contribute towards the betterment of the industry,” Ivy Chin, Partner, Inflection Point Ventures said.

Endimension’s platform has processed over 1 million scans to date and is currently deployed in 400 hospitals and diagnostic centres across multiple regions, enhancing accessibility.

Endimension has added several feathers to its cap over the years. The startup was one of the 20 startups selected by Google for Startup Accelerator Class 8 and one of the top 10 startups at the WhatsApp Incubator Programme.

The post Healthtech AI startup Endimension Technology raises INR 6 Crore in Pre-Series A Round appeared first on Analytics India Magazine.

Zerodha CTO Says He Stopped Googling Technical Stuff Over the Past Year

CTO Kailash Nadh Zerodha

In a rather casual but strong revelation, Kailash Nadh, CTO of India’s largest broking company, Zerodha, told AIM that he has stopped Googling for technical topics in the past year or so.

“I’ve stopped Googling technical stuff over the past year. I interact with a chatbot and save minutes to hours every single day,” he said.

Google Search, No More?

When asked if there would be a future where we move away from Google Search, the answer was affirmative. “I think so,” he said.

Nadh explained some of the challenges involved in searching for technical solutions on Google Search. A user must go through several pages and read hundreds of comments to find the information that can potentially help solve the problem.

“They’re [ChatGPT and other chatbots] so powerful that you dump the stack, and they’ll immediately point out, saying you should explore this. That’s me saving 45 minutes on one problem, so I don’t even Google technical queries anymore,” said Nadh.

He believes that by quickly providing relevant insights, these AI tools can save considerable time, reducing the need for extensive web searches. This raises the prospect of Google Search slowly becoming irrelevant.

The Dead Internet Theory

“Is search changing? I think absolutely, for multiple reasons,” said Nadh.

Citing the ‘Dead Internet Theory’, a concept that circulated online in the late 2010s holding that bots and AI-generated content drive a majority of internet traffic and content, Nadh believes that with the advent of LLMs, this phenomenon is going to increase exponentially.

He anticipates that LLMs will populate tons of articles, blogs, and websites, further amplifying this trend.

Speaking about how bots have filled the internet, Nadh believes that AI agents will also come to dominate it, something he says is already happening. At present, social media is filled with bot-generated content and activity.

“On social media, especially after the explosion of LLMs, there are lots of bots interacting with bots. You see this on Reddit and Twitter threads, and it produces no value. It’s just a lot of noise.”

Bots and agents are already in the picture, receiving instructions from people and executing tasks on the internet. Siri, which Nadh squarely calls ‘quite dumb’, exemplifies this. With malicious actors able to unleash bots to disrupt discourse, a drastic decline in the quality of search can be expected.

Source: X

Nadh foresees a lot of garbage driving search traffic. “I think traditional search will kind of die because of the huge quality issue,” he said. Traditionally, search was a ‘discovery problem’, with users going through every web page to find the relevant information, which was a ‘non-ideal’ solution to a knowledge problem.

“If you have a certain question and you get an immediate answer, that is the easiest way to gain knowledge,” said Nadh, who believes that search will move towards a direct question-and-answer format with website citations.

That is precisely what Perplexity AI is doing.

The New Era of Search Begins

Perplexity, the AI answer engine, is all the buzz. Co-founded by Andy Konwinski, Denis Yarats, Johnny Ho, and Aravind Srinivas, the company recently raised $63 million at a $1 billion valuation. Since its launch, the platform has received over 1 billion queries (in 15 months) and serves 169 million monthly queries.

The AI-powered answer engine, which has over 10 million monthly users, embodies exactly what Nadh described – solutions with citations. Perplexity is also emerging as a probable Google alternative.

With AI now powering search platforms, including Google with its search generative experience, the question arises of whether the AI hype, and the mad rush to implement it, is justified.

AI Hype?

Nadh highlighted the trend of implementing AI technologies, sometimes without thinking through the problem. “AI should not really be looked at as a solution chasing a problem. You can’t predict all possible scenarios, so one has to be very careful [before adopting],” he said.

When asked if AI was a bubble and the current scenario simply a part of the hype cycle, Nadh said, “Are we in the middle of an AI hype cycle? Absolutely. Like with any other technology, when there’s a breakthrough, there’s a lot of hype. But are we in the middle of an AI bubble where it’ll go bust, and there’ll be no AI? No.”

The post Zerodha CTO Says He Stopped Googling Technical Stuff Over the Past Year appeared first on Analytics India Magazine.