Wolfram’s New Update Gives Developers Genius-level Generative AI

After being one of the first plugins to ever come to ChatGPT, Wolfram has now gone all in on the LLM wave. In the latest version 13.3 update, the Wolfram language has added support for LLM technology, as well as integrating an AI model into the Wolfram Cloud.

This update comes after Wolfram has slowly been building the tooling for making the language LLM-ready. The update puts LLMs directly into the language, with the introduction of an LLM subsystem for the language. It also builds on the LLM functions technology added in May, which ‘packages’ AI powers into a callable function, with the new subsystem now being user-addressable.

With these new updates, developers have a whole new way of interfacing with their data. This approach combines Stephen Wolfram’s idea of natural language programming along with the Wolfram language’s symbolic programming, creating a force to be reckoned with. What’s more, with the Wolfram language API, this can be plugged in to larger systems, delivering amazing power through a natural language interface.

LLM-powered computation

The Wolfram language’s strengths lie in its symbolic programming capabilities. Using code, the language can perform many complex mathematical functions, such as algebra, matrix manipulations, and differential equations. To bolster the language’s logical problem-solving capabilities, Stephen Wolfram, the creator of the language, decided to add in LLM capabilities to the bot.

Wolfram’s tryst with LLMs began with the creation of the Wolfram ChatGPT plugin, which empowered the chatbot with the symbolic programming capabilities of the language. Stephen then embarked upon a journey to take symbolic language to new heights, saying to AIM,

“We’re about to use the symbolic language [Wolfram] to provide a way of using LLM as a component in a larger software stack. It’s something that you can do in a very beautiful way.”

The 13.3 update seems to be a step towards this direction, bringing LLMs directly into Wolfram language through a LLM subsystem. After the chatbot plugin and LLM function calling, Stephen introduced Chat Notebooks. This allows users to easily interact with LLMs in the Wolfram Notebook through a text box, allowing them to generate powerful code in the Wolfram language. Stephen called this interface an example of “using an LLM as a linguistic interface with common sense”, as it allows the users to interact with the language without needing to know the syntax.

Stephen thinks that this is a natural step to the existing capabilities of the program, stating,

“When you’re doing something you’re familiar with, it’ll almost always be faster and better to think directly in Wolfram Language, and just enter the computational language code you want. But if you’re exploring something new, or just getting started on something, the LLM is likely to be a really valuable way to “get you to first code”,”

The in-built LLM has self-correcting capabilities as well, allowing it to fix its own errors before running it and outputting code snippets. This means that it can also debug existing code in Wolfram language, and can even look at details like stack trace and error documentation to fix broken code.

It also comes with a few different personas, each geared towards a different purpose. The code assistant persona writes codes and explains it, the code writer persona only generates the code, while others like Wolfie and Birdnado respond to the user “with an attitude”.

To extend the functionality of the LLM even further, Wolfram also launched the prompt repository, which can be used to get additional function prompts and modifier prompts. While he stated that Wolfram language will continue to become “increasingly integrated as a tool into LLMs’, the prompt repository currently showcases the capabilities of the language’s new AI tools.

A developer’s playground

The prompt repository is a community-contributed prompts platform that will allow the LLM in Wolfram language to adopt many different personas, each with discrete use-cases. In addition to this, the community has also contributed various functions that can extend the functionality of the language beyond its already-comprehensive list of inbuilt functions.

The prompts in the repository are split into 3 main categories, namely personas, functions, and modifiers. Personas define the style of interaction with the user, functions generate output from existing text, and modifiers apply an effect to the output coming from the LLMs. Each of these functions can also be called in code, allowing developers to integrate them easily into existing code or even extend the functionality of a program.

The repository serves a very important purpose is speeding up workflows by allowing developers to avoid LLM wrangling. In Wolfram’s words,

“Sometimes just using the [prompt’s] description will indeed work fine. But often it won’t. Sometimes that’s because one needs to clarify further what one wants. And sometimes there’s just a certain amount of “LLM wrangling” to be done. And this all adds up to the need to do at least some “prompt engineering” on almost any prompt.”

The repository effectively removes the need for this prompt engineering by creating a database of accessible and callable prompts, which are then converted into Wolfram Language code by the LLM. These prompts use the language to carry out a function on the given text, extending its functionality beyond mathematical problems.

Some of the sample prompts that stood out to us were the LongerRephrase prompt, which expands a given statement, Scientific Jargonize, which makes a plain text sentence sound like it has come out of a research paper, and TweetConvert, which converts data into a tweet. There are also a host of other prompts which can convert sentences into product pitches, dejargonize complex pieces of text, check grammar stringently, and even generate puns.

Using the ever-growing repository of prompts, devs and citizen developers alike can use the Wolfram language to easily modify large pieces of text. What’s more, since each prompt can be called as a function, they can be added on to any program to give it LLM super powers. Once the added LLM functionality goes live in the coming days, the Wolfram language will become an indispensable tool in the belt of AI enthusiasts and developers alike.

The post Wolfram’s New Update Gives Developers Genius-level Generative AI appeared first on Analytics India Magazine.

How Tredence Recommends Using AI in UI/UX Design

How Tredence Recommends Using AI in UI/UX Design

In today’s rapidly evolving technological landscape, AI has emerged as a crucial component in various industries, including retail, healthcare, and beyond. AI is no longer just a transformative force; it is now an integral part of business operations and even the creative sector. The future holds a tight integration between AI and design, where they work hand in hand to enhance user experiences across different platforms. AI in design is still in its early stages, but the concept of AI designers is gaining traction as a specialised area of expertise for designers.

To understand more about how AI can be integrated into design, specifically for UI/UX designers, AIM caught up with Pavithra Narayanan, Manager – User Interaction and User Experience team (Design team).

AIM: How do AI and design work together? How has the integration of AI technologies impacted the overall user experience in UI/UX design?

Pavithra: I believe AI has become an integral part of everything we do across different business areas. It’s no longer just about engineering, but also covering a wide range of aspects in various areas of business, including the creative sector. AI is now an integral part of our lives, and design alike. AI and machine learning and their intersection with design are still in their infancy stages. However, I do see AI design as an upcoming area of expertise for designers.

AI and design can integrate in multiple ways. For instance, designers can create designs for AI systems, algorithms, or applications powered by AI. Alternatively, AI can assist in designing tools themselves. Many design tools, like Figma, have recently embraced AI too, and the possibilities they provide are endless. Moreover, there is an aspect of incorporating human concepts into AI algorithms, making AI more relatable and human-like in terms of user experience. Therefore, there are different facets through which integration can occur, and it has the potential to evolve in the future.

AIM: In what ways can AI be leveraged to personalise the user experience in UI/UX design?

Pavithra: Personalisation occurs in multiple ways. Data is abundant in the current world, especially when dealing with various types of end users (business as end-user vs. direct consumers). Customer data is tracked by BtoB and BtoC companies through click stream, transaction information, performance review, surveys, third party data and so on. Data is available in plenty more than ever, and we now have access to tap into customer performance, behaviour and what not. AI helps us better understand this data, enabling us to develop improved user personas. This, in turn, guides our design process and user journey. Thus, AI empowers us to create better user-centred designs.

Another way AI benefits us is in our primary design tools like Figma and Miro. Any AI upgrade not only deals with data but also helps eliminate redundancy in the design process. For example, consider the user form process I mentioned earlier. AI can provide auto-suggestions, reducing the effort required by users to complete the form. In our case, the applications and tools we build are directly powered by AI. As designers, we invest considerable time in understanding the entire AI system, including how it communicates with users. For instance, the system may trigger an alert when a user generates a specific output or offer suggestions to improve the user’s report generation. Additionally, AI assists us in strategy management, providing marketers with qualified options. It covers various aspects of our design process for user-oriented apps powered by AI models in the backend.

AIM: What are some of the key challenges or considerations when implementing AI in UI/UX design?

Pavithra: There is innovation in the way data is inputted into systems. It’s no longer limited to just text or clicks — it can also involve voice commands, gestures, and more. When a system has access to a large amount of personal data, it raises privacy concerns. It is one of the fastest-growing threats, not only in terms of the increasing amount of data but also in how AI consumes and handles this data. As designers, our approach should involve understanding different data privacy implementation systems and collaborating with engineers to ensure they are integrated into the design prototypes from the beginning. Data privacy should never be overlooked. Engineers have a responsibility to protect the data being consumed, while designers should ensure it is presented in a user-friendly manner that aligns with the user’s flow and journey.

AIM: How do you ensure that AI algorithms are transparent, unbiased, and align with ethical standards while delivering an enhanced user experience?

Pavithra: AI and ML models are built on user inputs and are not capable of learning on their own like human brains. They require training data sets. Engineers need to put considerable effort into user research and diversity of use cases when creating these models to ensure there is no bias in them. This diversity should also be reflected in the design. If we design solely for a specific user group or enable an AI model to respond only in a certain way without thorough and diverse research, we risk building biassed systems. An example of this bias can be seen in facial recognition systems, as highlighted in the documentary ‘Coded Bias’. Many systems exhibit racial bias due to inadequate templates or a lack of proper monitoring and structure. Hence, we need to be cautious and ensure comprehensive research and ethical considerations when integrating design and AI.

Regarding the ethical aspect, let’s consider a scenario where I, as a designer, use tools like Figma or an AI-powered engine to generate wireframes. The question of ownership, creative authority, and credit becomes open-ended. While I encourage designers to embrace AI, they should be aware of its generic nature. It is essential to maintain the human aspect and empathy that are key in the design world. In the future, applications and content should not all look alike, so designers need to cross-reference AI with other sources and not take it for granted as their sole information source.

AIM: What are the many ways AI and Design intersect and what is in it for designers?

Let’s explore three ways in which design and AI interact. The first is “design with AI”, where AI assists in the design process. For example, there are AI-driven tools that automate wireframes based on input, allowing designers to kickstart their work.

Other tools help overcome creative blocks by generating a starting point for designers. These tools cater to user interaction, visual design, and graphics. Additionally, in fields like strategy, UX research, and user journey management, there is a growing trend of integrating design concepts into AI systems. This ensures that AI understands the desired responses through built-in user research, as seen in ChatGPT.

Lastly, designing for AI as a product involves the role of user interaction designers, who focus on how users interact with AI and the various touchpoints for communication. In summary, AI is becoming an integral part of the design world, and designers must adapt while maintaining their unique perspectives and expertise.

The post How Tredence Recommends Using AI in UI/UX Design appeared first on Analytics India Magazine.

Microsoft Executive Gurdeep Pall to Retire 

Gurdeep Pall, a long-serving corporate vice president at Microsoft, announced to his colleagues that he intends to retire from the company in September, The Information reported on Monday citing people with knowledge of the matter.

The Information report added that a Microsoft spokesperson confirmed his coming departure via email, describing it as “a long-planned retirement.”

Throughout his career, Pall played a significant role in the development and promotion of important Microsoft products such as Windows, Skype, and Bing,

Pall joined Microsoft in 1990 and worked in the company for 33 years . He has worked well together with previous CEOs Gates, Ballmer, and now Nadella. He often attends important product launches by Microsoft.

He began his journey at Microsoft as a software engineer and contributed to the integration of TCP/IP, the fundamental software protocol of the internet, into Windows. After 2005, he was mainly responsible for the product and R&D department, development and management of Skype, Teams, Microsoft voice, mobile search, and Bing Maps.

Pall has been working on converting Microsoft’s research into products for their “industrial metaverse” division since 2016. This division aims to help customers build software that automates control systems in areas such as power plants, factory robotics, and transportation networks.

He was also a part of Microsoft’s project called Airsim, a drone simulation software product launched in July 2022, Information report mentioned. However, Microsoft cut costs through layoffs on this project and its current development is not known.

The departure of Gurdeep Pall shows how Microsoft has focused its attention towards generative AI under the leadership of Kevin Scott, chief technology officer and executive vice president of AI.

The post Microsoft Executive Gurdeep Pall to Retire appeared first on Analytics India Magazine.

15 Popular Plugins Available to ChatGPT Plus Users 

Remember when we first laid our hands on a smartphones? We had an app for everything — image editing, music, banking, shopping, travel, you name it. Something similar is whirling these days, albeit the conversation now revolves around ChatGPT plugins to perform different tasks. Currently, there are over 200 plugins available on the ChatGPT Plus that aid you in your daily tasks from varying fields. Be it educational, research, cooking or entertainment, worry not plugins got your back! Notably, these are available only for ChatGPT Plus users.

Ask Your PDF

If you are a researcher, this plugin is definitely for you. As a researcher one needs to go through various papers mostly in the form of PDF and it becomes a tedious task to remember what information can be found where. With Ask your PDF, one can just directly ask questions to get answers from the information written in the PDF.

🚨Update!🚨
We're excited to announce that the AskYourPdf plugin for ChatGPT is now live on ChatGPT! You can chat your documents directly from ChatGPT's UI.🤩
See the AskYourPdf plugin for ChatGPT in action below pic.twitter.com/8A9HtjdXhO

— Ask PDF (@AskYourPdf) May 12, 2023

Link Reader

Link Reader can read the content and pull out information from all kinds of web links, including webpages, PDFs, images, and more. Suppose you are working on a project and need a certain answer from a website, just type in your query and the Link Reader plugin will provide you with the required answer.

There’s An AI For That

There are several AI tools that we may have not even heard of but may need. To solve this problem ‘There’s An AI for that’ offers a vast database of tools for various purposes, including image editing, PDF conversion, and more, catering to both personal and professional needs.

Prompt Perfect

Prompt Perfect is a top-notch ChatGPT extension designed to assist users in crafting flawless prompts for AI chatbots. If you struggle with formulating prompts, Prompt Perfect can be of tremendous help. Simply enter a prompt for any query or topic you wish to ask the AI bot.

1/ Introducing the Prompt Perfect ChatGPT Plugin, your reliable partner for crafting top-notch prompts for any occasion: pic.twitter.com/KldKLDz5eC

— Prompt Perfect 🔌 (@Prompt_Perfect) June 2, 2023

World News

It is no hidden fact that ChatGPT only has information and facts till 2021. This problem has been solved using the World News Plugin. It provides a well-organised list of news articles in multiple languages and source links. Although it may not have conventional assistance, World News is a valuable tool for those who want to stay informed about current events worldwide.

Stories

Stories, a remarkable ChatGPT plugin, allows users to engage in the art of storytelling. With this creative tool, all you need to do is provide a prompt, and Stories will craft a captivating story for you to enjoy.

The plugin offers an exceptional feature where it presents the story and accompanying images in a charming vintage-style book format. The images are also generated by AI and thoughtfully placed alongside the text. Navigating through the story is as easy as clicking the edges of the pages, allowing for a delightful reading experience.

Show me Diagrams

Show me Diagrams provide users with a simple and efficient way to generate visual diagrams accompanied by text explanations. Whenever there’s a requirement to create a diagram swiftly, this plugin is the go-to solution. It is very useful in creating mind maps.

What to Watch

This useful plugin assists in discovering the streaming platform for a specific show. While there are other websites that offer similar services, having ChatGPT perform this function is convenient. Although it may not be flawless, it generally performs well, with occasional errors. Nonetheless, it saves the hassle of searching through multiple services to find your favorite obscure show.

MixerBox OnePlayer

MixerBox is a comprehensive music compilation tool that collects songs and curates playlists based on user preferences. Users simply need to specify their musical taste, and MixerBox takes care of the rest. Additionally, MixerBox provides direct links to the songs, allowing users to listen to them conveniently. The greatest advantage is that the songs are free, as they are linked to YouTube videos.

VoxScript

VoxScript can grab a video transcript and lets you quickly pullout useful information. It’s a great plugin for someone who understands better from written words rather than videos.

Chat with PDF

This is pretty much similar to ‘Ask your PDF’. It comprehends textbooks, handouts, and presentations effortlessly so you don’t spend hours flipping through research papers and academic articles.

ScholarAI

ScholarAI grants users access to a database of scholarly articles and academic research. Users can easily search for and retrieve peer-reviewed studies, enabling them to obtain trustworthy information to support their scientific research, technical projects, and funding proposals efficiently.

Check out tech How’s video about #ScholarAI and the #ScholarAI plugin for #ChatGPT
Find Research Papers with ChatGPT – SchloarAI Plugin Guide https://t.co/I1YYXOztEk via @YouTube

— ScholarAI and the ScholarAI plugin for ChatGPT (@ScholarAI_) July 2, 2023

VideoInsights

VideoInsights lets you leverage the power of AI to analyse your video content and gain valuable insights. Not only that it also automatically generates summaries, action items and key highlights from your video transcripts in seconds regardless of the language.

KeyMate.AI Search

KeyMate.AI Search improves your knowledge base by searching the internet for the latest information on diverse subjects. With this plugin, you can access up-to-date data and expand your understanding on various topics.

Webpilot

WebPilot enables users to input a website address or addresses and make requests to interact with, extract specific information from, or modify the content of those websites.

Also, Read 12 Plugins that Make GPT-4 Complete

The post 15 Popular Plugins Available to ChatGPT Plus Users appeared first on Analytics India Magazine.

OpenAI disables ‘Browse’ Feature after releasing it on ChatGPT App

In less than two weeks of releasing the beta mode of browse feature on ChatGPT app, the company announced that it has temporarily disabled the feature. The browse feature was showing content to users that the company had not intended to. OpenAI explained that if a user asks for a full text of a URL, ChatGPT was showing that information, and that they have retracted the feature till they fix the problem.

User Privacy Issues Prevails

The identified problem goes against the content owner’s privacy, a problem that has got OpenAI at crossroads with a number of companies. Data privacy issues have been the main concern for companies banning the usage of ChatGPT. Countries such as Japan have also warned OpenAI against using user’s sensitive data.

Interestingly, in a blog released by OpenAI highlighting the learnings from Sam Altman and his team’s global tour, the company said that they will explore ways for creators, publishers and content producers to benefit from their technology. The bug highlighted in the browser feature goes against it. The company also reiterated that they do not train on customer data and that ChatGPT users can opt-out from using their data for training.

OpenAI has rolled out the ‘browse with Bing’ feature in beta phase for ChatGPT Plus subscribers, where a user gets access to real-time data. Since the rollout, users have been giving feedback on the feature. The browse feature was available as part of ChatGPT Plus which comes at a subscription of $20 a month.

Banking on Feedback

OpenAI has been heavily banking on user feedback for rolling out new features and to further set their future plans for resolving their cybersecurity and AI safety issues. It was no different for the browse feature as well which was released in beta mode in a bid to learn from people’s feedback. By giving power to users, OpenAI is showcasing itself as a brand that believes in a democratised model, where everyone has a say in their product improvement. However, in the process, the actual issue of data security is still not being addressed.

The post OpenAI disables ‘Browse’ Feature after releasing it on ChatGPT App appeared first on Analytics India Magazine.

US Semiconductor Company Microchip Announces $300 mn Investment Plan in India

US-based semiconductor company Microchip Technology has announced its plan to invest USD 300 million to expand its operations in India. The company has also inaugurated a new research and development (R&D) facility in Hyderabad.

Located in the One Golden Mile Office Tower, the 168,000-square-ft centre has a capacity for 1,000 employees.

“Microchip is making a significant strategic commitment to growing our operations in India, whose meteoric growth has established it as one of the top sources of business and technical resources in our sector.

“Our investments here will enable us to both benefit from and contribute to the country’s increasingly important role in the global semiconductor industry,” said Ganesh Moorthy, President and CEO of Microchip, said.

Recently, Micron Technology, an American chipmaker, has announced plans to invest up to USD825 million in the establishment of a new chip assembly and test facility in Gujarat, India. This marks their first manufacturing plant in the country and is expected to generate around 5,000 direct employment opportunities in the region.

Reportedly, the Government of India (GoI) has recently approved five applicants in the semiconductor space under Design Linked Incentive (DLI) Scheme.

Through the scheme, GoI aims to foster innovation and promote the development of integrated circuits, chipsets, systems on chips, systems, and IP cores through this initiative.

The post US Semiconductor Company Microchip Announces $300 mn Investment Plan in India appeared first on Analytics India Magazine.

Prompt Engineers Then, AI Engineers Now

Prompt Engineers Then, AI Engineers Now

The boom of generative AI has presented people with a lot of new opportunities. The number of generative AI roles have almost tripled in the last one year. For developers, it has created a completely new layer of abstraction and profession – a complete dimensional shift. What people often call prompt engineers have become something more than just giving prompts to ChatGPT or similar softwares.

It is actually becoming a full time job. In a recent trending blog, it is argued that instead of calling the new developers as “prompt engineers”, we should call them “AI engineers”. This is not to be confused with machine learning engineers though, the people who work on system-heavy workloads to build models from scratch. More than these ML or LLM engineers, we will definitely see a rise of more AI engineers.

Andrej Karpathy also expresses similar views on the topic. He said that “prompt engineer” as a term for this role could be misleading and even cringe as it requires a lot more than just prompting.

I think this is mostly right.
– LLMs created a whole new layer of abstraction and profession.
– I've so far called this role "Prompt Engineer" but agree it is misleading. It's not just prompting alone, there's a lot of glue code/infra around it. Maybe "AI Engineer" is ~usable,… https://t.co/sv3Qijyv6f

— Andrej Karpathy (@karpathy) June 30, 2023

Two sides of coin

AI engineers as a role are expected to develop. It can be for the better or for the worse. A lot of experts in the field argue that as LLMs get better, there would be no need for doing even the things that you need to do now after generating code from these tools. The hallucinations are just temporary.

On the other side, as the field evolves, the need for more expertise in the field would also be required. This is the case with every field. There would be AI engineers, and then there would be full stack prompt engineers. Just like there are sub-disciplines like devops engineer, analytics engineer, data engineer, etc, there would be roles that would deal with different aspects of the tasks within AI.

Earlier, traditional ML used to involve finding data, building the models, and then launching the product. Now, with AI engineers, who hop onto this industry after trying out the product, the journey is reversed. After getting an API like ChatGPT, they build a product such as Jasper or any other GPT-based tool. Then later, hop onto fine-tuning with data and scaling the model. The later two steps require a lot more technical knowledge than just prompt engineering. Therefore, this is the rise of AI engineers.

Moving over, with the availability of APIs for building tools, AI engineers could shift towards more application based roles, or towards more research based skills by diving back to foundational knowledge. This brings us to the shift of MLops to LLMops, if we can call it.

Considering the nature of the job, these AI engineers would be needed to understand every single step of an LLM app stack. Requiring them to expand their knowledge beyond just prompting. So while it is becoming easier to become a developer, after getting into it, the challenge for being the best is huge and vast.

Is this Software 3.0?

Oftentimes, companies need people that know how to use and ship a product or tool more than the knowledge of how to build it. For example, a user on HackerNews argued, “if we have React engineers, why couldn’t we have AI engineers?” It is true, it is just a requirement of different skills.

LLM/AI engineers also need a systematic workflow for experimentation and observability, particularly regarding prompt creation and system development. Challenges such as hallucination and reasoning gaps are already prevalent in LLMs, leading to a general agreement that these agents are unreliable.

Both traditional models and LLM systems require adjustments for optimal performance. Traditional models can be improved through training and hyperparameter selection, while LLM systems can be fine-tuned by modifying the prompt and chaining LLMs together. Now, this can be done not just by LLM researchers, but also prompt engineers.

In the past, software development relied on manually coding instructions in programming languages. ML and neural networks, which allowed systems to learn patterns and make predictions based on training data. This was considered Software 2.0, where models were trained to perform specific tasks.

Microsoft said everyone’s a developer. Karpathy said that the hottest programming language is our very own English language. Software 1.0 was classical coding, which transitioned to machine learning and neural networks, Software 2.0. Next transition of prompt-based developers should be Software 3.0. But Karpathy said that we are still in the second level of abstraction as, “we are still prompting on human-designed code, only in English,” which is still a Software 2.0 artefact, not Software 3.0.

The post Prompt Engineers Then, AI Engineers Now appeared first on Analytics India Magazine.

Next Frontier of Cybersecurity: Guarding against Generative AI

ChatGPT, introduced in November last year, experienced a rapid surge in adoption amid users and enterprises alike. However, no one can deny the floodgate of risks that this new technology has unleashed on everyone. A recent report by cybersecurity firm Group-IB revealed that over 100,000 ChatGPT accounts have been compromised and their data is being illicitly traded on the dark web, with India alone accounting for 12,632 stolen credentials.

( Source: Group-IB)

Similarly, in March, a bug in an open-source library gave some ChatGPT users the ability to see titles from another active user’s chat history. Companies such as Google, Samsung and Apple have also forbidden their employees from using any generative AI-powered bots.

Venkatesh Sundar, founder and president, Americas, at Indusface, believes there is rapid adoption of generative AI without too much consideration of risk. “In most cases, the adopted LLM models are built by someone else, so they carry the security risk of a compromised LLM affecting all apps using the LLM model. This is very similar to the risk of using open source / third-party code and plug-ins,” he told AIM.

Generative AI API risk

API risks aren’t not new. As anticipated by Gartner in their 2019 report, API hacks have indeed become a prevalent form of cyberattack. According to a survey conducted by Salt Security, a leading API security company, among 200 enterprise security officials, a staggering 91% of companies reported experiencing API-related security issues in the past year.

Now, as more and more enterprises are looking to leverage LLM APIs, the biggest concern remains the leaking or exposure of sensitive data from these tools. While certain applications of natural language interfaces, such as search functionality, may pose lower security risks, the use of LLMs for tasks like analysis, reporting, and rule generation expands the potential scope of attacks and vulnerabilities.

There is a risk of data breaches or unauthorised access to this information, potentially resulting in privacy violations and data leaks. “While there’s so much attention being placed on the use and availability of generative AI, ransomware groups continue to wreak havoc and find success at breaching organisations around the world,” Satnam Narang, senior staff research engineer at Tenable, told AIM.

Adding further to the discussion, Sundar stresses that organisations should anticipate attacks or attempts to corrupt the data set. Hackers may attempt to inject malicious or biassed data into the dataset, which can influence the LLM’s responses and outputs. “Important business decisions may rely on this data, without good understanding of how the AI model works or the validity of data points used in the process,” Kiran Vangaveti, founder & CEO, BluSapphire Cyber Systems told AIM.

Earlier this year, researchers from Saarland University have presented a paper on prompt engineering attacks in chatbots. They discovered a method to inject prompts indirectly, using ‘application-integrated LLMs’ like Bing Chat and GitHub Copilot, expanding the attack surface for hackers. Injected prompts can collect user information and enable social engineering attacks.

Is Building GenAI capabilities in-house the key?

OpenAI and other organisations recognise the importance of addressing API risks and have implemented precautionary measures. OpenAI, for instance, has undergone third-party security audits, maintains SOC 2 Type 2 compliance, and conducts annual penetration testing to identify and address potential security vulnerabilities before they can be exploited by malicious individuals.

However, Sundar believes security is complex and securing natural language queries is way more complex. “While controls like access are being built, many attacks leverage different prompts or series of prompts to leak information. For example, when ChatGPT blocked the prompt to generate malware, people have found a way around it and now are asking ChatGPT to give a script for penetration testing,” he said.

Vangaveti concurs that understanding security frameworks required to protect against malicious use or protect data is a complex task. However, as this area matures, more frameworks or best practices will evolve. Furthermore, enterprises today are also exploring many open-source LLMs as alternatives. Open source LLMs can potentially be more vulnerable to cyber attacks due to their availability and open nature. Since the source code and architecture are openly accessible, it becomes easier for attackers to identify and exploit vulnerabilities.

Nonetheless, to tackle this, Narang believes the solution could be building generative AI capabilities in-house. “As long as there is a reliance upon outside tooling to provide the generative AI functionality, there will always be some inherent risk involved in entrusting data to a third-party, unless there are plans to develop and maintain one in-house.” Interestingly, Samsung announced that they will be building their own generative AI capabilities after sensitive data were accidentally shared with ChatGPT by some of its employees.

ChatGPT is writing malware

ChatGPT’s coding capabilities, which include writing code and fixing bugs, have unfortunately been exploited by malicious actors to develop malware. “Attackers are able to profile targets relatively quickly and create attack code on the fly with little expertise. They are able to build custom malware rapidly,” Vangaveti said.

Some experts believe ChatGPT and DALL-E pose even a greater risk to non-API users. “Information stealing malware, such as Raccoon, Vidar and Redline are capable of stealing sensitive information stored in web browsers, which includes user credentials (username/email and password), session cookies and browser history,” Narang said.

Besides, researchers from threat detection company HYAS have demonstrated a proof of concept (PoC) called BlackMamba, and demonstrated how LLM APIs can be used in malware in order to evade detection. “To demonstrate what AI-based malware is capable of, we have built a simple PoC exploiting a large language model to synthesise polymorphic keylogger functionality on-the-fly, dynamically modifying the benign code at runtime — all without any command-and-control infrastructure to deliver or verify the malicious keylogger functionality,” they said in a blog post.

Hence, without doubt, the widespread adoption of generative AI has raised concerns about security risks, including API vulnerabilities and data exposure, so organisations must implement robust security measures and remain vigilant to mitigate these risks effectively.

The post Next Frontier of Cybersecurity: Guarding against Generative AI appeared first on Analytics India Magazine.

Boost your Bottom Line by Learning How to Use ChatGPT for Just $20

A person using ChatGPT.
Image: StackCommerce

AI has the potential to expand what a business can offer in terms of content generation and customer engagement. Business owners could use ChatGPT for market research, analytics, and even for human resources (HR) requests. Freelance professionals may be able to expand what they can offer to their clients by using ChatGPT for drafting, lead generation, and research. Learning to use ChatGPT to produce content that holds up to professional standards may take time, but you can begin studying in a 25-hour Introduction to ChatGPT, currently on sale for $19.99.

ChatGPT is a versatile tool a professional can apply to content creation, sales and marketing, and more. Use it to produce ideas driving your own projects or to revise work you’ve already crafted. One of the primary goals of this course is to help users streamline and improve their operations using AI input. For professionals, that may mean using ChatGPT to quickly complete the labor-intensive stages of a project, so they can focus on the fine details. Freelance writers can learn to use ChatGPT for outlines or headline generation. Data analysts can feed information into the AI and ask for conclusions, patterns, and trends.

This course is offered by International Open Academy, a leader in the online learning marketplace. It gives professionals access to diverse, easy-to-use courses that you can complete on your own timeline.

International Open Academy courses are accredited by the International Council for Online Educational Standards. A certificate of completion can be earned by completing all course materials and passing the included course examinations.

Learn to use ChatGPT to expand what you can do as a professional, a business owner, or a freelancer. Get an Introduction to ChatGPT while it’s available for $19.99, a discount of 75% off the $80 retail price.

Prices and availability are subject to change.

Person using a laptop computer.

Subscribe to the Daily Tech Insider Newsletter

Stay up to date on the latest in technology with Daily Tech Insider. We bring you news on industry-leading companies, products, and people, as well as highlighted articles, downloads, and top resources. You’ll receive primers on hot tech topics that will help you stay ahead of the game.

Delivered Weekdays Sign up today

Google Claims to be 47-Years Ahead of Others in Quantum

Google Quantum Computer

After going silent on quantum computing for a while, Google announced that it has successfully created a quantum computer capable of performing calculations that would require the top-performing supercomputers 47 years to complete. This significant advancement aims to provide undeniable evidence that experimental quantum machines can surpass conventional competitors.

According to a research paper – Beyond-Classical Computing Using Superconducting Quantum Processors by Google’s scientists, the company’s latest technology surpasses the capabilities of current classical supercomputers.

Google’s research paper showcases the ability of larger quantum computers to handle “noise,” which refers to interference that poses a threat to the delicate states in which qubits operate, enabling them to continue performing calculations.

The researchers state, “We can confidently assert that our demonstration falls within the realm of quantum computation beyond classical capabilities.” Critics argue that the rival machines were evaluated based on a randomisation task that favors quantum computers and lacks practical significance outside of academic research.

Steve Brierley, CEO of Riverlane, a quantum company based in Cambridge, views this development as a significant milestone, stating, “The debates surrounding quantum supremacy, whether we had reached it or could reach it, are now settled.”

In April, Google published another research paper titled “Phase Transition in Random Circuit Sampling” presenting a more advanced quantum device, intended to settle the ongoing debate.

In comparison to the 2019 machine that consisted of 53 qubits, the fundamental units of quantum computers, the latest-generation device incorporates 70 qubits.

The exponential enhancement in computational power of a quantum computer is directly proportional to the increase in qubits. Consequently, the new machine exhibits a staggering improvement, being approximately 241 million times more powerful than its 2019 predecessor.

In a declaration made four years ago, Google asserted that it had become the first company to attain “quantum supremacy,” a pivotal milestone indicating that quantum computers had surpassed their conventional counterparts. During that period, Google faced opposition from competitors who contended that the disparity between their machine and traditional supercomputers was being exaggerated.

Recently, Microsoft also announced its roadmap for building a quantum supercomputer which it would be able to build within a decade. The company also released Azure Quantum Elements, a copilot for building quantum computers.

Moreover, IBM, the brand of quantum computing also sped up its process of building quantum computers, to build beyond classical computing.

The post Google Claims to be 47-Years Ahead of Others in Quantum appeared first on Analytics India Magazine.