How to use the new Bing (and how it’s different from ChatGPT)

Microsoft's Bing has struggled for years to gain a foothold among search engines. But the company's recent deep dive into artificial intelligence (AI) is breathing new life into search, with its AI-powered Bing Chat feature.

Often referred to as Bing ChatGPT, the new Bing is very different from its more popular competitor. It uses GPT-4 and performs more like an AI-powered search engine in a conversational format, but that's just the beginning.

Also: How to use Bing Image Creator (and why it's better than DALL-E 2)

Unlike ChatGPT, the new Bing has internet access, giving it the ability to provide more up-to-date responses. ChatGPT, by contrast, is trained only on data up to 2021, so it cannot answer questions about current events.

How to use the new Bing

What you need: Getting started with the new Bing requires you to use Microsoft Edge and to log in to a Microsoft account. When you access Microsoft Bing, you can choose whether to use the search or chat formats.

On the Bing website, there will be a "Chat" option on the top left.

Any time you perform a Bing search, you can switch to Chat by clicking on it below the search bar.

You can see Bing offers a lot of different options to optimize the conversation.

It's time to ask Bing anything you'd like to know.

FAQ

How is Bing Chat different from a search engine?

The biggest difference between Bing Chat (and other AI chatbots) and a traditional search engine is the conversational tone in the rendering of search results. Intelligently formatting search results into an answer to a specific question makes it easier for anyone looking for information on the internet.

Also: I tried Bing's AI chatbot, and it solved my biggest problems with ChatGPT

Beyond search engine capabilities, Bing Chat is a fully fledged AI chatbot that can do many of the things other chatbots, such as ChatGPT, can. You can now use Microsoft Bing to generate text, such as an essay or a poem, to write code, or to ask complex questions and hold a conversation with follow-up questions.

Does Bing use ChatGPT?

Bing does not use ChatGPT, but it does use GPT-4 in the formulation of its answers, with the exception of the visual input feature. The new Bing is the only way to use GPT-4 for free at this time and Microsoft claims the integration with the latest language model makes Bing more powerful and accurate than ChatGPT.

Also: Want to experience GPT-4? Just use Bing Chat

Many users prefer one or the other. In my experience, Bing Chat can sometimes be a bit slow to respond and can miss some prompts, but that's typically remedied by asking a follow-up question such as, "Did you search for that?" However, I also believe the new version of Bing offers users more control over the experience and a more intuitive UI.

OpenAI's GPT-3.5 language model powers ChatGPT. When GPT-4 becomes widely available through an updated version of ChatGPT, it will be through OpenAI's subscription service, ChatGPT Plus, which costs $20 a month.

Is there a Bing image creator?

Microsoft recently announced Bing Image Creator, which uses DALL-E, an AI image generator from OpenAI. The tool is built into Bing within Microsoft Edge, so users can give Bing a prompt to create images within an existing chat rather than going to a separate website.

Also: The 5 best AI art generators of 2023

Does Bing Chat give wrong answers?

Just like ChatGPT and other large language models, the new AI-powered Bing Chat is prone to giving out misinformation. Most of the output the new Bing offers as answers is drawn from online sources, and we know we can't believe everything we read on the internet. Similarly, when you use the new Bing in chat mode, it can generate nonsensical answers that are unrelated to the original question.

Is Bing Chat free?

Bing Chat is not only free, it's also the best way to preview GPT-4 for free right now. You can use the new Bing to ask questions, get help with a problem, or seek inspiration, but you are limited to 15 questions per interaction and 150 conversations a day.

Also: The best AI chatbots: ChatGPT and other fun alternatives to try

Are my conversations with Bing Chat saved?

Microsoft defaults to clearing your conversations when you click the New Topic button, so your conversations aren't saved beyond the duration of each chat. However, search history is saved in your account, depending on your settings.

Is there a waitlist for the new Bing?

At the time of this publication, if Edge users log in to their account, they should be able to access Microsoft's new AI-powered Bing right away.

Can you use the new Bing on mobile?

If you have the Edge browser on your mobile device, you can use the new AI-powered Bing search in chat mode, much like you would on your computer. There's also the option of skipping the Edge browser and downloading the Microsoft Bing app from your device's app store. This app provides a straight line to the Bing AI chatbot, with the benefit of not having to access a website when you want to use it.

Also: Your Microsoft apps are getting an AI revamp. Here's what we know

Both the Microsoft Bing app and the Edge browser support voice dictation on mobile, so you can ask your questions without even having to type them in.

These two countries are teaming up to develop AI for cybersecurity

Singapore and France have announced plans to set up a research facility to jointly develop artificial intelligence (AI) capabilities that can be applied in cyberdefense.

The agreement between Singapore's Ministry of Defence (Mindef) and France's Ministry of the Armed Forces (MOAF) will see the two countries collaborate on research areas such as AI for geospatial analysis, natural language processing to extract information for analysis, and computer vision that monitors image and video feeds to identify potential threats across various environmental conditions.

Also: IoT devices can undermine security. Here are four ways to boost your defenses

The research facility is the first lab in Singapore that Mindef has established together with another country.

The facility will take a "global and multidisciplinary approach" to developing AI capabilities for "impactful" defense applications, according to Mindef.

Also: Singapore businesses stumbling over what security culture entails

The lab will be led by the French National Centre for Scientific Research and Temasek Laboratories @ National University of Singapore, both of which will gather research expertise from the wider community in each country. These sources of knowledge will include institutes of higher learning and research facilities.

Also: Singapore wants citizens to take accountability for personal cyber hygiene

Mindef's permanent secretary of defense, Chan Heng Kee, said his ministry was expanding its partnerships to tap new sources of innovation for defense applications. "Digital and dual-use technologies like AI are rapidly evolving today," Chan said. "By bringing together leading researchers in Singapore and France, we can accelerate our research to tackle shared security challenges."

The two countries inked an agreement to collaborate in March 2022 across various digital and green economy areas, including smart transport, financial services, and medical technology. The France-Singapore Digital and Green Partnership provides a "structured" platform for both nations to cooperate on projects across a range of digital and green issues.

Singapore passed amendments to two bills last August that paved the way for a new digital intelligence unit to be established as part of the country's armed forces. The government described the move as necessary amid intensifying "cyber intrusions" that threatened critical systems. The new digital and intelligence service unit is the fourth service under the Singapore Armed Forces and is responsible for combating online attacks.

I tested Google Bard’s new coding skills. It didn’t go well

Previously, we discussed how Bard could provide some coding help to programmers but couldn't write code itself. That's changed: on Friday, Google announced that Bard can code. But can it code well?

Let's find out.

Also: The best AI art generators to try

To come up with an answer, I'm going to run some of the coding tests I gave to ChatGPT. We'll see how Bard does, and compare the results.

Writing a simple WordPress plugin

My initial foray into ChatGPT coding was with a WordPress PHP plugin that provided some functionality my wife needed on her website. It was a simple request, merely asking for some submitted lines to be sorted and de-duped, but when ChatGPT wrote it, it gave my wife a tool that helped her save time on a repetitive task she does regularly for work.

Here's the prompt:

Write a PHP 8 compatible WordPress plugin that provides a text entry field where a list of lines can be pasted into it and a button, that when pressed, randomizes the lines in the list and presents the results in a second text entry field with no blank lines and makes sure no two identical entries are next to each other (unless there's no other option)…with the number of lines submitted and the number of lines in the result identical to each other. Under the first field, display text stating "Line to randomize: " with the number of nonempty lines in the source field. Under the second field, display text stating "Lines that have been randomized: " with the number of non-empty lines in the destination field.

And here's the generated code that Bard wrote:

At first glance, it looks good. In practice, not so much. The UI isn't formatted properly. Worse, the plugin doesn't work: clicking the Randomize button just clears both fields. That's it.

By contrast, ChatGPT built a fully functional plugin right out of the gate.
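
Setting the plugin chrome aside, the heart of that prompt is the constrained shuffle itself. Here's a minimal Python sketch of that logic (my own illustration, not the code either chatbot produced): randomize the non-empty lines so that no two identical entries end up adjacent, unless there's no other option.

```python
import random
from collections import Counter

def spaced_shuffle(lines):
    """Randomize lines, dropping blanks, so that no two identical
    entries are adjacent whenever the counts make that possible."""
    counts = Counter(line for line in lines if line.strip())
    result = []
    while counts:
        # Candidates: any remaining value that differs from the last one placed.
        choices = [v for v in counts if not result or v != result[-1]]
        if not choices:
            choices = list(counts)  # no other option: allow the repeat
        # Prefer the most frequent remaining value to avoid dead ends.
        top = max(counts[v] for v in choices)
        pick = random.choice([v for v in choices if counts[v] == top])
        result.append(pick)
        counts[pick] -= 1
        if counts[pick] == 0:
            del counts[pick]
    return result
```

Note that the output line count equals the number of non-empty input lines, matching the prompt's requirement that the counts be identical.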

Fixing some code

Next, I tried a routine I'd previously fed into ChatGPT that came from my actual programming workflow. I was debugging some JavaScript code and found that I had an input validator that didn't handle decimal values. It would accept integers, but if someone tried to feed in dollars and cents, it failed.

Also: What is generative AI and why is it so popular? Here's everything to know

I fed Bard the same prompt I fed ChatGPT, and this is what resulted:

The code generated here was much longer than what came back from ChatGPT. That's because Bard didn't use any regular expressions in its response and gave back a very simple script of the sort you'd expect from a first-year programming student.

Also: How to use ChatGPT to write Excel formulas

Also, like something you'd expect from a first-year programming student, it was wrong. It properly validates the value to the left of the decimal point, but allows any value (including letters and symbols) to the right of it.
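
For reference, the regex-based approach ChatGPT took can be captured in a few lines. This Python sketch is my own, for illustration (the original fix was in JavaScript); it validates both sides of the decimal point:

```python
import re

# An integer part, and an optional decimal point followed by exactly
# one or two digits -- dollars-and-cents style input.
AMOUNT_RE = re.compile(r"\d+(\.\d{1,2})?")

def is_valid_amount(value: str) -> bool:
    """Return True for inputs like '42' or '19.99', False otherwise."""
    return bool(AMOUNT_RE.fullmatch(value.strip()))
```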

Finding a bug

During that same programming day, I encountered a PHP bug that was truly frustrating me. When you call a function, you often pass parameters. You need to write the function to be able to accept the number of parameters the originating call sends to it.

Also: How to use Midjourney to generate amazing images

As far as I could tell, my function was sending the right number of parameters, yet I kept getting an incorrect parameter count error. Here's the prompt:

When I fed the problem into ChatGPT, the AI correctly identified that I needed to change code in the hook (the interface between my function and the main application) to account for parameters. It was absolutely correct and saved me from tearing out my hair.

I passed Bard the same problem, and here's its answer:

Wrong, again. This time, Bard simply told me that the problem I was having was a mismatch of parameters, and I needed to pass the donation ID. That was a wrong answer. Once again, ChatGPT succeeded and Bard failed.

For the record, I looked at all three of Bard's drafts for this answer, and they were all wrong.

'Hello, world' test

Last week, I asked ChatGPT to generate code in 12 popular programming languages (including Python) to display "Hello, world" ten times, and to determine if it was morning, afternoon, or evening here in Oregon. ChatGPT succeeded for the mainstream languages.

Also: This new technology could blow away GPT-4 and everything like it

I fed the same prompt to Bard. Since it has been wrong on everything so far, I just picked one language to test, asking it to generate some Python code:

Although Bard's method for determining time was a bit more convoluted than it needed to be, the result was workable.
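
For context, a working answer to that prompt is only a few lines of Python. This is my own sketch of what a correct response looks like (the morning/afternoon/evening hour boundaries are assumptions):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib, Python 3.9+

def part_of_day(hour: int) -> str:
    """Classify an hour (0-23) as morning, afternoon, or evening."""
    if 5 <= hour < 12:
        return "morning"
    if 12 <= hour < 17:
        return "afternoon"
    return "evening"

# Print the greeting ten times, as the prompt asked.
for _ in range(10):
    print("Hello, world")

# Oregon observes US Pacific time.
now = datetime.now(ZoneInfo("America/Los_Angeles"))
print(f"It is {part_of_day(now.hour)} in Oregon.")
```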

So, can Bard code?

Bard can definitely write code. But in three of my four tests, the code it wrote didn't work properly. So I wouldn't necessarily say that Bard can code.

I'll tell you this. If I were hiring a programmer and gave them the above four assignments as a way of testing their programming skills, and they returned the same results as Bard, I wouldn't hire them.

Also: Generative AI is changing your tech career path. What to know

Right now, Bard can write code… like a first-year programming student who will probably get a C grade for the semester.

Given how good ChatGPT is, Google's answer is … embarrassing.

You can follow my day-to-day project updates on social media. Be sure to follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.

Generative AI means more productivity, and a likely retrenchment for software developers

The industry is abuzz about the power generative artificial intelligence platforms (such as ChatGPT) are bringing to the software development profession. "For many developers, generative AI will become the most valuable coding partner they will ever know," a recent report out of KPMG gushed.

What are the implications of this latest breakthrough in democratized AI? Will it begin to replace programming itself? Or will it finally help overworked and stressed IT professionals abstract the more mundane aspects of their jobs away and help them focus on bigger problems more relevant to their businesses?

Also: I tested Google Bard's new coding skills. It didn't go well

The current verdict from industry observers: So far, so good. But there are mixed reactions when it comes to whether it will help developers succeed or displace many of their roles. It could even serve to smooth the way to application modernization.

"Generative AI is dramatically transforming the way developers approach their roles, ushering in nothing short of a revolution in productivity," says Joe Welch, principal and technology leader of Launch Consulting, a division of The Planet Group. "By incorporating GitHub Copilot into VS Code for a recent project, we saw programmers reduce ten-minute tasks, such as writing a small function, down to the 30 seconds it took to simply write out a comment that explains the function. The actual code for the functions is written by Copilot, and often these functions will work out-of-the-box without any need for changes. It's hard to overstate what a game changer this is."
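
The comment-driven workflow Welch describes looks something like this in practice. The comment is all the developer types; the function body beneath it stands in for what an assistant like Copilot might propose (this is my own illustration, not actual Copilot output):

```python
# The developer writes only the comment below; the assistant
# proposes the implementation that follows.

# Return the median of a list of numbers, raising ValueError if the list is empty.
def median(values):
    if not values:
        raise ValueError("median of an empty list")
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2
```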

Also: These are the most in-demand tech roles in 2023

Generative AI tools such as ChatGPT "are built on large language models that can perform complex reasoning, deduction, and creativity," says Duncan Angove, CEO at Blue Yonder. "At its core, programming is also a language, which makes it a perfect task for generative AI to take on."

Generative AI models "trained against the vast expanse of open-source code available on the internet are already explaining poorly documented code, generating documentation for code, and even writing functions or relatively targeted pieces of code, all with minimal direction from humans," the KPMG report observes.

Also: Generative AI is changing your technology career path. What to know

For his part, Angove foresees actual programming roles diminishing, and more business-focused developers assembling the capabilities they require for particular applications. As the technology evolves, "I believe human programming skills will fade in necessity, and eventually be replaced with human-prompt engineers," he predicts. "Business analysts and product managers will be the new prompt engineers, translating business needs into prompts that generate the code we need. In the short term, we will also still need programmers to quality check the code, but over time that, too, will fade."

A potential showstopper for the actual generation of code — versus helping developers be more productive in doing so — are the legal implications of freely using code that is essentially designed elsewhere. "Intellectual property issues around generative AI remain unresolved," the KPMG authors caution. "These models are trained on open-source code, with many different types of licenses, and it remains to be seen what will happen if software they generate is deemed too similar to open-source code."

Also: Okay, so ChatGPT just debugged my code. For real

While it's highly debatable what kind of retrenchment there will be for developer roles, Launch's Welch foresees many positive impacts on developers' abilities to deliver results far more quickly and expediently for their ever-demanding businesses:

  • As a recommendation engine: An important benefit will be "integrating AI recommendations into the code development process or providing AI recommendations on code check-in," he states. "GitHub Copilot is a great example of this and provides recommendations and suggestions as developers type. Developers can also describe the code they are trying to write in a specially formatted comment, and Copilot will provide a sample implementation of that function."
  • Creating documentation for existing code to help new developers onboard. "We have used AI to provide top-level summaries of sub-systems and then more detailed descriptions of individual modules," says Welch. "After reading these overviews, the developers can then interact directly with the AI chatbot to ask detailed questions about the use-specific functions or sections of code. This can greatly reduce the overall time it takes to understand a new codebase."
  • Updating deprecated libraries. "One of our ongoing challenges is to keep third-party libraries updated to supported versions in accordance with the appropriate security guidelines," says Welch. "Often, it is unclear the level of risk in upgrading these libraries. Generative AI is great at predicting the overall effort, identifying specific code patterns which need to be modified, and helping to ensure that these libraries and frameworks are kept up to date with the least amount of effort and business risk possible."
  • Migrating applications from legacy languages. "AI can greatly ease the migration of a large codebase from an older language such as Cobol into a more modern language such as Java or C#," says Welch. "These migrations can often be challenging as they require developers who are fluent in both the older language and the newer language."

Also: This new technology could blow away GPT-4 and everything like it

Ultimately, opportunities for developers and other IT professionals will be abundant in "things that can't be easily copied or taught," Angove predicts. "Think about what LLMs can't do, and do that. The value of fresh thinking also becomes even more valuable. Develop skills that help build the tools — LLMs themselves — versus the now-free applications."

Microsoft launches bug bounty program for the new Bing

As artificial intelligence continues to trend thanks to its powers in content generation, software development, and replacing search engines, companies are cracking down on ways to patch vulnerabilities and make their AI tools safer. Following OpenAI's bug bounty program announcement a couple of weeks ago, Microsoft has expanded its own to include the new Bing Chat.

A bug bounty program is aimed at security researchers and ethical hackers, who are welcome to find vulnerabilities within computer programs or websites in exchange for a reward. When one of these security researchers finds a problem, also known as a 'bug', they must report it in order to receive their bounty.

Also: Generative AI means more productivity, and a likely retrenchment for software developers

Microsoft continuously updates its bug bounty programs in an effort to create mutually successful partnerships with security researchers to improve services like .NET, Edge, Azure, and Identity; in August 2022, the company announced it had awarded $13.7 million in bug bounties over the past fiscal year.

Anyone looking to cash in on the latest expansion of the bug bounty program for the new Bing must submit their detailed report through the general submission list and select Bing from the product list. Submissions must include the type of issue, the version that contains the bug, any updates they've installed, special configurations required to reproduce the bug, step-by-step instructions to reproduce the issue on a first install, a proof of concept, and the impact of the issue, including how an attacker could exploit it.

Also: This new technology could blow away GPT-4 and everything like it

Once a report is submitted, Microsoft engineers will review it, determine whether it qualifies, and, if applicable, pay a bounty.

The new Bing Chat is Microsoft's revamped, AI-powered search engine that runs a new chat format and image generator built on OpenAI's GPT-4.

How to use Canva to transform any photo into a professional headshot

Whether you need to set up your LinkedIn profile for the first time, add a photo to your Slack account, or just want to revamp your website, a professional headshot is always a good thing to have.

However, hiring a photographer to take a professional headshot can get expensive. Plus I, like many others, often dislike how I look in pictures when I try to look my best. Instead, I prefer the more natural and candid photos my friends and family take of me.

Also: How to use ChatGPT to write a cover letter (and why you should)

Canva's Magic Edit tool helps circumvent both issues by creating a perfect headshot from a picture you already have and like.

It doesn't matter what you are wearing in the photo or what the background is — with the help of AI, you can transform it all.

How to use Canva to create the perfect headshot

Before you can get started with your project, you will need to become a Canva Pro member — though Canva does offer a 30-day free trial.

The Magic Edit tool, as well as other AI-powered design tools, such as Magic Eraser and Magic Write, is limited to Pro users at the moment.

Also: How to use Magic Eraser on the Google Pixel

The membership costs $120 per year for an individual account and includes other perks, such as image editing, stock photos, premium templates, and more.

Once you sign up — or initiate the free trial — and are signed in, you can get started on creating your headshot.

Generative AI can make some workers a lot more productive, according to this study

When thinking about generative AI in the workforce, it's easy to think of the worst-case scenario — AI replacing human jobs. However, a study shows that generative AI tools can have a positive impact on workers, particularly those working in the customer service sector.

A working paper by the National Bureau of Economic Research found that access to generative AI can increase workers' productivity by 14% on average, as measured by the number of customer issues the agents were able to resolve per hour.

Also: Generative AI means more productivity, and a likely retrenchment for software developers

To conduct the study, the NBER used data from 5,000 customer support agents working for a Fortune 500 software firm. The agents used a tool built on a recent version of OpenAI's Generative Pre-trained Transformer (GPT) large language model (LLM) to assist them in their roles.

The LLM monitored the customer chats in real time to provide agents with suggestions on how to respond. This enabled agents to respond more quickly, answer more chats per hour, and more successfully resolve chats, according to the paper.

However, the study found that the increase in productivity disproportionately benefited workers who had less skill and experience.

Also: ChatGPT's 'accomplishment engine' is beating Google's search engine, says AI ethicist

The AI tools helped fill the experience gap. Workers with two months of tenure who used AI performed as well as workers with six months of tenure who didn't use AI.

As a result, the study found that high-skill workers don't have as much to gain from using AI assistance since the AI recommendations are essentially imitating the knowledge the high-skill workers already possess.

In addition to optimizing workers' productivity, AI can also help improve how workers are treated by customers, reducing the likelihood of calls being escalated to a supervisor.

Also: ChatGPT: Who's using this AI tool and why?

As a whole, this study provides a real-world example of how AI can be used to positively assist some workers instead of simply replacing them.

The 5 biggest risks of generative AI, according to an expert

Generative AIs, such as ChatGPT, have revolutionized how we interact with and view AI. Activities like writing, coding, and applying for jobs have become much easier and quicker. With all the positives, however, there are some pretty serious risks.

A major concern with AI is trust and security, which has even caused some countries to ban ChatGPT entirely or to reconsider their AI policies to protect users from harm.

Also: This new technology could blow away GPT-4 and everything like it

According to Gartner analyst Avivah Litan, some of the biggest risks of generative AI concern trust and security and include hallucinations, deepfakes, data privacy, copyright issues, and cybersecurity problems.

1. Hallucinations

Hallucinations refer to the errors that AI models are prone to make because, although they are advanced, they are still not human and rely on training and data to provide answers.

If you've used an AI chatbot, then you have probably experienced these hallucinations through a misunderstanding of your prompt or a blatantly wrong answer to your question.

Also: ChatGPT's intelligence is zero, but it's a revolution in usefulness, says AI expert

Litan says the training data can lead to biased or factually incorrect responses, which can be a serious problem when people are relying on these bots for information.

"Training data can lead to biased, off-base or wrong responses, but these can be difficult to spot, particularly as solutions are increasingly believable and relied upon," says Litan.

2. Deepfakes

A deepfake uses generative AI to create videos, photos, and voice recordings that are fake but take the image and likeness of another individual.

Perfect examples are the AI-generated viral photo of Pope Francis in a puffer jacket or the AI-generated Drake and the Weeknd song, which garnered hundreds of thousands of streams.

"These fake images, videos and voice recordings have been used to attack celebrities and politicians, to create and spread misleading information, and even to create fake accounts or take over and break into existing legitimate accounts," says Litan.

Also: How to spot a deepfake? One simple trick is all you need

Like hallucinations, deepfakes can contribute to the massive spread of fake content and misinformation, which is a serious societal problem.

3. Data privacy

Privacy is also a major concern with generative AI since user data is often stored for model training. This concern was the overarching factor that pushed Italy to ban ChatGPT, claiming OpenAI was not legally authorized to gather user data.

"Employees can easily expose sensitive and proprietary enterprise data when interacting with generative AI chatbot solutions," says Litan. "These applications may indefinitely store information captured through user inputs, and even use information to train other models — further compromising confidentiality."

Also: AI may compromise our personal information

Litan highlights that, in addition to compromising user confidentiality, the stored information also poses the risk of "falling into the wrong hands" in an instance of a security breach.

4. Cybersecurity

The advanced capabilities of generative AI models, such as coding, can also fall into the wrong hands, causing cybersecurity concerns.

"In addition to more advanced social engineering and phishing threats, attackers could use these tools for easier malicious code generation," says Litan.

Also: The next big threat to AI might already be lurking on the web

Litan says even though vendors who offer generative AI solutions typically assure customers that their models are trained to reject malicious cybersecurity requests, these suppliers don't equip end users with the ability to verify all the security measures that have been implemented.

5. Copyright issues

Copyright is a big concern because generative AI models are trained on massive amounts of internet data that is used to generate an output.

This process of training means that works that have not been explicitly shared by the original source can then be used to generate new content.

Copyright is a particularly thorny issue for AI-generated art of any form, including photos and music.

Also: How to use Midjourney to generate amazing images

To create an image from a prompt, image-generating AI tools, such as DALL-E, refer back to the large database of photos they were trained on. The result of this process is that the final product might include aspects of an artist's work or style that are not attributed to them.

Since the exact works that generative AI models are trained on are not explicitly disclosed, it is hard to mitigate these copyright issues.

What's next?

Despite the many risks associated with generative AI, Litan doesn't think organizations should stop exploring the technology. Instead, they should create an enterprise-wide strategy that targets AI trust, risk, and security management.

"AI developers must urgently work with policymakers, including new regulatory authorities that may emerge, to establish policies and practices for generative AI oversight and risk management," says Litan.

How to use Microsoft Edge’s integrated Bing AI Image Creator

Microsoft has upped its AI game by integrating its Bing AI Image Creator directly into Edge. Previously available as a separate tool, the Image Creator is now accessible from Edge's sidebar, allowing you to describe the image you want generated as well as edit and share it.

Also: The best AI art generators to try

Let's see how this works.

How to use Bing AI Image Creator within Microsoft Edge

First, make sure you're running the latest version of Microsoft's browser. In Edge, click the three-dot More icon in the upper right, go to Help and feedback, and then select About Microsoft Edge. Any available update will automatically be downloaded and installed.

Click Crop to flip, angle, or crop the image. Click the buttons in the lower right to flip the image horizontally or vertically. Buttons in the lower left enable you to rotate the image 90 degrees clockwise or counterclockwise. Drag the slider right or left to change the angle of the image. Drag the handles to crop the image as you wish. As you flip, resize, and crop the image, you can drag and drop the image itself within the frame to reposition it.

Also: How to use Midjourney to generate amazing images

You can also undo your last action by clicking the Undo button at the top, or return the image to its original state by clicking the Reset button.

The generated images are all saved in Edge so you can access any previous ones. Under the Create button, click the heading for Creations. At the bottom of the sidebar under Recent, move the scroll bar to jump from one set of images to another. Click the set you want to see to view the four images.

Nvidia says it can prevent chatbots from hallucinating

Nvidia, the tech giant responsible for inventing the first GPU (a now-crucial piece of technology for generative AI models), unveiled new software on Tuesday that has the potential to solve a big problem with AI chatbots.

The software, NeMo Guardrails, is meant to ensure that smart applications powered by large language models (LLMs), such as AI chatbots, are "accurate, appropriate, on topic and secure," according to Nvidia.

Also: The 5 biggest risks of generative AI, according to an expert

The open-source software lets AI developers set up three types of boundaries for AI models: topical, safety, and security guardrails.

The topical guardrails would prevent the AI application from exploring topics in areas that are not necessary or desirable for the intended use. Nvidia gives the example of a customer service assistant not answering questions about the weather.
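
In NeMo Guardrails, these boundaries are written in its Colang configuration language. A topical rail along the lines of Nvidia's weather example might look roughly like this (a sketch based on the project's documented syntax; the specific message and flow names here are my own):

```
define user ask about weather
  "What's the weather like today?"
  "Is it going to rain?"

define bot decline weather question
  "I'm a customer service assistant, so I can't help with weather questions."

define flow weather
  user ask about weather
  bot decline weather question
```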

This type of guardrail would have been useful for Bing Chat when it was first released and began divulging company secrets.

Also: How to use Microsoft Edge's integrated Bing AI Image Creator

The safety guardrails are an attempt to tackle the issue of misinformation and hallucinations.

When employed, these guardrails ensure that AI applications respond with accurate and appropriate information. For example, the software can enforce bans on inappropriate language and require that credible sources be cited.

The security guardrails would simply restrict apps from reaching external applications that are deemed unsafe.

Also: Generative AI can make some workers a lot more productive, according to this study

Nvidia claims that virtually all software developers will be able to use NeMo Guardrails, since the guardrails are simple to use, work with a broad range of LLM-enabled applications, and work with the tools that enterprise app developers already use, such as LangChain.

The company will be incorporating NeMo Guardrails into its Nvidia NeMo framework, which is already mostly available as open-source code on GitHub.

It will also be offered as part of the Nvidia AI Enterprise software platform and as a service through Nvidia AI Foundations.