Why Amazon Q Deserves Another Chance 

At re:Invent, amid much fanfare, AWS introduced Amazon Q, a generative AI chatbot designed specifically for business needs. The company claimed that, unlike OpenAI’s ChatGPT, it is safer and more secure. Contrary to these assertions, however, Amazon Q has drawn attention for all the wrong reasons.

Just three days after the launch, concerns are mounting among employees about the chatbot’s accuracy and privacy. Q is reportedly “suffering from significant hallucinations” and has been implicated in leaking sensitive data, such as the locations of AWS data centres, internal discount programs, and unreleased features.

Unsurprisingly, Amazon quickly released a statement saying, “No security issue was identified as a result of that feedback. We appreciate all of the feedback we’ve already received and will continue to tune Q as it transitions from being a product in preview to being generally available.”

A Case for Amazon’s Q

One distinctive capability highlighted at re:Invent is that employees can use Amazon Q to complete tasks in popular systems such as Jira, Salesforce, ServiceNow, and Zendesk. For example, an employee could ask Amazon Q to open a ticket in Jira or create a case in Salesforce.

Interestingly, Amazon Q hasn’t been generally released yet, and criticisms are already mounting. Being in preview, it’s expected to undergo corrections as necessary.

“Companies need to realise that it is incredibly difficult to make an LLM not hallucinate. At best they can minimise it to some degree and won’t be able to get rid of it. What OpenAI did with GPT-4 is a herculean act that others may not be able to easily imitate,” said Nektarios Kalogridis, founder and CEO of DeepTrading AI, addressing concerns about Amazon Q.

Also, Amazon Q cannot be blamed directly for hallucinating, as it can work with any of the models available on Amazon Bedrock, AWS’s repository of AI models, which includes Meta’s Llama 2 and Anthropic’s Claude 2.

The company said customers using Q typically choose the model that works best for them, connect to it through the Bedrock API, use it to learn their data, policies, and workflows, and then deploy Amazon Q. Any instances of hallucination could therefore stem from any of those underlying models.

Moreover, ChatGPT has also had its share of issues with leaking sensitive information. Most recently, it leaked private and sensitive data when told to repeat the word ‘poem’ indefinitely. But that hasn’t deterred enterprises from using ChatGPT.

Similar to Amazon Q, OpenAI’s ChatGPT Enterprise isn’t broadly available yet. OpenAI’s COO, Brad Lightcap, revealed in a recent interview that ‘many, many, many thousands’ of companies are on the waiting list for the tool. Since November, 92 percent of Fortune 500 companies have used ChatGPT, a significant increase from 80 percent in August.

Enterprise Chatbots are the Future

Despite the concerns raised, Amazon Q offers substantial benefits.

Just like ChatGPT Enterprise, Amazon Q will also allow customers to connect to their business data, information, and systems, so it can synthesise everything and provide tailored assistance to help employees solve problems, generate content, and take actions relevant to their business.

The above features are a result of retrieval-augmented generation (RAG), which retrieves data relevant to a question or task and provides it as context for the LLM. However, RAG carries a risk of potential data leaks, similar to what occurred with Amazon Q.
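The RAG flow described above can be sketched in a few lines. This is a minimal, hypothetical illustration: the retrieval step here is naive keyword overlap, whereas production systems such as Amazon Q or ChatGPT Enterprise use embedding-based search over a vector store and pass the prompt to a real LLM rather than printing it.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by how many words they share with the query; return top k.

    A stand-in for embedding search: real RAG pipelines compare vector
    similarity, not raw word overlap.
    """
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query: str, context: list[str]) -> str:
    """Prepend the retrieved passages so the LLM answers from them."""
    context_block = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{context_block}\n\nQuestion: {query}"


# Toy internal knowledge base (illustrative data only).
docs = [
    "Refund requests are processed within 5 business days.",
    "Support tickets can be opened via the internal Jira board.",
    "The cafeteria is open from 8am to 6pm on weekdays.",
]

query = "How do I open a support ticket?"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)  # This prompt would then be sent to the chosen LLM.
```

The data-leak risk Mollick describes lives exactly in this step: whatever the retriever surfaces is handed to the model verbatim, so without access controls on the document store, sensitive passages can end up in a response.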

Ethan Mollick, professor at Wharton, noted that RAG has its own advantages and disadvantages: “I say it a lot, but using LLMs to build customer service bots with RAG access to your data is not the low-hanging fruit it seems to be. It is, in fact, right in the weak spot of current LLMs – you risk both hallucinations & data exfiltration.”

OpenAI introduced something similar at DevDay with its Assistants API, which includes a tool called ‘Retrieval’, essentially a RAG function. It augments the assistant with knowledge from outside OpenAI’s models, such as proprietary domain data, product information, or documents provided by users.

Apart from OpenAI and AWS, Cohere is quietly collaborating with enterprises to incorporate generative AI capabilities.

Cohere was among the first to recognise the importance of RAG as a method to reduce hallucinations and keep a chatbot up to date. In September, Cohere introduced the Chat API with RAG. With this feature, developers can combine user inputs, data sources, and model outputs to create strong product experiences.

Despite the concerns being raised about hallucination and data leaks, enterprises cannot completely ditch generative AI chatbots. The technology is bound to improve over time, and this is just the beginning.

The post Why Amazon Q Deserves Another Chance appeared first on Analytics India Magazine.

