Beware, ChatGPT Can Control You Through Your Phone

OpenAI has now confidently entered the next phase of possible AI domination. With the release of the ChatGPT app on iOS, the chatbot is now available to all iPhone users. The rollout is phased: the app is currently available in the US market alone and will expand to other countries and other platforms (Android) soon. Within 12 hours of its release on the App Store, the ChatGPT app became the No. 1 app under ‘Productivity’, and it still holds that spot three days later (as seen on the App Store). With this rapid rise, is this truly the ‘no turning back’ moment for OpenAI? And with OpenAI pushing adoption to new heights, won’t data privacy concerns peak as well?

ChatGPT’s iOS app is out!
It’s beautifully simple and *crazy* fast.
It even has super easy audio input which is absolutely fantastic.
I’ll be using this all the time on the go! pic.twitter.com/CFqabbS9MW

— Mckay Wrigley (@mckaywrigley) May 18, 2023

OpenAI has been on a continuous growth trajectory, with steady feature additions and new subscription models. Its latest announcement, bringing over 70 plugins and web browsing to ChatGPT Plus users, is OpenAI’s way of getting people hooked on its ecosystem. However, with looming data security issues and fears of misuse and leakage of sensitive information, trust has always been a concern with OpenAI.

There’s Everything for All

OpenAI continues to guard its secrets by staying closed source. Though there have been reports of the company planning an open-source AI model, it will have nothing to do with the GPT models currently powering ChatGPT. However, probably to appease the open-source community, and perhaps to dodge future criticism of running an opaque AI system, OpenAI has integrated Whisper, its open-source speech-recognition system, into the app. With Whisper on board, the charge that nothing in ChatGPT is open source no longer holds.

No Turning Back?

The Whisper integration enables voice input, letting users speak their queries to the chatbot. Much like Google’s voice search, the ChatGPT app is now competing head-on with the largest search engine. With ChatGPT plugins, access to real-time data is no longer a concern. Beyond search, it also goes up against the many productivity applications built for sorting and simplifying tasks.

With Whisper integrated into ChatGPT, a user has a plethora of ways to use the combined power of both. For instance, a user can record voice conversations or meeting notes and simply ask ChatGPT to run tasks such as summarising the speech, pulling key insights from it, offering suggestions, and lots more. However, there are no details yet on the maximum recording duration.

Look at applications such as Notion, a productivity tool for creating and organising notes, databases, calendars and the like, or Microsoft Loop, which does much the same: the ChatGPT app could easily catch up with all of them. With over 100 million users and 1.8 billion visits per month, it won’t be difficult.

Within just a few days of its release, people have already started sharing their positive experiences on how the app’s Whisper feature is helping them with tasks.

I’ve found myself using the ChatGPT app just for voice transcription. I open the app from the lock screen, hit record, and paste it into another app.
I’ve never used voice features because they were always more trouble then they are worth. It’s great when it works so well!

— Greg Mushen (@gregmushen) May 20, 2023

The company also mentioned that ChatGPT Plus subscribers will have an advantage, getting early access to features, faster response times, and “GPT-4’s capabilities” – a convincing push to take up its subscription service.

AI Safety Remains a Dream

While everything sounds hunky-dory, the looming question of AI safety persists. With multiple security and privacy concerns already surrounding ChatGPT, won’t a mobile app simply open up further data-breach vulnerabilities? The company has given out lengthy statements and announcements on how it will prioritise AI safety, but so far there have been no concrete plans on ‘how’ it would go about it.

The biggest irony is that right after the release of the ChatGPT app on iOS, Apple banned its employees from using ChatGPT at work, owing to fears of data collection and leaks. Even at the recent Senate hearing on AI where Sam Altman testified, when questions on AI safety came up, the company conveniently passed the buck by calling for an agency to regulate safety.

Now, with the ChatGPT app, one can only wonder about the kind of data collection that will happen. OpenAI even mentioned in its app announcement that it would gather ‘user feedback’ and work on further ‘feature and safety improvements’ for ChatGPT. But, as always, the details remain cryptic and unclear.

The post Beware, ChatGPT Can Control You Through Your Phone appeared first on Analytics India Magazine.

‘Cheap Labour a Huge Stumbling Block for AI in India’

In an era when generative AI rose to prominence, speculation ran rife about employees facing potential job losses as AI emerged as a potent replacement. Notably, IBM garnered attention when it disclosed plans to employ AI in lieu of its laid-off workforce. Even amidst this climate of uncertainty, numerous AI startups witnessed an unprecedented surge in funding. However, amidst the hype, only a handful of companies managed to effectively harness the technology and identify practical use cases. GoTo, formerly known as LogMeIn, emerged as one such company, solidifying its position as one of the largest Boston-based enterprises.

Analytics India Magazine got in touch with Andrew Kernebone, APAC solutions consulting director, GoTo, to understand how the company leverages generative AI to provide solutions to customers and what are the challenges they face. “I think there are a lot of unknowns for many companies, but I’m excited by what I see already,” he said, when asked about how he feels about the adoption of generative AI in the IT sector.

Giving an example, he said, “If you look at the service industry, for instance, in BPOs, where they provide service desk functionality for organisations, customer experience is the key to success.” According to him, giving agents augmented, instant AI advice is powerful: AI can think faster than humans and guide agents with customised responses, which results in a much better experience for the end user.

“The agents receive real-time advice and have a voice in their ear, and that’s really exciting to me. In the near term, the augmentation of our capabilities in this space excites me the most,” he said.

Data Security

However, when asked about the issue of the company’s IP codes falling into the public database when using models like ChatGPT, he pointed towards the email he received minutes ago. It was an update from the company’s internal IT security team discussing guidelines on how the company should use generative AI capabilities internally. They emphasised the importance of being cautious with public tools like ChatGPT, as they are accessible to anyone. “By inputting intellectual property into such tools,” says Kernebone, “we essentially add it to the engine’s knowledge base, making it potentially accessible to others. Security is undoubtedly a top concern for many people in this regard.”

Kernebone believes that having access to a private, exclusive language model becomes crucial, rather than relying solely on the public interface. From a systems perspective, he says, the challenge lies in integrating the company’s own toolset with a secure implementation of generative AI.

Additionally, he said, on the employee side there is the aspect of ongoing security education and training. Phishing attacks are incessant, and they are likely to become more sophisticated with AI. It’s essential to constantly remind and train employees to be vigilant and adept in using these tools within a secure framework.

ChatGPT Use Cases and Layoffs

Speaking of the use cases of ChatGPT in the company, he said, “One of the main areas we’re currently focusing on is our global use of unified communications capabilities.” He explained that the company has a feature called customer engagement, which essentially provides a streamlined call centre capability, allowing agents to interact with customers across multiple digital mediums. “We now have an integration with ChatGPT that gives the agent real-time advice on responses. So when they’re dealing with customers, they get real-time customised messages for that customer, in banking or any other industry,” said Kernebone.

When asked if the adoption of generative AI will lead to less hiring in future, he was of the opinion that it goes back to any of the great leaps in technology. “If we look back to when we transitioned from manual to machine manufacturing, those who manufactured things with their hands lost their jobs. Jobs changed,” he said.

However, he believes that it’s more about augmentation and making people more efficient. He agreed that the companies will likely use these advancements to grow without needing to hire as many people. “But at the same time, this opens up new ideas and possibilities that we haven’t even thought of yet. It presents opportunities for new businesses, new models, and new employment opportunities,” said Kernebone. “While we tend to focus on what we might lose, I believe there is tremendous potential for what we might gain.”

Kernebone also asserted that when it comes to countries like India with cheap labour, the adoption of AI might be costlier than hiring new people. “AI is not free, it relies on computational resources housed in data centres or distributed across multiple data centres,” he said.

He believes that the excitement of the adoption of AI may be more pronounced in countries with higher labour costs, where companies see the potential for achieving more with fewer employees. In contrast, in countries where labour costs are lower, he believes that the equation may change over time.

“While AI might seem like a compelling idea on paper, cost analysis becomes a significant factor to consider. This backend information adds an intriguing layer to the conversation that is often overlooked,” he said.


Fake Pentagon attack hoax shows perils of Twitter’s paid verification

By Amanda Silberling

Surprising literally no one, the combination of paid blue checks and generative AI makes it all too easy to spread misinformation. On Monday morning, a seemingly AI-generated image of an explosion at the Pentagon circulated around the internet, even though the event didn’t actually happen.

Within about half an hour, the image appeared on a verified Twitter account called “Bloomberg Feed,” which could very easily be mistaken for a real Bloomberg-affiliated account, especially since it had a blue check. That account has since been suspended. The Russian state-controlled news network RT also shared the image, according to screenshots that users captured before the tweet was deleted. Several Twitter accounts with hundreds of thousands of followers, like DeItaone, OSINTdefender and Whale Chart, shared it. Even an Indian television network reported the fake Pentagon explosion. It is not immediately clear where this fake image and news story originated.

Prime example of the dangers in the pay-to-verify system: This account, which tweeted a (very likely AI-generated) photo of a (fake) story about an explosion at the Pentagon, looks at first glance like a legit Bloomberg news feed. pic.twitter.com/SThErCln0p

— Andy Campbell (@AndyBCampbell) May 22, 2023

This is far from the first time that a fake image has successfully tricked the internet, but the stakes are higher when the fake event is an explosion at a U.S. government building rather than the Pope wearing a Balenciaga coat. Some have reported that the fake image could be tied to a 25 basis point movement of the S&P 500, but the dip didn’t last long, and there’s no way to prove that it was entirely a result of this hoax. The incident does raise the question of how generative AI could be used to game the stock market in the future – after all, Reddit did it.

Misinformation is an issue as old as the internet, but the simultaneous growth of generative AI and change in Twitter’s verification system makes for especially fertile ground. From the get-go, Twitter owner Elon Musk’s plan to strip existing blue checks of their status and let anyone pay for the symbol has been a mess. Even if we know that blue checks no longer indicate legitimacy, it’s hard to break a visual habit you’ve cultivated for almost 15 years: If you see an account called “Bloomberg Feed” that has a blue check posting about an attack on the Pentagon, you’re probably predisposed to think it’s real. As it gets more and more difficult to spot fake images, we’ll only continue seeing false news reports like this in the future.


Forget LLMs, Large Knowledge Models are The Future of AI Chatbots


Do chatbots based on large language models (LLMs) need to improve? For sure. Exactly how that would come about is what researchers have been pursuing all this while. Now, Mark Cuban, the American billionaire and AI enthusiast, has come up with a solution: train models with access to significant intellectual property and data, turning them into large knowledge models.

The only problem with this, as Cuban explained, is that the data would not be free. That would set off another race in the AI world: which big tech company (Google, Microsoft or Meta) would be the first to pay for the data, and how much would it cost?

This matters because companies have been using GPT-based models and APIs to generate content, but for many companies the data that chatbots like ChatGPT are trained on is irrelevant. That is why OpenAI introduced plugins for ChatGPT, so that enterprises can let the chatbot access their proprietary data and respond to queries based on it.

This also comes with a lot of privacy issues. Many companies are hesitant to work with GPT or Bard and upload their private information, which would then be accessible to big tech.

What Would Win

Steve Jarret, head of AI and data at Orange, described LLMs as an OS platform. What this means is that these chatbots, in their basic structure, provide only simple capabilities, much as Android or iOS lets you operate your phone. The plugins OpenAI introduced are like the apps we install: designed for specific use cases, and extending the phone’s capabilities.

This means that the big tech companies building LLMs and offering them to every organisation possible need to integrate more features into their core offerings like ChatGPT or Bard. In particular, these features should let the LLMs link to a company’s proprietary data. That would help the models provide more knowledge-driven answers, which is what companies want in the end.

If OpenAI or Google does not embed the ability to ingest proprietary data directly, without plugins, the people building those plugins will start selling them straight to enterprises. That would make the LLMs irrelevant on their own.

For example, a company that builds a plugin to access financial data from a website could sell that plugin to be linked to any LLM. This is why the companies need to buy exclusive access to intellectual property data, which would let them train knowledgeable chatbots that enterprises can use as is.

Moreover, exclusive data would also allow use-case-specific generative models to be combined with LLMs to produce better-focused responses. Chatbots built for specific use cases would be ideal for individual companies: fed with proprietary data, they could generate the responses users intend, in the way current LLMs do.

Who Would Win

Elon Musk is bidding to build his own rival to ChatGPT. Though there hasn’t been much information lately about Musk’s intentions for generative AI, he has roped in many researchers to build something around the trending technology. Interestingly, Musk has something that no other company has – Twitter data – which is a veritable gold mine.

OpenAI had access to this data until Musk, on taking charge of Twitter, realised it and pulled the data away. He now plans to file a lawsuit against the company for using the data to build ChatGPT. He is also building an all-encompassing shell firm called X Corp.

There is a high possibility that Musk’s “based” bot would be the one that is truly “knowledgeable”. Cuban predicts that Musk’s TruthGPT would be ahead of Google, Meta, and Microsoft’s offerings and is also expected to be open source. “He can weigh his own tweets and those of the sources he likes and end up with a consumer facing AI that can be a virtual Elon. Pretty cool. Pretty scary,” said Cuban.

All of this is why I think @elonmusk is starting his own @truthgpt, or whatever he will call it. He gets to take the entire @twitter firehose to train or feed any open source model and have a competitor to the big 3. He can weight his own tweets and those of the sources he… https://t.co/tWk4Kfp8tv

— Mark Cuban (@mcuban) May 18, 2023

The scary part about building an AI model on Twitter data is the amount of fine-tuning it requires to filter out the irrelevant and, in some cases, dangerous opinions of people.

On the other hand, if Musk does not build a chatbot on his “intellectual property data”, someone else will. When Bard was released, several people prompted it about its dataset, to which it replied that it was trained on Gmail data. Google was quick to respond that Bard was just hallucinating, but the episode still raises questions about privacy around the chatbot. Moreover, is Gmail data actually the intellectual property of Google?

The same is the case with Microsoft and OpenAI’s ChatGPT. Musk would be able to avoid any legal ramifications if he uses Twitter data to train his AI model, unlike OpenAI, which is being sued and blamed for training on private data.

Twitter’s privacy policy clearly states: “By publicly posting content, you are directing us to disclose that [shared] information as broadly as possible, including through our APIs, and directing those accessing the information through our APIs to do the same.”

This might be concerning for people who do not want an AI to be trained on their data, but the truth is that by posting on Twitter, users have agreed to the privacy policy. Looks like Musk bought the $44-billion Twitter just for the data. Now he has the sole right to build a chatbot using social media data.

This might also be the chance for either Google or OpenAI to build their moat with intellectual property data, instead of publicly available data.


Need ideas? Add an AI chatbot like Google Bard to the brainstorming process

This illustration shows a three-step brainstorming concept: Think, prompt and share.
Image: Andy Wolber

There are many methods for conjuring up creative concepts, and exploration engines such as Google Bard and ChatGPT add another option to a brainstormer’s toolkit: prompts.

A series of well-phrased prompts to an AI chatbot can rapidly provide you with text generated by the large language models that drive such systems. These exploration engines let you add a prompt phase to your brainstorming process: think, prompt and then share.

SEE: My 4 Google Bard search prompting tips

How to use an AI chatbot in your brainstorming process

Before using an AI chatbot for the prompt step, be sure that your company permits employees to use this technology. As reported in The Wall Street Journal, Apple is one of the companies restricting employees’ use of AI chatbots.

If that hurdle is cleared, define the problem and think about it independently. You should document the problem and an initial set of ideas in whatever tool works well for you, whether it’s a new Google Doc, a blank board in Jamboard, cells in a Google Sheet, or perhaps a sheet of paper and a pencil.

After you’ve exhausted your initial exploration of ideas, try prompting. You might try a three-phase approach of an initial query with a couple of follow-on requests.

For example, suppose you’re brainstorming how to train employees, and you have access to Google Bard. You might try a series of prompts as shown in Figures A, B and C in sequence.

Figure A

This screenshot shows a prompt in Google Bard.
Form your initial prompt. For example: “Can you suggest 20 effective ways to teach a group of 100 people about computer security concepts and practices?” Note that the system won’t necessarily reply with the number of concepts you request.

Figure B

This screenshot shows a prompt in Google Bard.
Next, request additional ideas. For example: “Please suggest 15 more ideas” or “More ideas?” Note that the responses received include several different concepts.

Figure C

This screenshot shows a prompt in Google Bard.
Encourage a bit more creativity in the responses with an additional prompt: “How about 15 unusual ways to teach security?” This returns yet another set of ideas for your consideration.

If you need more ideas, you might try additional rounds of prompting with different wording. Continuing the example above, other prompts could be:

  • “Create a syllabus for a 5-part computer security training course aimed at employees.”
  • “Write a 75-word workshop description for a computer security session for employees.”
  • “What are three of the most unusual things people should know about computer security?”
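For teams that want to capture each round of prompting in a shareable document automatically, the initial-query-plus-follow-ups sequence described above can be sketched as a short script. This is a minimal illustration only: `ask_chatbot` is a hypothetical placeholder to be replaced with whichever chatbot interface or API your company has approved, and the Markdown output format is an assumption, not part of any chatbot's actual behavior.

```python
# Sketch of the three-phase brainstorming prompt loop: an initial query
# followed by two follow-up requests, collected into one document.

def ask_chatbot(prompt: str) -> str:
    """Hypothetical placeholder: swap in a call to your approved chatbot API."""
    return f"(response to: {prompt})"

def brainstorm(topic: str) -> str:
    """Run an initial prompt plus two follow-ups and collect the replies as Markdown."""
    prompts = [
        f"Can you suggest 20 effective ways to {topic}?",  # initial query
        "Please suggest 15 more ideas",                    # request additional ideas
        f"How about 15 unusual ways to {topic}?",          # push for more creativity
    ]
    sections = []
    for prompt in prompts:
        reply = ask_chatbot(prompt)
        # Keep each round under its own heading so it is easy to review later.
        sections.append(f"## {prompt}\n{reply}")
    return "\n\n".join(sections)

doc = brainstorm("teach a group of 100 people about computer security concepts")
print(doc)
```

Collecting each round under its own heading makes it straightforward to paste the results into a Google Doc or Jamboard for the review and sharing steps that follow.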

Review the responses, then edit and move the ideas into your brainstorming Google Doc, Sheet or Jamboard (Figure D), where you can group or arrange concepts for consideration or discussion.

Figure D

This screenshot shows a prompt in Google Bard and a snippet moved to a sticky note.
Move useful ideas into your brainstorming document. The example shows a section of text copied from Bard (right) and pasted into a sticky note in Jamboard (left).

Don’t be too quick to delete ideas unless items are repetitive, misguided or otherwise off-target. Keep the weird, whimsical and wacky ideas at this point. Depending on the number and nature of the ideas, you may want to reorganize the content to place related ideas close to one another.

At this point, you’re ready to share your ideas with your team. First, share your source document, board or sheet with the members of your team. Ideally, give people Commenter access, so they may select and add their thoughts. Make sure every member of your team has time to review and comment on all the content. Next, meet either in person or perhaps via Google Meet to discuss.

Injecting a chatbot into your brainstorming efforts can help you and your team generate more ideas, but you still need to take time to think and evaluate ideas before you act. Message or mention on Mastodon (@awolber) to let me know how AI chat systems change your productivity processes.


Meta’s Breakthrough Language Model on Par with GPT-4 and Bard in Performance

Researchers from Meta AI, alongside Carnegie Mellon University, the University of Southern California and Tel Aviv University, today unveiled LIMA, a 65-billion-parameter LLaMA language model fine-tuned with the standard supervised loss on only 1,000 carefully curated prompts and responses, without any reinforcement learning or human preference modelling.

Check out the research paper: LIMA: Less Is More for Alignment

Meta’s AI chief Yann LeCun said that this is on par with GPT-4 and Bard in terms of performance.

The researchers said that LIMA has shown strong capabilities in learning specific response formats with minimal training examples. They said that it can effectively handle complex queries, ranging from planning travel itineraries to speculating about alternate history.

Interestingly, one of the notable aspects of the new model’s performance is its ability to generalise well to unseen tasks that were not part of its training data. In other words, LIMA can apply its learned knowledge to new and unfamiliar tasks, demonstrating a level of flexibility and adaptability, the researchers shared.

Compared with GPT-4, Bard, and DaVinci-003 (trained with human feedback), the responses generated by LIMA were quite impressive. In 43% of cases, responses from LIMA were either equivalent to or preferred over GPT-4’s. Against Bard, LIMA was preferred in 58% of cases, and against DaVinci-003 the preference rose to almost 65%.

It is interesting to note that with only limited instruction tuning data, models like LIMA can generate high-quality output.

A few weeks back, Meta also released MEGABYTE, a scalable architecture for modelling long sequences. The new technique has outperformed existing byte-level models across a range of tasks and modalities, enabling models to handle sequences of over 1 million tokens.


Google upgrades its AI flood-forecasting technology to help more people


A demo of Google's flood-forecasting technology shown at a Google AI event in November.

We've seen artificial intelligence (AI) do everything from write an essay to develop an entire business — but did you know that it can help mitigate the dangers of natural disasters, too?

Last November, Google released its AI-enabled flood-forecasting platform, FloodHub, to 20 countries around the globe.

Also: How to use Google Bard now

This platform takes Google's AI flood-tracking technology and uses the data to display where and when floods might occur on an interactive world map.

Google has now expanded FloodHub to 80 countries worldwide, adding 60 new countries across Africa, the Asia-Pacific region, Europe, and South and Central America.

Also: Every major AI feature announced at Google I/O 2023

The newly added regions include areas with some of the highest percentages of people exposed to flood risk, such as the Netherlands, Vietnam, Myanmar, Laos, and Cambodia.

Another important upgrade is that FloodHub will be able to provide forecasting up to seven days in advance, which is a big increase from 48 hours last year.

Also: How to use ChatGPT: What you need to know now

The platform will be able to provide forecasting to 460 million people around the globe, according to Google.

Within the next year, Google plans to expand flood forecasting alerts in Search and Maps to deliver critical information to its users in their times of need.


Bing Chat gets a new wave of updates, including (finally) chat history


Bing Chat's chat history sidebar.

Microsoft continued its hot streak of churning out Bing Chat updates on Friday. This wave of updates included long-awaited features such as an export button and a chat history feature.

Now, instead of having your chats with Bing disappear when you start a new conversation, Bing will remember the history of previous chat threads and display them on the right-hand side of the chat window.

Also: The best AI chatbots

You can also personalize your history by deleting and renaming your chats based on your needs, similar to the chat history on ChatGPT.

However, there was no mention of whether you can toggle chat history on or off, a feature ChatGPT recently had to implement to give users autonomy over which of their data gets used to train OpenAI's models.

Lastly, you can share or export your conversation to a PDF, text file, or Microsoft Word Document to save your findings in a more structured way.

Also: Google Bard's AI says urgent action should be taken to limit (*checks notes*) Google's power

You can also expect to see more visual elements within your answers as more charts and visualizations were added to aid with more complete and useful responses.

Some responses may even include a video response that users can click on to launch in full-screen video overlay.

For example, if you look up, "How to tie a tie?", a video with timestamps could come up as part of your answer to help you master the skill.

Other updates announced in the blog post include improved auto-suggest, optimized answers for recipes, and privacy improvements in the Edge sidebar.

Also: Today's AI boom will amplify social problems if we don't act now, says AI ethicist

A major update not included in the blog post but spotted by users on Twitter was the removal of a chat limit and chat turns.

Instead of seeing a chat limit in the lower left-hand corner of each individual chat, you now see a character tracker with a limit of 4,000 characters per chat.

We can expect even more updates to be announced at Microsoft's annual developer conference, Microsoft Build, on Tuesday.


TCS Partners with Google Cloud to Offer Generative AI Services for its Customers

Amid the generative AI skepticism in the enterprise sector, the Indian IT giant Tata Consultancy Services (TCS) today announced that it is partnering with Google Cloud to leverage its generative AI services, alongside designing and deploying custom business solutions for their customers.

The company said in its press release that the new offering is powered by Google Cloud’s generative AI tools – Vertex AI, Generative AI Application Builder and Model Garden – and TCS’ own solutions. TCS will work with its customers to custom-build their solutions based on the context they provide.

TCS believes that it is well positioned to build innovative enterprise-level solutions using generative AI. TCS’s Krishnan Ramanujam said that this partnership will help the company to rapidly create value for their customers. He said that the company is investing in assets, frameworks, and talent to harness the power of generative AI to enable growth for their customers.

TCS is moving aggressively in the enterprise generative AI landscape. Recently, TCS also announced that it is working on developing its own alternative to GitHub Copilot, to revamp enterprise code generation.

Read: Gen AI is All Over the IT World, Except on Ground

Most Indian IT companies today claim to be working on generative AI solutions for their customers, though a majority are still in the PoC stage. Last week, Cognizant announced the launch of its enterprise-wide platform, Cognizant Neuro AI, which goes beyond experimentation and PoCs. The platform gives enterprises a comprehensive approach to accelerating the adoption of generative AI, alongside harnessing business value in a flexible, secure, scalable and responsible manner.

Accenture unveiled a new study titled ‘A New Era of Generative AI for Everyone’, exploring generative AI and LLMs. Tech Mahindra, on the other hand, is taking a distinctive approach with its Generative AI Studio, empowering businesses with a user-friendly interface that facilitates a myriad of customisation options for their content. Now, Capgemini claims that it has a unique moat when it comes to deploying generative AI solutions.

The post TCS Partners with Google Cloud to Offer Generative AI Services for its Customers appeared first on Analytics India Magazine.

The Future of AI: Exploring the Next Generation of Generative Models

Image by Author

If you’re keeping up with the tech world, you’ll know that Generative AI is the hottest topic around, with ChatGPT, DALL-E, and more dominating the conversation.

The recent breakthroughs in Generative AI will drastically alter how we approach content creation and accelerate the growth of AI tools across all sectors. Grand View Research stated in their Artificial Intelligence Market Size, Share & Trends Analysis Report:

“The global artificial intelligence market size was valued at USD 136.55 billion in 2022 and is projected to expand at a compound annual growth rate of 37.3% from 2023 to 2030.”

More and more organizations, from all sectors and backgrounds, are looking to upskill with Generative AI every day.

What is Generative AI?

Generative AI refers to algorithms that create new and unique content, such as text, audio, code, images, and more. As the field develops, Generative AI has the potential to transform various industries by handling tasks that were once thought impossible.
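At its core, the generative idea is simple: learn the statistical structure of some data, then sample new instances from it. The toy sketch below illustrates this with a character-level Markov chain — a deliberately minimal stand-in, not the transformer architecture behind models like ChatGPT; all function names here are illustrative:

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Record which character follows each context of `order` characters."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context].append(text[i + order])
    return model

def generate(model, seed, length=40, rng=None):
    """Sample new text by repeatedly picking a continuation seen in training."""
    rng = rng or random.Random(0)
    out = seed
    for _ in range(length):
        context = out[-len(seed):]
        choices = model.get(context)
        if not choices:  # dead end: this context never appeared in training
            break
        out += rng.choice(choices)
    return out

model = train("generative ai generates new content from data")
print(generate(model, "ge"))
```

Real generative models replace the lookup table with a neural network and the characters with tokens, pixels, or audio samples, but the learn-then-sample loop is the same.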

Generative AI is already creating art that can mimic artists such as Van Gogh. The fashion industry can potentially use generative AI to create new designs for their next line. Interior designers can use generative AI to build someone their dream home within days, rather than weeks and months.

Generative AI is fairly new, a work in progress and still needs time to perfect itself. However, applications such as ChatGPT have set the bar high and we should expect to see more innovative applications getting released in the coming years.

The Role of Generative AI

As mentioned before, generative AI is still a work in progress, so there are no fixed limits on what it can do. As of today, however, its uses fall into three broad categories:

  1. Producing new content/information: This can range from creating a new blog, a video tutorial, or some fancy new art for your wall. However, it can also help in the development of a novel drug.

  2. Replacing repetitive tasks: Generative AI can take over employees' tedious and repetitive tasks such as emails, presentation summaries, coding and other types of operations.

  3. Customizing data: Generative AI can create content for specific customer experiences, and this can be used as data to improve success, ROI, marketing techniques, and customer engagement. Using consumers’ behavioral patterns, companies will be able to distinguish effective strategies and methods.

Below is an example of one of the most popular types of generative AI models, Diffusion Models.

Diffusion Model

The diffusion model is designed to learn the underlying structure of a dataset by mapping it to a lower-dimensional latent space. Latent diffusion models are a type of deep generative neural network, developed by the CompVis group at LMU Munich and Runway.

In the forward diffusion process, noise is slowly added to the compressed latent representation until the result is pure noise. The model then learns to run this process in reverse: noise is gradually removed in a controlled way, so an image slowly emerges that resembles the original data.
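The forward half of this process can be sketched in a few lines of NumPy. This is a toy example assuming a linear beta (noise) schedule and a 1-D array standing in for a latent; real latent diffusion models pair this with a learned denoising network for the reverse direction, and the function names here are illustrative:

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear noise schedule; alpha_bar tracks how much signal survives to step t."""
    betas = np.linspace(beta_start, beta_end, T)
    alpha_bars = np.cumprod(1.0 - betas)
    return alpha_bars

def forward_diffuse(x0, t, alpha_bars, rng):
    """Jump directly to step t: x_t = sqrt(a_bar)*x0 + sqrt(1-a_bar)*noise."""
    noise = rng.standard_normal(x0.shape)
    a_bar = alpha_bars[t]
    return np.sqrt(a_bar) * x0 + np.sqrt(1.0 - a_bar) * noise

rng = np.random.default_rng(0)
alpha_bars = make_schedule()
x0 = np.ones(8)                          # stand-in for a clean latent
x_late = forward_diffuse(x0, 999, alpha_bars, rng)
# By the final step alpha_bar is tiny, so x_late is almost pure Gaussian
# noise -- exactly the starting point the reverse (denoising) process needs.
```

Training the reverse process amounts to teaching a network to predict the noise added at each step, which is the part this sketch leaves out.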

Image by Author

Use Cases of Generative AI

Generative AI has been widely adopted by organizations across many sectors. These tools help them fine-tune their existing processes and methods and execute them more effectively. For example:

Media

Whether it is a new article, a new image for the website, or a video, Generative AI has taken the media sector by storm, allowing it to produce content faster and at lower cost. Personalized content has let organizations take customer engagement to the next level, providing a more effective customer retention strategy.

Finance

Financial institutions already use AI tools such as intelligent document processing (IDP) for KYC and AML processes. Generative AI takes their customer analysis further, discovering new patterns in consumer spending and flagging potential issues.

Healthcare

Generative AI can enhance images such as X-rays and CT scans, providing more accurate visualizations, better-defined images, and faster diagnoses. For example, illustration-to-photo conversion through GANs (Generative Adversarial Networks) has given healthcare professionals a more in-depth understanding of a patient's current medical state.

Governance Challenges of Generative AI

With anything great comes risk, right? The rise of generative AI has raised the question of how governments will be able to control the use of these tools.

For a while now, the AI field has been open for organizations to do as they please. However, it was only a matter of time before fixed regulations were created around AI. Many are concerned about the supervision of generative AI models and their socio-economic impact, as well as issues such as intellectual property and privacy infringement.

The main challenges that generative AI is currently facing in terms of governance are:

  • Data Privacy — Generative AI models require a lot of data to produce accurate outputs. Data privacy is a challenge for all AI companies and tools due to the potential misuse of sensitive information.
  • Ownership — Intellectual property rights for content created by generative AI are still an open discussion. Some say the content is unique, whereas others say text-generated content is paraphrased from a variety of internet sources.
  • Quality — With the high volume of data fed into generative AI models, the top concern is the quality of that data and, in turn, the accuracy of the generated output. Fields such as medicine are of particular concern, as misinformation there can have serious consequences.
  • Bias — Alongside data quality, we also need to evaluate possible bias in the training data, which can lead to discriminatory outputs and erode trust in AI.

Wrapping it up

Generative AI still has a long way to go before it is accepted by everyone. These models need a better understanding of human speech across different cultural backgrounds. Common sense comes naturally to us when speaking with someone, but not to AI systems, which struggle to adapt to unfamiliar circumstances because they can only draw on the data they were trained on.

It will be interesting to see what role generative AI will play in the future. We have to wait and see.

Nisha Arya is a Data Scientist, Freelance Technical Writer and Community Manager at KDnuggets. She is particularly interested in providing Data Science career advice or tutorials and theory based knowledge around Data Science. She also wishes to explore the different ways Artificial Intelligence is/can benefit the longevity of human life. A keen learner, seeking to broaden her tech knowledge and writing skills, whilst helping guide others.
