How Freshworks is Revamping Generative AI Space


The tech industry has been swept up in the wildfire of generative AI, igniting a frenzy of excitement. SaaS companies are enthusiastically embracing its vast potential, especially since the introduction of language models and the ease of accessing diffusion models through APIs.

As this cutting-edge technology continues to evolve, cloud-based SaaS unicorn Freshworks has been building AI-integrated solutions for half a decade.

In 2018, Freshworks launched Freddy, their AI engine focused on reducing repetitive tasks. Recently, they introduced three new generative AI services – Freddy Self Service, Freddy Copilot, and Freddy Insights. They are also working on proprietary language models and integrating general-purpose LLMs to meet customer needs. These predictive and assistive AI capabilities enhance productivity for support agents, sellers, marketers, IT teams, and leaders. In beta releases, these capabilities delivered time savings of as much as 83%.

AIM got in touch with Prakash Ramamurthy, Chief Product Officer of Freshworks, to understand more about their AI ventures.

Freshworks’ AI-driven Solutions & Partnerships

At the forefront of Freshworks’ AI offerings lies Freddy Self Service, which enables businesses to provide personalised customer and employee service through conversational bots, helping companies deliver top-notch service efficiently while enhancing customer satisfaction and loyalty. Freddy Copilot, a personal AI assistant that acts as an “always-on” collaborator, offers contextual information and insights and offloads repetitive tasks. Freshworks’ Freddy Insights offers proactive insights and recommendations via a conversational interface, propelling businesses to success.

According to Ramamurthy, the success of Freshworks’ AI-driven solutions is also attributed to strategic partnerships with industry giants such as Microsoft, Google, and AWS. Notably, Freshworks has integrated Microsoft Azure OpenAI Service into their Freddy AI solutions, resulting in a significant boost to their capabilities.

Leveraging Azure Cognitive Services, including the OpenAI completions API and the GPT-3.5-turbo and GPT-4 models, Freshworks has incorporated a host of advanced features into its products. These features – message expansion, tone enhancer, summariser, and rephraser – have revolutionised how users handle their tasks, enabling them to streamline workflows and achieve more in less time.
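The article doesn’t detail how these features invoke the models, but a feature like the summariser could, in principle, be wired to the Azure OpenAI chat completions API along these lines. The endpoint, deployment name `gpt-35-turbo`, and prompt below are illustrative assumptions, not Freshworks’ actual setup:

```python
# Hypothetical sketch of a "summariser" feature backed by the Azure
# OpenAI chat completions API. Endpoint, deployment name, and prompt
# are placeholders, not Freshworks' actual configuration.
import json
import urllib.request

def build_summarise_request(ticket_text: str) -> dict:
    """Assemble a chat-completions payload that asks for a short summary."""
    return {
        "messages": [
            {"role": "system",
             "content": "Summarise the customer ticket in two sentences."},
            {"role": "user", "content": ticket_text},
        ],
        "temperature": 0.2,  # low temperature keeps summaries consistent
        "max_tokens": 120,
    }

def summarise(ticket_text: str, endpoint: str, api_key: str) -> str:
    """POST to an Azure OpenAI deployment and return the model's summary."""
    url = (f"{endpoint}/openai/deployments/gpt-35-turbo/chat/completions"
           "?api-version=2023-05-15")
    req = urllib.request.Request(
        url,
        data=json.dumps(build_summarise_request(ticket_text)).encode(),
        headers={"api-key": api_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The same payload shape, with a different system prompt, would cover features like the tone enhancer or rephraser.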

For Freddy Copilot, Freshworks taps into the power of Azure OpenAI Service’s most potent GPT-3 model, text-davinci-003. By doing so, Freshworks ensures that businesses of all sizes can access the advantages of generative AI while maintaining robust security measures and responsible governance.

“By leveraging Microsoft’s advanced AI technologies, Freshworks is poised to provide innovative and secure AI-driven solutions, offering customers enhanced experiences and empowering businesses to succeed in their operations,” he added.

Addressing ML Model Deployment Issues

The deployment of ML models remains a significant challenge for many enterprises worldwide. According to Gartner, only 53 percent of projects make it from prototype to production, even at organisations with some level of AI experience.

To address this issue, Freshworks accelerates AI model deployment by leveraging existing technology and tailoring it to meet customer needs. Freddy AI enhancements (Freddy Self Serve, Copilot, and Insights) offer up to 83% reduction in effort during beta deployments.

In terms of supporting developers, Freshworks has extended its commitment by allowing over 2,500 developers who use the Freshworks Developer Platform to utilise Freddy Copilot. This integration has significantly accelerated the development of innovative, high-quality, and reliable apps, reducing the time it takes to go from idea to app to just nine to 10 weeks.

As for recent projects executed by Freshworks in AI, one notable case study involves their customer iPostal1, a global leader in digital mailbox technology. iPostal1 chose Freshworks for its CRM system, which allowed them to customise internal workflows to better serve their customers and unify messaging channels on a single platform.

When it comes to Freshworks’ technology infrastructure, tools, and platforms, the company focuses on delivering value to customers through a deep understanding of how their solutions are used in day-to-day operations. They actively engage with customers, shadowing support agents, sitting with salespeople, and collaborating with marketers to observe their processes and challenges.

Tracing Back A Little

Founded in Chennai in 2010 by Girish Mathrubootham, Freshworks has over 5,000 employees and a strong presence in the US, UK, EU, APAC, MEA, India, and LATAM. It operates from 13 office locations, with headquarters in California.

Serving an impressive client base of over 65,000, including prominent names like American Express, Blue Nile, Bridgestone, Databricks, Fila, Klarna, and OfficeMax, Freshworks collaborates with 400+ technology partners. It was the first Indian SaaS company ever to be listed on Nasdaq.

Freshworks closed Q2 2023 with solid financials, posting a 5.3% quarter-over-quarter revenue surge while narrowing its net loss by 17.8%.

In terms of total revenue, Freshworks reached $145.1 million, a 19% year-over-year increase compared to Q2 2022 and 20% growth when adjusted for constant currency.

That marks a substantial improvement over the first quarter of the calendar year, which came in at $137.7 million. In FY22, the company generated total revenue of $498.0 million, up 34% from the previous year, and surpassed $500 million in annual recurring revenue.

With state-of-the-art technology and a solid track record of financial success, Freshworks stands ready to reshape the SaaS industry.


The post How Freshworks is Revamping Generative AI Space appeared first on Analytics India Magazine.

Level up your AI skills with this ChatGPT and Python coding bundle for $30


Improve your ChatGPT skills with this training bundle.

You may have experimented with ChatGPT, and found it to be a useful tool on its own for work, school, or personal projects. But if you take the opportunity to study even basic coding, you can expand what you can do with AI simply by customizing your own chatbot.

Start your coding education during this back-to-school sale and get the 2023 Ultimate AI ChatGPT and Python Programming Bundle for only $30 until August 13.

Code your own AI chatbot with Python

This AI programming bundle primarily focuses on cultivating your skills with Python. Courses start at the very beginning and may be most useful to learners with little coding experience. If Python is a new language to you, start with Python 3: From ZERO to GUI Programming. This course gives you nine hours of programming tips to help you work through the other eight Python courses. Those introduce new skills like PDF handling and data analysis. One even gives you the chance to program an escape room.

Courses are taught by Dr. Chris Mall, who has a master's degree in IT and a PhD in Computer Science. He also teaches a course that introduces you to the theory behind AI and shows you how to program a robot. From there, you can apply your skills in two courses that walk you through the process of crafting a ChatGPT AI bot. Even if you aren't a coding expert, these courses will show you how to use ChatGPT to generate new code for you.

86 hours of Python and AI training

Back-to-school season doesn't just mean going back to a classroom. Take control of your own education, learn to code, and build your own AI chatbot.

Until August 13 at 11:59pm Pacific, get the 2023 Ultimate AI ChatGPT and Python Programming Bundle for $30. No coupon needed.


YouTube Shorts will look even more like TikToks with these new features


TikTok popularized short, vertical video content. After seeing its success, many platforms, including YouTube, have tried to leverage the app's popularity. Now, a wave of new features makes YouTube Shorts more similar to TikToks than ever.

On Tuesday, YouTube announced features that will help to optimize both the viewing and creating process for Shorts. Interestingly, most of these features are already found on TikTok and Instagram.

Also: LinkedIn is testing Microsoft's AI art generator to design your posts. Here's how it works

The first feature is called Collab, YouTube's take on TikTok's Duet feature, which allows users to react to a video in a split-screen format next to the original as it plays. Collab is already rolling out to creators on iOS and will reach more users in the coming weeks.

Live vertical videos are also coming to Shorts, another feature already found on TikTok and Instagram. With this feature, creators can livestream in a vertical video format for their followers to join.

Also: Micro-social media: What is it and which tools should you try?

Like the Live feature on TikTok, YouTube's version will allow fans to send their favorite creators funds while watching the live stream with a few taps. The YouTube demo of the Live feature showcased buttons with viewers' profiles that were next to the amount of money they gifted the creator.

YouTube is adding new effects and stickers, including a Q&A feature similar to the ones found on TikTok and Instagram, which would allow creators to ask their audience questions on their videos and see the responses populate in the comments.

To facilitate the creation of new content, creators can now respond to comments with another Short. They can also recreate a short they see with a click of the Remix button. Both of these features are already found on TikTok.

The last two new features are unique to YouTube — and they have the potential to significantly improve both the creating and viewing experience on YouTube Shorts.

Also: YouTube's new AI feature helps you decide what to watch next

The first is the option to save Shorts to playlists. YouTube has always had this option for horizontal videos, and now users can also save their favorite short-form videos to specific playlists they can revisit later.

Lastly, YouTube is experimenting with a recomposition tool that allows creators to transform their horizontal videos into Shorts easily.

YouTube says users will be able to adjust the layout of the video segment, and zoom and crop the clip to fit the Shorts format. YouTube will share more details on this feature in the coming weeks.


Why the AI World is Looking Up to NVIDIA


The huge GPU crisis is upon us, something that Tesla chief Elon Musk had warned the tech industry about. In April this year, Musk tweeted, “It seems like everyone and their dog is buying GPUs at this point,” pointing at the huge demand that would eventually lead to a shortage. Cut to the present: everyone wants to build AI products and companies. It’s an AI deluge of such magnitude that even a company like NVIDIA is struggling to build and supply enough GPUs at the moment.

The demand for high-performance GPUs, especially the NVIDIA H100, has skyrocketed. As of August 2023, the tech industry is grappling with a shortage of the highly sought-after H100s. The scarcity of these GPUs is significantly impacting AI companies that rely heavily on them for model training and inference tasks.

"People think Nvidia is overpriced"
"Should we tell them the universe is just a simulation running on Nvidia GPUs?" pic.twitter.com/hyHx7PSqkt

— Adam Singer (@AdamSinger) August 1, 2023

Gossip of the valley

Andrej Karpathy from OpenAI said, “Who’s getting how many H100s and when is the top gossip of the valley right now.” Interestingly, Stephen Balaban, CEO of Lambda, said, “Lambda has a few thousand more H100s coming online before the end of this year — if you need 64 H100s or more, DM me.” The situation is that dire at the moment.

Various AI leaders, including Adam D’Angelo, CEO of Quora, and Sam Altman, CEO of OpenAI, have voiced their concerns about the GPU shortage. OpenAI has revealed that the limited GPU supply is hindering its short-term plans, including model fine-tuning and dedicated capacity. This is possibly one of the reasons the company is still stuck on GPT-4 and unable to fulfil the promises it made for its LLMs.

Moreover, it is not just AI companies: several categories of organisations have significant demand for H100 GPUs. These include startups building LLMs, cloud service providers (CSPs) like Azure, GCP, and AWS, larger private clouds like CoreWeave and Lambda, and other prominent companies like Musk’s Tesla. Musk had already bought thousands of NVIDIA GPUs before anyone else, reserving them for xAI. Possibly, everyone else is now bidding against that stockpile, with chatter that Altman was even convinced to buy the ai.com domain in exchange for GPUs.

According to reports, GPT-4 was probably trained on around 10,000 to 25,000 NVIDIA A100s. For GPT-5, Musk suggested it might require 30,000 to 50,000 H100s, while in February 2023, Morgan Stanley predicted GPT-5 would use 25,000 GPUs. With so many GPUs required and NVIDIA the only reliable supplier in the market, it falls to the chip giant to make the situation better.

Who Needs How Much

According to a recent blog, OpenAI is estimated to require around 50,000 H100 GPUs, while Inflection AI is looking for approximately 22,000 units. The requirements for Meta are uncertain, but it is rumoured that they may need around 25,000 GPUs, possibly even exceeding 100,000 units.

Who’s getting how many H100s and when is top gossip of the valley rn https://t.co/AxarseOmg9

— Andrej Karpathy (@karpathy) August 2, 2023

The major cloud service providers, including Azure, Google Cloud, AWS, and Oracle, may each seek around 30,000 GPUs. Private clouds like Lambda and CoreWeave are also expected to demand a total of 100,000 GPUs. Other AI-focused companies such as Anthropic, Helsing, Mistral, and Character might individually need around 10,000 units.

It’s important to note that these figures are approximate estimates, and some overlap may occur between cloud providers and their end customers. Considering these numbers, the total demand for H100 GPUs could be around 432,000 units. At an estimated price of $35,000 per GPU, this translates to a staggering $15 billion worth of GPUs – all of which goes to NVIDIA.
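The $15 billion figure follows as simple arithmetic on the article’s own estimates:

```python
# Multiplying the estimated total demand by the estimated unit price
# (both figures are the article's approximations, not NVIDIA's numbers).
units = 432_000
price_per_gpu = 35_000          # US dollars
total = units * price_per_gpu
print(f"${total / 1e9:.2f}B")   # roughly $15 billion
```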

Additionally, it’s worth mentioning that this estimation does not include Chinese companies like ByteDance (TikTok), Baidu, and Tencent, which are likely to have substantial demand for H800 GPUs, a variant specially designed for the Chinese market.

While the future outlook remains uncertain, the industry is hopeful that increased supply and advancements in GPU technology will eventually ease the shortage. For example, NVIDIA has also been talking about releasing A800s that could deliver comparable compute for building AI models, though that remains questionable. Until then, AI companies must navigate this challenging period by exploring alternative GPU options and partnerships to continue their crucial work in the field of artificial intelligence.

GPU scarcity is the moat

To make things worse, experts in the industry fear that the current GPU scarcity may create a self-reinforcing cycle: scarcity itself becomes a moat, leading to further GPU hoarding and exacerbating the shortage. That is probably why Musk hoarded GPUs in the first place. The next-generation H100 successor is not expected until late 2024, adding to the concerns.

In 2010, we used Jensen Huang's @nvidia GPUs to show that deep feedforward nets can be trained by plain backprop without any unsupervised pretraining. In 2011, our DanNet was the first superhuman CNN. Today, compute is 100+ times cheaper, and NVIDIA 100+ times more valuable.… pic.twitter.com/UeysuP03SO

— Jürgen Schmidhuber (@SchmidhuberAI) July 31, 2023

Acquiring H100s has emerged as a significant concern for AI companies, hindering their operations and causing delays in product rollouts and model training. The AI boom’s unprecedented demand for computational power has exacerbated the situation, leading to a scarcity of essential components used in GPU manufacturing.

NVIDIA has been backing almost every AI startup in the world, effectively funding startups so that they can get off the ground and buy its GPUs. Now that it has established a GPU monopoly and made the market dependent on it, the onus is on the chip giant to fulfil the demand of the market.

But creating GPUs involves a complex manufacturing process that requires various critical components. Memory, interconnect speed (such as InfiniBand), caches, and cache latencies play vital roles in determining the performance of GPUs. The shortage of any of these components can lead to a delay in GPU production, contributing to the overall scarcity.


ChatGPT Now Shows Smart Suggestions


OpenAI has decided that now you don’t even need to type in your prompts to get answers from ChatGPT. Smart Suggestions, a new feature, offers random initial messages that can be given to ChatGPT to start a conversation.

As soon as users open the website, they can see four suggestions on top of the prompt box, where they can test out how the chatbot would respond to these prompts for further inspiration. These messages are currently random and do not relate to past conversations with ChatGPT.

Apart from initial messages, the chatbot now also offers follow-up questions to continue the conversation based on the context of the last prompt and generated response.

Not sure if this is new, or I just missed it, but ChatGPT now shows suggested follow-up prompts at the bottom of its output.
In this example, I asked if HubSpot offered a CRM, and I see two follow-up prompts (which are good suggestions). pic.twitter.com/O1jY9S7qbI

— dharmesh (@dharmesh) August 3, 2023

People have been suggesting improvements to the way we interact with chatbots for a long time. For example, AIM argued that understanding context and giving users follow-up question options was already needed. From here, there could be simple options such as ELI5 (Explain Like I’m 5) to begin a conversation. When equipped with a suitable interface, LLMs can function seamlessly and provide assistance comparable to that of a human colleague.

These LLM-based chatbots felt like a natural progression from typing into message boxes to conversing with AI. Simply put, humans are so used to typing into white boxes in search engines and messaging apps that talking to AI the same way seemed like a simple transition. Now, to make chatbots feel more seamless, OpenAI has decided to introduce more ways to interact with them.

On the other hand, the ChatGPT user base has dropped since June. This might be for several reasons, such as the availability of the API and a general loss of interest in the chatbot. It seems OpenAI is now taking further steps to improve how people interact with ChatGPT through smart suggestions, which come after the general availability of Code Interpreter.


Steg.AI’s Unique Watermarking Approach Overshadows Tech Giants

In May 2023, the world was left shell-shocked as it witnessed images of the Pentagon shrouded in smoke. Many news channels reported the incident based on these images and even the stock market reacted to the news, dropping for a brief period of time. Later, it was reported that it was a fake AI-generated image.

Such instances highlight the challenge of identifying AI-generated content in various contexts.

It also brought back the larger discussion on deep fakes and fake images—which has been amplified due to Generative AI tools and their ability to produce hyperrealistic images just from prompts.

All of this has led to governments across the globe scrambling for solutions and a way to safely regulate Artificial Intelligence. Prominent AI firms, including Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, have voluntarily committed to measures like watermarking AI-generated content to enhance its safety, but there hasn’t been any solid development yet.

Current methods like encoding data into images or audio can be easily bypassed. A robust, invisible watermark that’s easily applied and detected, yet resistant to transformations, is necessary, as studies suggest that it is difficult for humans to differentiate human and AI-generated content. As online IP theft is rampant, the ability to prove content creation’s origin is increasingly essential.

Meanwhile, Steg.AI, a California-based platform, has developed a deep learning-based solution that embeds nearly imperceptible watermarks into digital content. Even if the images are altered, compressed, or manipulated, the Steg.AI watermark remains intact. Remarkably resilient, these watermarks can even be captured using an iPhone camera when displayed on screens or printed.

Steg.AI’s watermarking solution finds applications in diverse scenarios, such as stock photography services, content sharing on platforms like Instagram, pre-release copies of films, and safeguarding confidential documents. Early iterations of their product faced challenges, leading to a shift in focus towards robustness, a standout feature that resonated with customers.

How it Works

Steg.AI’s core concept involves seamlessly integrating watermarks into AI-generated images before distribution. While the specifics of the process remain proprietary, the basic idea revolves around a pair of machine-learning models: an encoding model customises the watermark’s placement within the image, ensuring it stays imperceptible to the human eye, while a decoding model detects and reads the watermark back.

Analogous to an invisible, mostly unchangeable QR code, this method potentially holds kilobytes of data – sufficient for URLs, hashes, and plaintext information. Each page of a multi-page document or video frame could harbour distinct codes, exponentially increasing the capacity.
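Steg.AI’s learned watermark is proprietary, but the encode/decode pairing it relies on can be illustrated with a deliberately naive toy: hiding a payload in pixel least-significant bits. A real learned watermark spreads the signal redundantly across the image so it survives compression and cropping; LSBs do not.

```python
# Toy illustration of the encoder/decoder pairing (NOT Steg.AI's
# proprietary method): hide one payload bit in the least-significant
# bit of each pixel value, then recover the payload.
def encode(pixels: list[int], bits: list[int]) -> list[int]:
    """Overwrite the LSB of each pixel with one payload bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def decode(pixels: list[int]) -> list[int]:
    """Read the payload back from the LSBs."""
    return [p & 1 for p in pixels]

image = [200, 13, 255, 90, 128, 7, 64, 33]
payload = [1, 0, 1, 1, 0, 0, 1, 0]
stamped = encode(image, payload)
assert decode(stamped) == payload                             # payload survives
assert all(abs(a - b) <= 1 for a, b in zip(image, stamped))   # change is tiny
```

The fragility of this toy is exactly why Steg.AI trains models instead: an LSB watermark vanishes under the compression and screen-capture transformations the article says their watermark withstands.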

The company’s extensive work can be traced back to a 2019 CVPR paper, along with the acquisition of Phase I and II SBIR government grants. Co-founders Eric Wengrowski and Kristin Dana, who were previously involved in academic research, have dedicated years to refining their approach.

Steg.AI’s progress has been supported by NSF grants and angel investments totaling $1.2 million. Recently, the company announced a significant milestone with a $5 million seed funding round led by Paladin Capital Group, accompanied by participation from Washington Square Angels, NYU Innovation Venture Fund, and individual angel investors.

What Big Techs are Doing

Major tech companies have taken steps to incorporate watermarking into their content. Microsoft announced at its annual Build conference that it’s adding new media provenance features to Bing Image Creator and Designer, enabling users to verify AI-generated images and videos. This innovation involves cryptographic methods to mark and sign content with metadata indicating its origin. For this, websites must adopt the Coalition for Content Provenance and Authenticity (C2PA) specification developed with Adobe, Arm, Intel, Microsoft, and Truepic.

However, the impact of Microsoft’s efforts relies on broader media provenance standard adoption, with support from companies like Stability AI and Google who are also exploring similar approaches. Shutterstock and Midjourney have adopted guidelines to embed markers indicating generative AI-created content.

On the other hand, collaborative research involving Meta AI, Centre Inria de l’Universite de Rennes’, and Sorbonne University has developed an innovative technique that seamlessly incorporates watermarking into the image generation process while preserving the architecture. This method modifies pre-trained generative models to effectively integrate watermarks into generated images, enhancing security and computational efficiency. This technology enables model providers to distribute versions of their models with distinct watermarks for different user groups, facilitating ethical usage monitoring.

The technique is valuable for media organisations in identifying computer-generated images. Leveraging Latent Diffusion Models (LDM), the researchers successfully integrated watermarks with minimal adjustments to generative models. The process involves fine-tuning LDM decoders using perceptual image loss and hidden message loss from a streamlined deep watermarking method called HiDDeN. The technique showcases strong performance in image editing tasks, even with heavily cropped images, maintaining the original model’s utility across various LDM-based tasks.
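Schematically, the fine-tuning objective described above can be written as a weighted sum of the two losses. The notation here is illustrative shorthand, not the paper’s exact formulation:

```latex
\mathcal{L} \;=\; \mathcal{L}_{\mathrm{message}}\big(W(D_\theta(z)),\, m\big)
\;+\; \lambda\,\mathcal{L}_{\mathrm{perceptual}}\big(D_\theta(z),\, D_0(z)\big)
```

where \(D_\theta\) is the fine-tuned LDM decoder, \(D_0\) the original decoder, \(z\) a latent, \(W\) the HiDDeN-style watermark extractor, \(m\) the hidden message, and \(\lambda\) trades off imperceptibility against message recovery.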

The move by the seven big tech firms late last month supports the Biden administration’s push to regulate the booming and popular AI technology. The US Congress is also reviewing a bill that would mandate the disclosure of AI involvement in creating political ads.


Bing Chat now has the highly-requested dark mode. Here’s how to turn it on


The AI-powered feature is now available in light, dark, or automatic mode.

Microsoft continues to improve its AI-powered Bing Chat in an effort to make the service more accessible for users, and the latest update includes the ability to change its appearance from the default Light mode to Dark mode.

Also: How to use Bing Chat (and how it's different from ChatGPT)

To recap, Bing Chat is Microsoft's generative AI chatbot, created in response to ChatGPT. Microsoft invested heavily in OpenAI and has been building the powerful technology behind ChatGPT and the DALL-E image creator into its search engine and across Azure and Microsoft Office.

Since launching at the beginning of the year, Bing AI for desktop has only been available in Light mode, with visual elements rendered in brighter hues. But a new Dark mode setting flips things around, arguably for the better.

How to turn on Dark mode in Bing Chat


What you'll need: At the moment, you must use Microsoft Edge to access Bing Chat on your desktop. However, you don't need to log in to a Microsoft account to change your appearance settings.

Also: Bing AI chat expands to Chrome and Safari for select users

Also: Google and Microsoft partner with OpenAI to form AI safety watchdog group

That's all you need to set your Bing Chat appearance to dark mode.


FAQ

Is dark mode available for the mobile app?

You can't change the appearance settings for Bing Chat on the mobile app, but the theme will adjust if you have system-wide dark mode turned on. Likewise, if your phone is natively set to light mode, Bing Chat will also be displayed in light mode.


AI can conduct breast cancer screenings in less time than humans but just as well, study finds


Breast cancer is a significant health issue in the US, with approximately 240,000 women being diagnosed with breast cancer every year, according to the Centers for Disease Control and Prevention (CDC).

As a result, women are encouraged to do yearly breast screenings or mammograms to detect breast cancer early when treatment is the most effective. Now, AI can help with those screenings.

Also: Amazon Clinic expands telemedicine services nationwide

A randomized trial published in The Lancet Oncology journal involved 80,000 women between the ages of 40 and 80, with a median age of 54, to compare the efficacy of AI in reading mammograms against standard readings by radiologists.

To conduct the trial, the women who opted to participate in the study across the four screening sites in Sweden were randomly assigned to AI-supported screenings or standard double readings without AI on a 1:1 ratio, meaning half were screened by AI and the other half by radiologists.

The results of the study were promising.

The AI-supported screenings detected 244 cancers, comparable to the 203 identified by the standard screenings. The cancer detection rates were 6.1 per 1,000 participants in the AI group and 5.1 per 1,000 in the control group.
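The per-1,000 rates follow directly from the arm sizes: with 80,000 participants split 1:1, each arm had roughly 40,000 women (an approximation; the paper reports exact arm sizes):

```python
# Back-of-the-envelope check on the reported detection rates.
per_arm = 80_000 // 2                    # ~40,000 women per arm (approximate)
ai_rate = 244 / per_arm * 1000           # roughly 6.1 cancers per 1,000
control_rate = 203 / per_arm * 1000      # roughly 5.1 cancers per 1,000
```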

Also: The best blood pressure watches

The false positive rates were 1.5% in both groups, meaning they incorrectly identified a screening as cancerous at the same rate.

Both the AI screenings and the radiologist screenings produced very comparable results, which is especially impressive because of the massive potential work reduction the technology could have on radiologists.

The trial found that radiologists' screen-reading workload was reduced by 44.3% through the implementation of AI.

"AI-supported mammography screening resulted in a similar cancer detection rate compared with standard double reading, with a substantially lower screen-reading workload, indicating that the use of AI in mammography screening is safe," the study concluded.

The implementation of AI technology in the medical field could help radiologists allocate their time to other responsibilities, such as face-to-face patient interactions, that could better impact patient experience.


LinkedIn is testing Microsoft’s AI art generator to design your posts. Here’s how it works


LinkedIn posts are a great way to showcase your latest career accomplishments with your network. However, LinkedIn feeds are so saturated with posts that it can be hard to stand out. AI is here to help with that.

Also: These are my 5 favorite AI tools for work

A post shared on X, formerly known as Twitter, by a beta user shows that LinkedIn is testing the integration of Microsoft Designer on its platform. With the integration, users will be able to create unique visual assets within LinkedIn.

Microsoft Designer is Microsoft's take on Canva, a graphic design platform that can be used to create everything from social media posts to presentations, and even company branding. The major difference is that Designer leverages AI.

Microsoft's take on a graphic design platform uses generative AI throughout its platform to enable users to create anything they want by simply using text. LinkedIn is testing adding those abilities to its platform to help users create original posts without leaving the app.

Also: How AI can turn any photo into a professional headshot

As shown by the video attached to the X post, LinkedIn users would have the opportunity to add an image and have Designer incorporate it into an original creation with a simple text prompt.

The instructions can include as much or as little detail as you like, including colors, motif, and purpose. Once the prompt is finalized, Designer will generate multiple renditions within the platform, streamlining the posting process.

LinkedIn has quickly adopted AI on its platform, unveiling several AI features including AI-generated recruiter messages, profile sections, copy suggestions for ads, and even post generation. If LinkedIn continues to keep up its current pace, we can expect to see this integration soon.


Could AI disclaimers on Instagram help you spot AI-generated influencers?


Instagram could be working on a feature that would notify users when AI has partially or totally created a post they come across. According to a post from app researcher Alessandro Paluzzi, posts made by AI will have an accompanying label explaining that AI played a part in the post's creation.

Also: 5 things to know about Meta's Threads app before you entangle your Instagram account

But could the labels help people spot when generative AI creates an entire account?

Three popular influencers on Instagram have amassed over eight million followers and received brand deals worth millions of dollars. However, all three are AI-generated. Lil Miquela, Imma, and Shudu are Instagram influencers whose photos are AI-generated.

They each live exciting lives and have partnered with brands like Dior, Calvin Klein, Chanel, and Prada. Despite each AI influencer having a variation of the phrase "digital persona" in their bio and all of their photos having an uncanny valley feel, many commenters and followers believe the influencers are real.

Each digital influencer is the product of a tech firm that employs graphic designers and digital artists to create images of the influencers with the help of artificial intelligence.

Digital influencers are attractive to brands and marketing companies because they cut the costs associated with travel, eliminate language barriers, and can change their look to conform to any brand at the drop of a hat.

Also: Can AI detectors save us from ChatGPT? I tried 5 online tools to find out

More importantly, digital influencers aren't a brand risk. They don't have opinions or political values and don't have any questionable tweets from ten years ago. There is nothing a digital influencer can do that would jeopardize the integrity of a brand.

Additionally, digital influencers can't age or do something to their appearance that doesn't align with a brand's values. For example, Lil Miquela has been 19 since the creation of her Instagram account in 2016. Since then, she's collaborated with celebrities, graced magazine covers, and raked in millions of dollars.

Also: How to achieve hyper-personalization using generative AI platforms

But when people quickly scroll past Lil Miquela's posts, how many can immediately tell she is AI-generated? Experts say not many. Young people are particularly impressionable, and the content they see online shapes their view of themselves and the world around them.

And with digital influencers, they can look perfect and live an ideal life at all times, which could add to the pressures and uneasy feelings teens get from scrolling social media.

So, is it up to Instagram to "out" Lil Miquela as an AI-generated influencer, to her "owners," or to Instagram users to better judge the content they consume? The government says new agencies should be created to hold Big Tech to stricter standards.

Big Tech says such regulation could stifle innovation, while many users believe the responsibility lies with tech and social media companies.

AI-generated content labels would not be the first time Instagram has tried to help users better understand the content they come across. In 2020, during the throes of the COVID-19 pandemic, Instagram blocked hashtags that spread vaccine misinformation and provided users with trusted information about COVID-19 and the vaccines.

Also: Instagram feed fix: How to see more of what you want (and less of what you don't)

But generative AI isn't as cut and dried as providing links to the National Health Service or the Centers for Disease Control and Prevention. AI content can be harder to spot and contain, and tech companies must combat the possible dangers of misinformation propagated by generative AI.
