AI Development Lifecycle: Complete Breakdown in 2023

Artificial intelligence (AI) has emerged as a game-changing technology in recent years, offering businesses the potential to unlock new insights, streamline operations, and deliver superior customer experiences. 91.5% of leading businesses have invested in AI on an ongoing basis. As AI continues to grow as a powerful solution to modern business problems, the AI development lifecycle is becoming increasingly complex. Today, AI developers face several challenges that must be addressed throughout the AI lifecycle, including data quality and quantity and selecting the right model architecture.

Hence, realizing AI benefits requires a structured and rigorous approach to AI development that spans the entire lifecycle, from problem definition to model deployment and beyond. Let’s explore the different stages of a successful AI development lifecycle and discuss the various challenges faced by AI developers.

9 Stages of Building A Successful AI Development Lifecycle

Developing and deploying an AI project is an iterative process that requires revisiting earlier steps for optimal results. Here are the nine stages of building a successful AI development lifecycle.

1. Business Objective and Use Case

The first step of the AI development lifecycle is identifying the business objective or problem that AI can solve and developing an AI strategy. Having a clear understanding of the problem and how AI can help is crucial. Equally important is having access to the right talent and skills for developing an effective AI model.

2. Data Collection and Exploration

After establishing a business objective, the next step in the AI lifecycle is collecting relevant data. Access to the right data is critical to building successful AI models. Various techniques are available today for data collection, including crowdsourcing, scraping, and the use of synthetic data.

Synthetic data is artificially generated information helpful in different scenarios, such as training models when real-world data is scarce, filling gaps in training data, and speeding up model development.
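As a minimal illustration, synthetic records can be generated by sampling from simple distributions; the field names and distribution parameters below are purely hypothetical stand-ins for statistics a real project would fit to real-world data.

```python
import random

random.seed(42)  # reproducible sampling

# Hypothetical generator: each synthetic customer is drawn from simple
# distributions standing in for ones fitted to real-world statistics.
def synthetic_customer():
    return {
        "age": max(18, int(random.gauss(40, 12))),            # normal, clipped
        "monthly_spend": round(random.expovariate(1 / 50.0), 2),  # exponential
        "churned": random.random() < 0.2,                      # 20% base rate
    }

synthetic_data = [synthetic_customer() for _ in range(1000)]
print(synthetic_data[0])
```

In practice, the value of synthetic data depends entirely on how faithfully the generating distributions mirror the real population, which is why it is usually used to fill gaps rather than replace real data outright.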

Once the data is collected, the next step is to perform exploratory data analysis and visualizations. These techniques help to understand what information is available in the data and which processes are needed to prepare the data for model training.
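A first pass at exploration can be as simple as profiling each field for completeness and range. A minimal sketch in plain Python, using a made-up customer schema:

```python
import statistics

# Toy collected dataset with a deliberately incomplete field
# (the schema and values are hypothetical).
records = [
    {"age": 34, "income": 52000, "churned": 0},
    {"age": 29, "income": 48000, "churned": 1},
    {"age": 45, "income": None,  "churned": 0},
    {"age": 52, "income": 61000, "churned": 1},
]

# Profile each field: sample size, missing count, and basic statistics.
for field in ("age", "income", "churned"):
    values = [r[field] for r in records if r[field] is not None]
    missing = len(records) - len(values)
    print(f"{field}: n={len(values)} missing={missing} "
          f"mean={statistics.mean(values):.1f} "
          f"min={min(values)} max={max(values)}")
```

Even a crude profile like this surfaces the decisions the next stage must make, such as how to handle the missing income value.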

3. Data Preprocessing

Once data collection and exploration are done, the data goes through the next stage, data preprocessing, which helps prepare the raw data and make it suitable for model building. This stage involves different steps, including data cleaning, normalization, and augmentation.

  • Data Cleaning – involves identifying and correcting any errors or inconsistencies in the data.
  • Data Normalization – involves transforming the data to a common scale.
  • Data Augmentation – involves creating new data samples by applying various transformations to the existing data.
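The three steps above can be sketched in a few lines; the raw readings and the 0–100 validity range are invented for illustration.

```python
import random

# Hypothetical raw sensor readings: one missing value and one
# data-entry error (400.0) outside the plausible 0-100 range.
raw = [4.0, 5.2, None, 3.8, 400.0, 4.6]

# Cleaning: drop missing values and implausible readings.
cleaned = [x for x in raw if x is not None and 0 <= x <= 100]

# Normalization: min-max scaling onto [0, 1].
lo, hi = min(cleaned), max(cleaned)
normalized = [(x - lo) / (hi - lo) for x in cleaned]

# Augmentation: extra samples from small random perturbations,
# clamped back into the normalized range.
random.seed(0)
augmented = normalized + [min(1.0, max(0.0, x + random.uniform(-0.05, 0.05)))
                          for x in normalized]

print(cleaned, len(augmented))
```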

4. Feature Engineering

Feature engineering involves creating new variables from available data to enhance the model’s performance. The process aims to simplify data transformations and improve accuracy, generating features for both supervised and unsupervised learning.

It involves various techniques, such as handling missing values, outliers, and data transformation through encoding, normalization, and standardization.

Feature engineering is critical in the AI development lifecycle, as it helps create optimal features for the model and makes the data easily understandable by the machine.
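Two of the most common moves, mean imputation for missing values and one-hot encoding for categorical fields, can be illustrated on a hypothetical subscription dataset:

```python
# Hypothetical rows: one categorical field and one numeric field with a gap.
rows = [
    {"plan": "basic",   "usage": 120},
    {"plan": "premium", "usage": None},
    {"plan": "basic",   "usage": 80},
]

# Handle the missing value by imputing the mean of observed usage.
observed = [r["usage"] for r in rows if r["usage"] is not None]
mean_usage = sum(observed) / len(observed)

# One-hot encode 'plan' and assemble purely numeric feature vectors.
categories = sorted({r["plan"] for r in rows})
features = []
for r in rows:
    one_hot = [1.0 if r["plan"] == c else 0.0 for c in categories]
    usage = r["usage"] if r["usage"] is not None else mean_usage
    features.append(one_hot + [usage])

print(features)
```

The result is a numeric matrix a learning algorithm can consume directly, which is exactly what "making the data easily understandable by the machine" means in practice.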

5. Model Training

After preparing the training data, the AI model is iteratively trained. Different machine learning algorithms and datasets can be tested during this process, and the optimal model is selected and fine-tuned for accurate predictive performance.

The trained model is tuned by adjusting hyperparameters, such as the learning rate, batch size, number of hidden layers, activation function, and regularization strength, to achieve the best possible results.

Also, businesses can benefit from transfer learning, which involves reusing a model pre-trained on one task as the starting point for a related one. This can save significant time and resources by eliminating the need to train a model from scratch.
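To make the training loop concrete, here is a toy one-feature logistic regression fit by gradient descent on a tiny invented dataset; the learning rate and epoch count are exactly the kind of hyperparameters adjusted at this stage.

```python
import math

# Tiny invented dataset: one feature, binary label.
data = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]

def train(lr=0.5, epochs=300):
    """One-feature logistic regression fit by stochastic gradient descent.
    lr and epochs are hyperparameters a practitioner would tune."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1 / (1 + math.exp(-(w * x + b)))  # sigmoid prediction
            w -= lr * (p - y) * x                 # gradient of log loss
            b -= lr * (p - y)
    return w, b

w, b = train()
preds = [int(1 / (1 + math.exp(-(w * x + b))) > 0.5) for x, _ in data]
print(preds)
```

Real projects would of course reach for a library rather than hand-rolled gradient descent, but the structure, iterate over the data, compute an error, nudge the parameters, is the same.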

6. Model Evaluation

Once the AI model has been developed and trained, model evaluation is the next step in the AI development lifecycle. This involves assessing the model performance using appropriate evaluation metrics, such as accuracy, F1 score, logarithmic loss, precision, and recall, to determine its effectiveness.
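These metrics all derive from the confusion-matrix counts, and computing them by hand is straightforward; the labels and predictions below are invented for illustration.

```python
# Hypothetical ground-truth labels and model predictions.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# Confusion-matrix counts.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

accuracy  = (tp + tn) / len(y_true)            # fraction correct overall
precision = tp / (tp + fp)                     # of predicted positives, correct
recall    = tp / (tp + fn)                     # of actual positives, found
f1        = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f}")
```

Which metric matters most depends on the use case: recall dominates when missing a positive is costly, precision when false alarms are.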

7. Model Deployment

Deploying an ML model involves integrating it into a production environment to produce useful outputs for business decision-making. Different deployment types include batch inference, on-premises, cloud-based, and edge deployment.

  • Batch Inference – the process of generating predictions recurrently on a batch of datasets.
  • On-Premises Deployment – involves deploying models on local hardware infrastructure owned and maintained by an organization.
  • Cloud Deployment – involves deploying models on remote servers and computing infrastructure provided by third-party cloud service providers.
  • Edge Deployment – involves deploying and running machine learning models on local or “edge” devices such as smartphones, sensors, or IoT devices.
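Batch inference, the first option above, can be sketched as a scheduled job that scores an accumulated batch of records; the model here is a stub standing in for one deserialized from disk.

```python
# A stub standing in for a model loaded from disk; real code would
# deserialize a trained artifact here instead.
def model_predict(record):
    return int(record["score"] > 0.5)

# Score an accumulated batch of records, as a nightly job might.
batch = [{"id": i, "score": i / 10} for i in range(10)]
results = [{"id": r["id"], "prediction": model_predict(r)} for r in batch]
print(results[:3])
```

A real batch job would read the records from storage, write predictions back, and run on a schedule; the cloud and edge options trade this recurring pattern for on-demand or on-device serving.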

8. Model Monitoring

AI model performance can degrade over time due to data inconsistencies, skews, and drifts, so model monitoring is crucial for identifying when this happens. Practices such as MLOps (Machine Learning Operations) streamline the deployment of machine learning models to production and keep them performing reliably once there.
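One simple monitoring check flags drift when a feature's mean in production strays too far from its training distribution; the values and the three-standard-deviation threshold below are illustrative.

```python
import statistics

# Illustrative feature values at training time vs. in recent production
# traffic; the production numbers have drifted upward.
training_values   = [5.0, 5.2, 4.9, 5.1, 5.0, 4.8]
production_values = [6.8, 7.1, 6.9, 7.0, 7.2, 6.7]

def drifted(train, prod, threshold=3.0):
    """Flag drift when the production mean is more than `threshold`
    training standard deviations away from the training mean."""
    mu, sigma = statistics.mean(train), statistics.stdev(train)
    return abs(statistics.mean(prod) - mu) / sigma > threshold

print(drifted(training_values, production_values))
```

Production systems typically layer richer tests (distribution-distance statistics, per-segment checks) on top of this basic idea.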

9. Model Maintenance

Maintenance of deployed models is critical to ensure their continued reliability and precision. One approach is to build a model retraining pipeline that automatically re-trains the model on updated data so it remains relevant and efficient.
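Such a pipeline can be reduced to a simple policy: evaluate the deployed model on fresh labeled data and retrain when performance drops below a floor. A sketch with stubbed-out training and evaluation hooks (all names and numbers hypothetical):

```python
def retrain_if_needed(model, fresh_data, train_fn, eval_fn,
                      accuracy_floor=0.8):
    """Retrain when the deployed model underperforms on fresh labeled data.
    train_fn and eval_fn are hypothetical hooks a real pipeline supplies."""
    if eval_fn(model, fresh_data) < accuracy_floor:
        return train_fn(fresh_data), True   # deploy the retrained model
    return model, False                     # keep the current model

# Stubbed hooks so the sketch runs end to end.
current_model = {"version": 1}
fresh_batch = ["fresh", "labeled", "examples"]
model, retrained = retrain_if_needed(
    current_model, fresh_batch,
    train_fn=lambda data: {"version": 2},
    eval_fn=lambda m, data: 0.72,           # simulated degraded accuracy
)
print(model, retrained)
```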

Another approach to model maintenance is reinforcement learning, which involves training the model to improve its performance by providing feedback on its decisions.

By implementing model maintenance techniques, organizations can ensure that their deployed models remain effective. As a result, models provide accurate predictions that align with changing data trends and conditions.

What Challenges Can Developers Face During The AI Development Lifecycle?

With the increasing complexity of AI models, AI developers and data scientists can struggle with different challenges at various stages of the AI development lifecycle. Some of them are given below.

  • Learning curve: The continuous demand for learning new AI techniques and integrating them effectively can distract developers from focusing on their core strength of creating innovative applications.
  • Lack of future-proof hardware: This can hinder developers from creating innovative applications aligned with their current and future business requirements.
  • Use of complicated software tools: Developers face challenges when dealing with complicated and unfamiliar tools, resulting in slowed development processes and increased time-to-market.
  • Managing large volumes of data: It is difficult for AI developers to secure the computing power needed to process the vast amounts of data involved while also managing its storage and security.

Human Brain Reacts Differently to Table Tennis Matches Against Human and Machine Opponents

Researchers at the University of Florida have found that the brains of table tennis players react differently when playing against human opponents compared to machine opponents. The study, led by graduate student Amanda Studnicki and her advisor, Daniel Ferris, a professor of biomedical engineering, aimed to understand how our brains respond to the demands of high-speed sports like table tennis and how the choice of opponent affects this response.

Ferris explained the significance of the study: “Humans interacting with robots is going to be different than when they interact with other humans. Our long-term goal is to try to understand how the brain reacts to these differences.”

Examining the Neuroscience Behind Sports Performance

The brain’s performance during sports activities has been a subject of interest for researchers for years. In complex, fast-paced sports like table tennis, understanding how the brain processes information and controls movements can provide valuable insights into sports training and the development of more effective training methods.

This research also has implications for the future of human-robot interactions, as robots become more common and sophisticated in various aspects of human life. Understanding the brain’s response to robotic counterparts can help make artificial companions more naturalistic and improve their integration into our daily lives.

To investigate the brain’s response during table tennis matches, Studnicki and Ferris used a brain-scanning cap equipped with 240 electrodes. This allowed them to focus on the parieto-occipital cortex, the region responsible for turning sensory information into movement. They recorded the brain activity of players while they played against both human opponents and a ball-serving machine.

Studnicki said, “We wanted to understand how it worked for complex movements like tracking a ball in space and intercepting it, and table tennis was perfect for this.”

Synchronization vs. Desynchronization: The Brain’s Response to Different Opponents

The researchers observed that when playing against another human, players’ neurons worked in unison, displaying synchronization. In contrast, when playing against a ball-serving machine, the neurons in their brains were not aligned with one another, leading to desynchronization.

Ferris explained the difference: “If we have 100,000 people in a football stadium and they’re all cheering together, that’s like synchronization in the brain, which is a sign the brain is relaxed. If we have those same 100,000 people but they’re all talking to their friends, they’re busy but they’re not in sync. In a lot of cases, that desynchronization is an indication that the brain is doing a lot of calculations as opposed to sitting and idling.”

The team suspects that players’ brains were more active while waiting for robotic serves because machines provide no cues of what they are going to do next. This difference in brain processing suggests that training with a machine might not offer the same experience as playing against a real opponent.

The Future of Machine-assisted Sports Training

Although the study highlights the differences in brain activity when facing human and machine opponents, it does not dismiss the value of machine-assisted training. Studnicki believes that machines will continue to play a significant role in sports training: “I still see a lot of value in practicing with a machine. But I think machines are going to evolve in the next 10 or 20 years, and we could see more naturalistic behaviors for players to practice against.”

As technology advances, it is likely that machines will become more capable of mimicking human behavior and providing more realistic training experiences. By understanding the nuances of human brain activity in response to different opponents, researchers can contribute to the development of more effective training methods and enhance human-machine interaction.

Workers are wary — but also optimistic — about AI

Staffers aren't necessarily opposed to AI, new surveys show

Kyle Wiggers

As workplaces show increased enthusiasm for AI, staffers expect that the technology will have a major impact on their work lives.

Two recent surveys — one from Pew and the other from AI startup Scale AI — sought to gauge companies’ interest in AI as well as employees’ reactions to those plans and ambitions. The Pew report looked at responses from over 11,000 U.S. adults at a range of companies, while the Scale AI poll recruited 3,000 machine learning practitioners (i.e., people who work with AI tools but don’t necessarily build them) and executives.

The Scale AI survey results suggest spending on AI remains robust, with 72% of companies planning to “significantly” increase their AI investments every year over the next three years. Fifty-nine percent of those companies view AI as critical to their business in the next year, while 69% believe it’ll become critical in the next three years.

But what do workers think? It’s a bit of a mixed bag.

Is there a future in light-powered AI chips?

Photonics is proving to be a tough nut to crack

Kyle Wiggers

The growing compute power necessary to train sophisticated AI models such as OpenAI’s ChatGPT might eventually run up against a wall with mainstream chip technologies.

In a 2019 analysis, OpenAI found that from 1959 to 2012, the amount of computing power used to train AI models doubled every two years, and that usage began rising seven times faster after 2012.

It’s already causing strain. Microsoft is reportedly facing an internal shortage of the server hardware needed to run its AI, and the scarcity is driving prices up. CNBC, speaking to analysts and technologists, estimates the current cost of training a ChatGPT-like model from scratch to be over $4 million.

One solution to the AI training dilemma that’s been proposed is photonic chips, which use light to send signals rather than the electricity that conventional processors use. Photonic chips could in theory lead to higher training performance because light produces less heat than electricity, can travel faster and is far less susceptible to changes in temperature and electromagnetic fields.

Lightmatter, LightOn, Luminous Computing, Intel and NTT are among the companies developing photonic technologies. But while the technology generated much excitement a few years ago — and attracted a lot of investment — the sector has cooled noticeably since then.

There are various reasons why, but the general message from investors and analysts studying photonics is that photonic chips for AI, while promising, aren’t the panacea they were once believed to be.

Xavier ‘X’ Jernigan, the voice of Spotify’s DJ, explains what it’s like to become an AI

Lauren Forristal

In March, Spotify launched its first AI-powered feature with the debut of its AI DJ — a smart audio guide with a convincingly realistic voice. That AI persona was actually based on a real person, as it turns out — Spotify’s head of Cultural Partnerships, Xavier “X” Jernigan, who had the honor of becoming the first voice model for the AI feature.

TechCrunch sat down with Jernigan to learn more about the process for training the AI and Spotify’s future plans for its AI DJ efforts.

The new AI DJ personalizes the music listening experience for listeners, curating a selection of music based on their interests. It also has spoken commentary about each song — much like a real radio host.

In addition to Jernigan’s primary role at Spotify, he’s also the host of various Spotify podcasts, including “The Window,” “Showstopper” as well as the now-defunct podcast “The Get Up.” So, he’s used to having his voice heard by millions of listeners. Still, having his voice memorialized as an AI is a unique experience.

Spotify chose Jernigan to be the first voice model because his “voice and personality resonated with a lot of our listeners already,” Jernigan told TechCrunch. “[The company was] fairly confident that I would resonate in this way as well.”

Spotify’s Morning Show, “The Get Up,” garnered nearly 6 million listeners and was a top 10 podcast on Spotify before it abruptly ended in 2022, demonstrating Jernigan’s pull.

Still, being the voice model for DJ was hard to wrap his head around at first, the podcast host admitted.

“I got pitched on being this voice model for DJ and my mind was blown when it was explained to me,” Jernigan told us. “Imagine if you’re hearing this for the first time you don’t have anything to look at and I’m just like, ‘Wait, what? It’s gonna be me but it’s not me, and it’s text and voice, but it’ll sound like me, and it’s AI?”

“For me, it was a new experience working with AI in this way. I was just blown away,” he added.

Spotify says its AI DJ was built using both Sonantic and OpenAI technologies.

Sonantic is an AI startup that Spotify acquired last year. The company’s tech was responsible for building AI-based realistic voices, including the one used for Val Kilmer’s voice in “Top Gun: Maverick.”

Prior to the acquisition, Spotify spent a few years researching AI-powered technology and worked on the DJ feature “in some iteration,” Jernigan noted. He declined to share exactly how long the process took but said integrating the Sonantic technology “really kicked it into high gear.”

Jernigan explained the process of training the AI, which entailed going into a studio, reading off a script and speaking in various cadences and inflections to convey different emotions. He fed the AI certain words that only he uses to make it feel as authentic as possible.

“We use words that I say… I don’t say ‘tunes’ for songs. That’s just not how I talk,” he said. “I say, ‘hits’ or ‘bangers.’ So, you will hear DJ say those kinds of words,” Jernigan continued. “We even did a whole process of like, how do I say ‘hey,’ how do I say ‘hello.’ I carried around a notebook, and I would just write down these different phrases that were something I would say.”

He added that the Spotify team made sure to keep in his natural pauses and breaths so the AI voice would truly sound human-like.

Even Jernigan’s mom gave her stamp of approval to the results.

“[DJ] passed the mama test. I played it for her before it came out, explaining it to her and I’m trying to get her to wrap her mind around it,” he said. “She listened to all my podcasts, so she’s used to hearing my voice recorded and played before and she was like ‘That sounds exactly like you.’ My mama said it sounded like me, so I knew it was spot on.”

Although realistic AI voices already exist, we’d argue that Spotify’s DJ is the calmest and most chill-sounding compared with others we’ve heard. Though Google’s Duplex technology may sound authentic, it’s not necessarily a voice that’s nice to listen to when you’re trying to vibe out to your summer jam playlist.

“For me, doing the performance from a voice acting standpoint, my aim was to connect with people and to converse with people and to think about one person. So, when I was training the AI, I just pictured one person when I was in the studio, talking to them and being their friend,” he added.

In addition to making the AI voice sound friendly to listeners, the design of the DJ itself was also made to feel approachable.

The animated green circle that users see when listening to the DJ is a nod to the Spotify logo and moves like a mouth when the AI talks.

“When it came to the design, we thought about the entire experience — how it works, how it sounds, how it looks and how to make it personal for each user,” Emily Galloway, head of Product Design for Personalization at Spotify, told TechCrunch. “Early on for the visual side, we explored some options that felt more technical (imagine things like soundwaves). Yet this didn’t feel right since we wanted to humanize the AI…”

“We wanted to make it look and feel unique. In fact, it was so unique that it was awarded a design patent,” Galloway added.

Jernigan contributed to DJ in other ways besides recording his voice.

In order for the AI to provide expert commentary about the music, Spotify put together a writer’s room comprised of curators, culture experts and music experts.

Jernigan has an extensive background in music, so he was also a participant in the writer’s room. He previously worked for top artists like Diddy, Amy Winehouse and 2 Chainz, among others.

And while Jernigan is the first voice model for DJ, there’s the potential for listeners to hear more voices in the future.

TechCrunch asked Jernigan if the company had any plans to hire voice models that speak other languages.

“Stay tuned,” he hinted.

The AI DJ is currently only available in English for Premium subscribers in the U.S. and Canada. As of February, the DJ feature is still in beta testing.

“We got a whole bunch of really cool new features coming out across the board,” Jernigan said. “We got really dope stuff that’s coming out.”

We all contribute to AI — should we get paid for that?

Connie Loizos

In Silicon Valley, some of the brightest minds believe a universal basic income (UBI) that guarantees people unrestricted cash payments will help them to survive and thrive as advanced technologies eliminate more careers as we know them, from white-collar and creative jobs — lawyers, journalists, artists, software engineers — to labor roles. The idea has gained enough traction that dozens of guaranteed income programs have been started in U.S. cities since 2020.

Yet even Sam Altman, the CEO of OpenAI and one of the highest-profile proponents of UBI, doesn’t believe that it’s a complete solution. As he said during a sit-down earlier this year, “I think it is a little part of the solution. I think it’s great. I think as [advanced artificial intelligence] participates more and more in the economy, we should distribute wealth and resources much more than we have and that will be important over time. But I don’t think that’s going to solve the problem. I don’t think that’s going to give people meaning, I don’t think it means people are going to entirely stop trying to create and do new things and whatever else. So I would consider it an enabling technology, but not a plan for society.”

The open question is what a plan for society should then look like, and computer scientist Jaron Lanier, a founder in the field of virtual reality, writes in this week’s New Yorker that “data dignity” could be an even bigger part of the solution.

Here’s the basic premise: Right now, we mostly give our data for free in exchange for free services. Lanier argues that in the age of AI, we need to stop doing this, that the powerful models currently working their way into society need instead to “be connected with the humans” who give them so much to ingest and learn from in the first place.

The idea is for people to “get paid for what they create, even when it is filtered and recombined” into something that’s unrecognizable.

The concept isn’t brand new, with Lanier first introducing the notion of data dignity in a 2018 Harvard Business Review piece titled, “A Blueprint for a Better Digital Society.”

As he wrote at the time with co-author and economist Glen Weyl, “[R]hetoric from the tech sector suggests a coming wave of underemployment due to artificial intelligence (AI) and automation.” But the predictions of UBI advocates “leave room for only two outcomes,” and they’re extreme, Lanier and Weyl observed. “Either there will be mass poverty despite technological advances, or much wealth will have to be taken under central, national control through a social wealth fund to provide citizens a universal basic income.”

The problem is that both “hyper-concentrate power and undermine or ignore the value of data creators,” they wrote.

Untangle my mind

Of course, assigning people the right amount of credit for their countless contributions to everything that exists online is not a minor challenge. Lanier acknowledges that even data-dignity researchers can’t agree on how to disentangle everything that AI models have absorbed or how detailed an accounting should be attempted. Still, Lanier thinks that it could be done — gradually.

Alas, even if there is a will, a more immediate challenge — lack of access — is a lot to overcome. Though OpenAI had released some of its training data in previous years, it has since closed the kimono completely, citing competitive and safety concerns. When OpenAI President Greg Brockman described to TechCrunch last month the training data for OpenAI’s latest and most powerful large language model, GPT-4, he said it derived from a “variety of licensed, created, and publicly available data sources, which may include publicly available personal information,” but he declined to offer anything more specific.

Unsurprisingly, regulators are grappling with what to do. OpenAI — whose technology in particular is spreading like wildfire — is already in the crosshairs of a growing number of countries, including the Italian authority, which has blocked the use of its popular ChatGPT chatbot. French, German, Irish and Canadian data regulators are also investigating how it collects and uses data.

Margaret Mitchell, an AI researcher who was formerly Google’s AI ethics co-lead, tells the outlet Technology Review that it might be nearly impossible at this point for all these companies to identify individuals’ data and remove it from their models.

As explained by the outlet: OpenAI would be better off today if it had built in data record-keeping from the start, but it’s standard in the AI industry to build datasets for AI models by scraping the web indiscriminately and then outsourcing some of the clean-up of that data.

How to save a life

If these players have a limited understanding of what’s now in their models, that’s a daunting challenge to the “data dignity” proposal of Lanier.

Whether it renders it impossible is something only time will tell.

Certainly, there is merit in determining some way to give people ownership over their work, even if that work is made outwardly “other” by the time a large language model has chewed through it.

It’s also highly likely that frustration over who owns what will grow as more of the world is reshaped by these new tools. Already, OpenAI and others are facing numerous and wide-ranging copyright infringement lawsuits over whether or not they have the right to scrape the entire internet to feed their algorithms.

Either way, it’s not just about giving credit where it’s due; recognizing people’s contribution to AI systems may be necessary to preserve humans’ sanity over time, suggests Lanier in his New Yorker piece.

He believes that people need agency, and as he sees it, universal basic income “amounts to putting everyone on the dole in order to preserve the idea of black-box artificial intelligence.”

Meanwhile, ending the “black box nature of our current AI models” would make an accounting of people’s contributions easier — which would make them more inclined to stay engaged and continue making contributions.

It might all boil down to establishing a new creative class instead of a new dependent class, he writes. And which would you prefer to be a part of?

What is Auto-GPT and why does it matter?

Kyle Wiggers

Silicon Valley’s quest to automate everything is unceasing, which explains its latest obsession: Auto-GPT.

In essence, Auto-GPT uses the versatility of OpenAI’s latest AI models to interact with software and services online, allowing it to “autonomously” perform multi-step tasks. But as we are learning with large language models, this capability seems to be as wide as an ocean but as deep as a puddle.

Auto-GPT — which you might’ve seen blowing up on social media recently — is an open source app created by game developer Toran Bruce Richards that uses OpenAI’s text-generating models, mainly GPT-3.5 and GPT-4, to act “autonomously.”

There’s no magic in that autonomy. Auto-GPT simply handles follow-ups to an initial prompt of OpenAI’s models, both asking and answering them until a task is complete.

Auto-GPT, basically, is GPT-3.5 and GPT-4 paired with a companion bot that instructs GPT-3.5 and GPT-4 what to do. A user tells Auto-GPT what their goal is and the bot, in turn, uses GPT-3.5 and GPT-4 and several programs to carry out every step needed to achieve whatever goal they’ve set.
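The loop can be sketched in a few lines. The planner below is a stub that replays a fixed script; a real agent would call an OpenAI model at that point and actually execute the commands it returns.

```python
# A stub planner replaying a fixed script; a real agent would send the
# history to a language model and get the next command back instead.
def fake_llm(history):
    script = ["search smartphones", "compare top five", "DONE"]
    steps_done = sum(1 for h in history if h.startswith("result:"))
    return script[steps_done]

def run_agent(goal, max_steps=10):
    """Loop: ask the planner for a command, 'execute' it, record the
    result, and stop when the planner declares the task complete."""
    history = [f"goal: {goal}"]
    for _ in range(max_steps):
        command = fake_llm(history)
        if command == "DONE":
            break
        history.append(f"result: executed '{command}'")  # tool-use stub
    return history

print(run_agent("find the best smartphone"))
```

The `max_steps` cap matters: without it, an agent whose planner never says it is done will loop (and bill API calls) indefinitely.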

What makes Auto-GPT reasonably capable is its ability to interact with apps, software and services both online and local, like web browsers and word processors. For example, given a prompt like “help me grow my flower business,” Auto-GPT can develop a somewhat plausible advertising strategy and build a basic website.

#AutoGPT is the new disruptive kid on the block- It can apply #ChatGPT's reasoning to broader, more intricate issues requiring planning & multiple steps.

Still early but very impressive with many health and biomedicine applications.

Just tried #AgentGPT and asked it to…

— Daniel Kraft, MD (@daniel_kraft) April 12, 2023

As Joe Koen, a software developer who’s experimented with Auto-GPT, explained to TechCrunch via email, Auto-GPT essentially automates multi-step projects that would’ve required back-and-forth prompting with a chatbot-oriented AI model like, say, OpenAI’s ChatGPT.

“Auto-GPT defines an agent that communicates with OpenAI’s API,” Koen said. “This agent’s objective is to carry out a variety of commands that the AI generates in response to the agent’s requests. The user is prompted for input to specify the AI’s role and objectives prior to the agent starting to carry out commands.”

In a terminal, users describe the Auto-GPT agent’s name, role and objective and specify up to five ways to achieve that objective. For example:

  • Name: Smartphone-GPT
  • Role: An AI designed to find the best smartphone
  • Objective: Find the best smartphones on the market
  • Goal 1: Do market research for different smartphones on the market today
  • Goal 2: Get the top five smartphones and list their pros and cons

Behind the scenes, Auto-GPT relies on features like memory management to execute tasks, along with GPT-4 and GPT-3.5 for text generation, file storage and summarization.

Auto-GPT can also be hooked up to speech synthesizers, like ElevenLabs’, so that it can “place” phone calls, for example.

Auto-GPT is publicly available on GitHub, but it does require some setup and know-how to get up and running. To use it, Auto-GPT has to be installed in a development environment like Docker, and it must be registered with an API key from OpenAI — which requires a paid OpenAI account.

It might be worth it — although the jury’s out on that. Early adopters have used Auto-GPT to take on the sorts of mundane tasks better delegated to a bot. For example, Auto-GPT can field items like debugging code and writing an email or more advanced things, like creating a business plan for a new startup.

“If Auto-GPT encounters any obstacles or inability to finish the task, it’ll develop new prompts to help it navigate the situation and determine the appropriate next steps,” Adnan Masood, the chief architect at UST, a tech consultancy firm, told TechCrunch in an email. “Large language models excel at generating human-like responses, yet rely on user prompts and interactions to deliver desired outcomes. In contrast, Auto-GPT leverages the advanced capabilities of OpenAI’s API to operate independently without user intervention.”

In recent weeks, new apps have emerged to make Auto-GPT even easier to use, like AgentGPT and GodMode, which provide a simple interface where users can input what they want to accomplish directly on a browser page. Note that both require an API key from OpenAI to unlock their full capabilities.

Like any powerful tool, however, Auto-GPT has its limitations — and risks.

AutoGPT just exceeded PyTorch itself in GitHub stars (74k vs 65k). I see AutoGPT as a fun experiment, as the authors point out too. But nothing more. Prototypes are not meant to be production-ready. Don't let media fool you – most of the "cool demos" are heavily cherry-picked: 🧵 pic.twitter.com/I44H7BkCqr

— Jim Fan (@DrJimFan) April 16, 2023

Depending on what objective the tool’s provided, Auto-GPT can behave in very… unexpected ways. One Reddit user claims that, given a budget of $100 to spend within a server instance, Auto-GPT made a wiki page on cats, exploited a flaw in the instance to gain admin-level access and took over the Python environment in which it was running — and then “killed” itself.

There’s also ChaosGPT, a modified version of Auto-GPT tasked with goals like “destroy humanity” and “establish global dominance.” Unsurprisingly, ChaosGPT hasn’t come close to bringing about the robot apocalypse — but it has tweeted rather unflatteringly about humankind.

Arguably more dangerous than Auto-GPT attempting to “destroy humanity,” though, are the unanticipated problems that can crop up in otherwise perfectly normal scenarios. Because it’s built on OpenAI’s language models, which, like all language models, are prone to inaccuracies, it can make errors.

That’s not the only problem. After successfully completing a task, Auto-GPT usually doesn’t recall how to perform it for later use, and even when it does, it often won’t remember to use that program. Auto-GPT also struggles to effectively break complex tasks into simpler sub-tasks and has trouble understanding how different goals overlap.
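The recall problem can be pictured with a toy sketch (again, not Auto-GPT’s implementation; the class and task names are invented): an agent that records a procedure after completing a task can reuse it later, which is exactly the step Auto-GPT tends to miss.

```python
# Toy illustration of task-procedure memory. An agent that writes its
# procedure to memory can reuse it on the next request; Auto-GPT often
# either skips the write or forgets to consult memory on the next run.

class TaskMemory:
    def __init__(self):
        self._procedures = {}  # task name -> recorded steps
        self.cache_hits = 0

    def perform(self, task: str) -> list[str]:
        if task in self._procedures:       # remembered: reuse prior steps
            self.cache_hits += 1
            return self._procedures[task]
        steps = [f"figure out {task}", f"do {task}"]  # costly re-derivation
        self._procedures[task] = steps     # record the procedure for reuse
        return steps

memory = TaskMemory()
memory.perform("write an email")
memory.perform("write an email")  # second call should hit memory
print(memory.cache_hits)
```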

“Auto-GPT illustrates the power and unknown risks of generative AI,” Clara Shih, the CEO of Salesforce’s Service Cloud and an Auto-GPT enthusiast, said via email. “For enterprises, it is especially important to include a human-in-the-loop approach when developing and using generative AI technologies like Auto-GPT.”

This Week in Apps: Apple ‘sherlocks’ journaling apps, Twitter’s checkmark apocalypse, Snap summit recap (Sarah Perez @sarahintampa / 4 days)

Welcome back to This Week in Apps, the weekly TechCrunch series that recaps the latest in mobile OS news, mobile applications and the overall app economy.

The app economy in 2023 hit a few snags, as consumer spending dropped for the first time last year, falling 2% to $167 billion, according to data.ai’s “State of Mobile” report. However, downloads continued to grow, up 11% year-over-year in 2022 to reach 255 billion. Consumers are also spending more time in mobile apps than ever before. On Android devices alone, hours spent in 2022 grew 9%, reaching 4.1 trillion.

This Week in Apps offers a way to keep up with this fast-moving industry in one place with the latest from the world of apps, including news, updates, startup fundings, mergers and acquisitions, and much more.


Top Stories

Apple plans its next “sherlock”: journaling apps

My hope is this brings millions more into journaling, and those who are serious about it will upgrade to a serious paid tool like Day One. https://t.co/yJe867frze

— Matt Mullenweg (@photomatt) April 21, 2023

Apple is planning to “sherlock” a new class of applications if a new report from The Wall Street Journal holds true. The paper reported Apple is planning to introduce an iPhone journaling application as part of its expansion of health initiatives. The new app, which is unnamed, would challenge those on the market like Day One (acquired by WordPress.com maker Automattic in 2021). The WSJ said a document describing Apple’s app noted journaling helps to improve mental and physical well-being.

The app is reportedly set to arrive with the launch of iOS 17 and would put Apple again in the crosshairs of regulatory scrutiny. The company has come under fire in recent years for its habit of lifting ideas from the wider app developer and partner community. The practice has become so common, it’s got its own name — sherlocking — a reference to Apple software that started this trend decades ago.

The timing of this move is worth noting. Apple is currently under DoJ investigation for alleged anticompetitive behavior in the App Store and in other business practices. The DoJ has spoken to companies that have been victims of “sherlocking” as part of its inquiry, including Tile, whose business was hit by the launch of Apple’s AirTag. The Justice Dept. has also spoken to other app developers, including smaller companies like Basecamp and parental control software maker Mobicip, as well as bigger developers like Match and Spotify, about Apple’s App Store terms.

For Apple to now launch yet another app that competes with a number of third-party developers shows Apple is not worried much about the regulatory pressure and isn’t adjusting its behavior.

Related to this, The WSJ also recently ran a feature on Apple’s “kiss of death,” citing partners who detailed what it felt like when the tech giant came calling. After initially being excited by the prospect of an Apple partnership, many partners now say Apple has stolen their ideas for itself, hurting their own businesses.

Twitter’s Checkmark Apocalypse has arrived — and it’s quite the debacle


Image Credits: Bryce Durbin / TechCrunch

Twitter has finally made good on its promise to yank its users’ verification checkmarks from their profiles in what has to be one of the more ridiculous decisions Elon Musk has made to date since taking ownership of the social media platform.

Seemingly not understanding the value of the company he owns, Musk believes that no one should be verified unless they’re paying Twitter. But in reality, the verification service was a resource provided to Twitter’s community that added value. The blue checkmark symbol indicated that a high-profile figure, celebrity, institution or journalist was who they said they were and not an impersonator.

Twitter’s legacy blue check mark era is officially over

Twitter is not a curated, visual platform like Instagram, where a verification mark (which you can also now pay for!) provides an influencer with clout or bragging rights. Instead, Twitter is a network that’s centered around the rapid-fire dissemination of news and information in real time. The checkmark meant the source had been already vetted to be the real person, organization or official in question, allowing for faster fact-checks. This aids in newsgathering and establishes a baseline of trust across the platform.

But of course, Musk doesn’t understand this.

He holds journalism in such low regard that he went around adding “state-affiliated media” and then “government-funded” labels to the profiles of news outlets like PBS, NPR, CBC, BBC and others, lumping these editorially independent news-gathering organizations in alongside state-run media entities like the Kremlin-backed Russia Today. Some of the news organizations finally left Twitter; more should do the same, in fact. (No, I don’t control TC’s social media efforts.)

It’s unclear what’s happening with those labels now, as they’re disappearing from accounts on Friday, including those of China state-affiliated media.

Musk historically has demonstrated a callous disregard for journalism, calling The NYT “fake,” while tweeting out actual fake news himself. He also has Twitter’s comms email respond to press inquiries with a poop emoji. For that reason, it’s almost funny to watch Musk run headfirst into a wall with his complete mishandling of such a pivotal Twitter feature.

After all, if Musk had wanted to generate revenue from Twitter power users, he could have done so by giving ID-verified users their own checkmarks, perhaps with a different color scheme, that provided the set of special features and timeline prioritization that Twitter is now selling with its Blue subscription. That would have added value without disrupting the existing system.

Chaos reigns after Twitter’s blue checks vanish

Instead, he’s again created chaos by removing checkmarks from almost everyone, allowing for impersonation and, in some cases, the spread of dangerous misinformation. On top of that, he left legacy checkmarks on some high-profile accounts, like LeBron James and Stephen King, both of whom said they would not pay for Twitter Blue. It was a power play, clearly. If the celebs don’t leave, they’re tacitly confirming they’ve accepted the new system.

In addition, Twitter is being dishonest about who is truly a paid Twitter Blue subscriber.

Yours truly paid for Blue earlier this year to fact-check a story and then immediately canceled. I now continue to have a checkmark despite the subscription’s expiration in February, as documented below. (Don’t bother to follow me on Twitter, by the way; I’m on Bluesky, T2, Post and Mastodon.)

I briefly paid for Blue earlier this year to check something out I needed to confirm. I immediately canceled. The subscription expired on Feb. 18, 2023. Yet Twitter would like you to believe that I am a continued Blue subscriber. That's not true. pic.twitter.com/N6eXQRY3Pr

— Sarah Perez – please use Signal not DMs (@sarahintampa) April 20, 2023

Along with the checkmark removals, Twitter has also now begun pressuring advertisers to either pay for Twitter Blue or Verified Organizations to continue running ads on the platform. Those businesses that already spend over $1,000 per month will have gold checks automatically, Twitter said.

Snap’s Partner Summit focuses on Shopping, AR and AI

Image Credits: Snap

Snap this week hosted its Partner Summit where it shared a number of features, updates and initiatives in areas like e-commerce, AR and AI. The company also used the time to introduce a range of consumer-facing updates for its Snapchat mobile app.

At the event, CEO Evan Spiegel commented on the proposed TikTok ban in the U.S., joking at first that Snap would “love that,” but noting that such a ban sets a dangerous precedent for other social platforms. Though he acknowledged there could be national security concerns, the exec, like Zuckerberg, also pushed for tech regulation.

“It is important for us to be thoughtful and really develop a regulatory framework to deal with security concerns, especially around technology,” Spiegel said.

Snap CEO Evan Spiegel on TikTok ban: ‘We’d love that’

Among the other event highlights and news:

  • Snap said the Snapchat+ paid subscription now has over 3 million users. That’s up from 2 million in February and 1 million last August.
  • Snap opened its Public Story revenue share program to creators with at least 50,000 followers and 25,000 monthly Snap views who post at least 10x per month.

Public Profiles on Snapchat

Image Credits: Snapchat

  • Snapchat added new Story modes like “After Dark” for posts after 8 pm and “Communities,” which lets users interact with people at their school.
  • Snapchat updated its flashback feature Memories to show friends what they were doing on a given day exactly a year ago.
  • The Snap Map will start suggesting places that Snap thinks users would like. A new “Popular Last Night” tag will also show people where their friends were hanging out.
  • Snapchat is adding an interactive Lens that lets users complete puzzles and play games together while they’re face-to-face on a video call.

Snapchat's calling Lens

Image Credits: Snapchat

  • Snap also announced new AR Lenses powered by generative AI, starting with a new “Cosmic Lens” that turns you and your surroundings into an immersive, animated sci-fi scene. The move follows TikTok’s recent, successful launch of the AI filter, “Bold Glamour.” The app will also use AI to recommend Lenses based on the photo or video users provide.

Snapchat's new AI lens

Image Credits: Snapchat

  • Bitmoji’s avatar style is being updated with a more expressive look, featuring realistic dimensions, shading and lighting.
  • Snap’s enterprise biz, ARES, introduced AR Mirrors — a way to bring AR experiences to real-world locations, like retail stores. Men’s Wearhouse and Nike have used its AR Mirrors in stores and Coca-Cola is building a prototype drink machine with Snap that lets consumers use hand gestures to control the screen.

Image Credits: Snap

  • Snapchat announced its AI chatbot, My AI, is now free for all Snapchat users globally instead of only Snapchat+ subscribers, as before. However, Snap is also rolling out a subscriber-only My AI feature which will see the chatbot able to “Snap” you back using generative AI to create photos. The AI chatbot can also now be added to group chats with an @mention, make recommendations for places on Snap Map, suggest Lenses and send chat replies when you send it Snaps.

Image Credits: Snap

Platform News

Apple

  • Apple is introducing a new feature that will reduce the burden on app developers when it comes to solving subscription billing issues. Often, when an app’s subscribers have a payment method that fails, they’ll turn to the app developer for help. But the developer doesn’t handle billing issues for their App Store apps — those are managed by Apple itself. Now, Apple says a new warning message will appear to prompt users inside the app when their payment method fails, meaning they’ll no longer need to bother the developer for help with this common issue.

  • Apple is rumored to be developing VR apps and services for its upcoming mixed-reality headset in categories like gaming, fitness, live sports and collaboration.
  • Researchers said they found evidence that Apple’s Lockdown Mode has helped block an attack by hackers using spyware made by the infamous mercenary hacking provider NSO Group.
  • Apple launched its Apple Card savings account inside Apple Wallet offering an attention-getting 4.15% APY. The accounts are open to Apple Card holders in the U.S. and are technically managed by Goldman Sachs, so they have FDIC protections.
  • Apple Watch’s software is due to get its biggest update since its release, according to a new report by Bloomberg’s Mark Gurman. Details were sparse but we expect to hear more at WWDC.

Google

  • Google Play will tell users to update their buggy, crashing apps: Google announced a new Play Store feature that will prompt users to update developers’ apps if the app crashes in the foreground and there’s a more stable version of the outdated app already available for download. The feature will apply to phones and tablets running Android 7.0 (SDK level 24) and above. Developers don’t need to do any integration work to take advantage of the feature, which is enabled automatically when Google Play determines a newer version of the app has a statistically relevant, lower crash rate.

  • Ahead of Google I/O, a leak is suggesting the upcoming Google Pixel Tablet will be priced around €600-650 ($658.63-$713.52 if converted directly to USD) — pricier than rivals — and the dock will cost around $120.
  • Google shared a number of updates to help app publishers increase revenue and grow their businesses with AdMob, including those around inventory access, bidding, revenue optimization and more.
  • Google Play Points can now get you more stuff. The company this week announced changes to the program which rewards users with points for making purchases on Google Play to now include app offers — like $10 off DoorDash or Instacart; Google merchandise (like Chrome dino game socks!); in-game items and coupons; and Google Play Credit for making in-app purchases, apps, books and subscriptions.

Image Credits: Google

App Updates

Messaging & Communications

  • Telegram’s latest update brings shareable chat folders, custom wallpapers and other features to users. The app’s chat folders can now be shared with a link, the company says, allowing users to invite friends or colleagues to dozens of work groups, collections of news channels and more.
  • Google Fi, the tech giant’s carrier service, is being rebranded to Google Fi Wireless and gaining new features, including the ability to add on a Pixel Watch or Samsung Galaxy Watch to their plan at no extra charge. Users can also get a free phone for adding a line if they agree to stay with the service for 24 months, among other things. The options are available from the Google Fi mobile app and website, where consumers manage their service.
  • The company behind the popular iPhone customization app Brass and others launched an AI chat app called Superchat, which allows iOS users to chat with virtual characters powered by OpenAI’s ChatGPT. Other companies already offer AI chats with characters in more advanced ways, including D-ID. Meanwhile, the developer of another AI chat app called Superchat says their concept was ripped off by another Superchat app before they could launch. “Super chat” is not a unique name, though, as it’s well-known as YouTube’s paid live chatting feature for creators and fans.

Gaming

  • Roblox’s reach into a slightly older demographic is expanding, data shows. The gaming platform maker’s 17-to-24 age group has grown 33% year-over-year as kids are aging up but remaining on the platform.
  • Netflix is launching a follow-up to the supernatural thriller Oxenfree after acquiring the studio behind the game (Night School Studio) in 2021. The company says Oxenfree II: Lost Signals will arrive on July 12 on Netflix, Nintendo Switch, PS4/PS5 and Steam. Netflix recently announced it has 40 games slated for launch this year and has 70 in development with its partners.
  • Netflix also just hired former Halo Infinite creative head Joseph Staten to develop a multi-platform AAA title for the Netflix Games division. Staten will serve as a creative director at Netflix, he announced in a tweet, adding that his work will focus on developing original IP.
  • Meta opened up its social VR space Horizon Worlds to teen users aged 13 to 17 after originally keeping it to 18 and up. The company said as part of its expansion it would include age-appropriate protections and safety defaults. Children’s rights activists had earlier urged Meta to abandon its plans to court younger users.
  • Niantic announced a partnership with Capcom to launch a game within the Monster Hunter franchise later this year. The new mobile title will come to both iOS and Android and will have players hunt monsters in the real world.

The hunt is on. “Monster Hunter Now” launches September 2023. @MH_Now_En @CapcomUSA_ 🗡 https://t.co/EPA8vRXt8X 🗡 #monsterhunternow pic.twitter.com/oudIHJ90iD

— Niantic (@NianticLabs) April 18, 2023

Social

Image Credits: Meta

  • Instagram said users can now add up to five links to their profiles, in a move that challenges Linktree, Beacons and numerous other “link in bio” solution providers.
  • Reddit is shifting to a paid subscription model for API access, impacting app developers like the makers of the popular Reddit app, Apollo. The change will likely mean most third-party apps will need to shift to their own subscription model going forward. The company’s decision has to do with the demand for data to train AI models like OpenAI’s ChatGPT and others. “The Reddit corpus of data is really valuable…we don’t need to give all of that value to some of the largest companies in the world for free,” said Reddit CEO Steve Huffman.
  • The Verge does a deep dive into ActivityPub, the open source, decentralized social networking protocol powering Mastodon and the wider Fediverse. Want to get up to speed on the state of the Fediverse and its potential? This is a good place to start.
  • Fiction apps Wattpad and Yonder are now being overseen by KB Nam, previously head of Strategy and Research at their parent company Naver Webtoon. Nam will report directly to Webtoon Americas president, Ken Kim.
  • The Jack Dorsey-backed Twitter alternative Bluesky arrived on Android but remains invite-only. The community has around 20,000 users but the app has been downloaded 240,000 times on iOS to date.

Image Credits: Bluesky

  • Magazine app Flipboard is furthering its investment in the Fediverse with its newly launched “editorial desks” that curate news for the Mastodon community. Initially, the company will launch four desks — News, Tech, Culture and Science — which it says won’t be automated by bots but will instead be staffed by professional curators who have expertise in discovering and elevating interesting content.
  • Pinterest hired a Google Pixel VP to fill its CPO position. Sabrina Ellis spent the last 12 years at Google, where she led the work on Google Pixel. Previously, she spent eight years at Yahoo in numerous leadership roles. She will replace Pinterest’s current senior vice president of product, Naveen Gavini.
  • Imgur plans to ban explicit images starting on May 15, while still allowing artistic nudity. The company says the service will adopt a mix of automatic and human moderation. The changes may impact NSFW subreddits (communities) on Reddit that allow explicit images. The MediaLab-owned company cited the risk explicit content posed to Imgur’s “community and its business” as the reason for the move.

Streaming and Entertainment

Spotify and BeReal integration

Image Credits: Spotify

  • Spotify announced it will now work with BeReal to allow the social app’s users to share what they’re listening to on Spotify through a new integration. After connecting your accounts, BeReal will automatically pull in the song or podcast you’re listening to on Spotify at the time you capture a BeReal.
  • Creator company Jellysmack is partnering with Spotify to bring its creators to the streaming platform. A selection of its creators will upload weekly video podcast episodes to the service, including Ed Bolian (VINwiki), Audit the Audit, Christina Randall, Brooke Makenna, and Jessica Kent.
  • Cameo introduced Cameo Collage, a free group-gifting feature. Gift givers can now combine celebrity Cameo videos with more personalized videos, images, GIFs and written messages from friends and family to create a digital collage for the recipient.
  • Netflix in its Q1 earnings said it would begin its password-sharing crackdown in the U.S. and other countries this summer (Q2). It has already implemented the changes in Canada, New Zealand, Portugal and Spain.
  • Netflix reported mixed earnings with revenue of $8.16 billion, behind estimates of $8.18 billion. It reported higher-than-expected earnings of $2.88 per share in Q1, as analysts had anticipated $2.86 per share.

Transportation

Image Credits: Google / Waze

  • Waze on Google built-in has come to Volvo Cars and Polestar 2 cars. After a one-time setup, Volvo and Polestar drivers can access Waze’s real-time routing, navigation, alerts, settings, preferences and saved places on a bigger, eye-level display.

Health & Fitness

  • Marvel announced a new mobile fitness app, Marvel Move, featuring immersive audio-based running routines with popular Marvel Comics characters. The app, part of a collaboration with Six to Start, co-creator of the popular fitness app Zombies, Run!, includes five storylines to choose from including Thor & Loki, X-Men, The Hulk, Daredevil and Doctor Strange and the Scarlet Witch.

News

  • Samsung launched its own take on Apple News with its new “Samsung News” app that gives users access to everyday news from a variety of publications. The app will replace the company’s current “Samsung Free” app, and includes custom news feeds in addition to morning and evening briefings about the top news of the day.

Government, Policy and Lawsuits

  • WhatsApp, Signal, Viber, Wire and other encrypted messaging apps signed an open letter asking the U.K. government to “urgently rethink” its Online Safety Bill legislation, which they say will force tech companies to break end-to-end encryption on private messaging services, weakening the “privacy of billions of people around the world.”
  • Google has asked the court to dismiss multiple claims in its antitrust trial with Epic Games, Match, state AGs and others. In a new filing, Google’s legal team is now asking the court to dismiss several of the plaintiffs’ arguments regarding the nature of its app store business, revenue-sharing agreements and other app store-related projects in a partial motion for summary judgment. Google believes the court should now have enough information on hand to make determinations on a handful of the plaintiffs’ claims before the case goes to trial, saying that these items are not in violation of antitrust law. If the court agrees with Google’s position, the trial would still move forward as other claims would still need to be argued in court.

Google asks court to dismiss multiple claims in Epic Games antitrust trial

  • The U.K. Competition and Markets Authority (CMA) opened a consultation on Google’s proposal to let developers use alternative payment methods for in-app purchases on Android, aka “User Choice Billing.” It’s inviting interested stakeholders to respond to Google’s proposal by May 19 and will then decide whether to accept the comments and resolve the case. Google is suggesting it cut its commission by 4% if the developer offers Google’s own billing alongside their own, but by only 3% if just third-party billing is offered.
  • Montana lawmakers approved a bill that would ban TikTok and would bar app stores from offering the app within the state, starting on January 1, 2024. It’s unclear how such a measure would be enforced as the app stores don’t offer a way to block distribution by state, only by country.

Funding and M&A

  • Starboard (Formerly Olympic Media) concluded the acquisition of right-wing Twitter alternative Parler and shut it down. The company said of the decision: “No reasonable person believes that a Twitter clone just for conservatives is a viable business any more [sic].” The Parler app will undergo a strategic assessment and it’s not clear what the company has in store for its future.
  • Epic Games expanded its Latin American footprint with its acquisition of the Brazilian game development studio Aquiris. The developer is best known for “Wonderbox: The Adventure Maker,” a magic-themed game-creation sandbox available on Apple Arcade.
  • Myxt, an audio file management platform for creators, raised $2 million in seed funding led by Accel Ventures and Quiet Capital. The startup offers a collaborative workplace app for audio creators that’s available on web, iOS and Android, where users can stream tracks, organize files and back up their library.
  • SoundHound closed on $100 million in strategic funding from Atlas Credit Partners as part of a new $125 million loan facility. The publicly traded company is using the money to refinance its debt and continue to fund its long-term strategy.
  • Japanese gaming giant Sega is acquiring Finland’s Rovio in an all-cash deal worth €706 million ($775 million). The deal is expected to close in Q2 of Rovio’s fiscal year (in the next couple of months). Sega’s offer represents a 63.1% premium on Rovio’s closing price on January 19.

Downloads

Wavelength

Wavelength group chat

Image Credits: Wavelength

An interesting new chat app called Wavelength has arisen out of the ashes of the social networking app Telepath, which shut down last year. The team has now shifted its focus to improving group chat experiences. Instead of splitting conversations across different group chats, it introduces threaded messaging combined with AI. The threads help keep group chats less cluttered by making it easier to follow multiple conversations at once.

In addition, users can add OpenAI’s GPT-3.5 to their group chats by mentioning @AI, making Wavelength among the first apps to offer group chatting with AI. Snapchat is now doing this with its My AI feature, as is Ghost, which allows groups to chat with ChatGPT.

The startup aims to focus on other areas as well, like privacy, moderation, discovery and more. Notably, John Gruber is an advisor for the app, which is currently iOS- and Mac-only.

You can read more about Wavelength here.

Wavelength is a new app trying to make group chat suck less

Nocam

A new social video app called Nocam has a radical idea to make social networking more authentic — it’s turning off the camera so you can’t see how you look while filming. The idea is to make capturing a moment feel natural while reducing the friction that comes with seeing a preview of your own image, which can often leave users hesitant to post or scrambling to add edits and filters to touch up their appearance.

Image Credits: Nocam

Nocam believes this concept better reflects the way people interact in real life, where we aren’t constantly faced with a mirror showing us what we look like. The company describes its app as BeReal meets TikTok. But perhaps it’s more accurate to say BeReal meets TikTok Challenges, as the app focuses on sending users fun or silly prompts they have to act out with the camera, like doing a dance or just showing what they’re up to. Users can prompt their friends, too.

You can read more about Nocam here.

Nocam unveils a social video app that’s like BeReal meets TikTok challenges

Proton Pass

A screenshot of the browser extension of Proton Pass

Image Credits: Proton

Proton, the maker of the end-to-end encrypted email service Proton Mail, Proton VPN, Proton Drive and Proton Calendar, this week launched a new password manager called Proton Pass. Everything stored in the app is end-to-end encrypted, and Proton itself never has access to your data. The beta version is live now for Proton users with a lifetime plan and will roll out to other subscribers and customers in the future.

The app grew out of the company’s acquisition of SimpleLogin, an email alias startup, and is available as a browser extension on desktop and as an iOS or Android app.

You can read more about Proton Pass here.

Proton announces Proton Pass, a password manager

Etc.

Does anyone wish they still had their old phone?

Snapchat sees spike in 1-star reviews as users pan the ‘My AI’ feature, calling for its removal (Sarah Perez @sarahintampa / 2 days)

The user reviews for Snapchat’s “My AI” feature are in — and they’re not good. Launched last week to global users after initially being a subscriber-only addition, Snapchat’s new AI chatbot powered by OpenAI’s GPT technology is now pinned to the top of the app’s Chat tab where users can ask it questions and get instant responses. But following the chatbot’s rollout to Snapchat’s wider community, Snapchat’s app has seen a spike in negative reviews amid a growing number of complaints shared on social media.

Over the past week, Snapchat’s average U.S. App Store review was 1.67, with 75% of reviews being one-star, according to data from app intelligence firm Sensor Tower. For comparison, across Q1 2023, the Snapchat average U.S. App Store review was 3.05, with only 35% of reviews being one-star.

The number of daily reviews has also increased by five times over the last week, the firm noted.

Another app data provider, Apptopia, reports a similar trend. Its analysis shows “AI” was the top keyword in Snapchat’s App Store reviews over the past seven days, where it was mentioned 2,973 times. The firm has given the term an “Impact Score” rating of -9.2. This Impact Score is a weighted index that measures the effect a term has on sentiment and ranges from -10 to +10.
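Apptopia doesn’t publish how the Impact Score is computed, so the following is a hypothetical illustration of a weighted sentiment index with the same -10 to +10 range, not the firm’s actual formula: map each mentioning review’s star rating to a sentiment in [-1, 1], average the sentiments, and scale.

```python
# Hypothetical illustration only; Apptopia's actual Impact Score formula
# is not public. This toy version weights a term's mentions by the star
# rating of the review they appear in and clamps the result to [-10, 10].

def toy_impact_score(mentions: list[int]) -> float:
    """mentions: star ratings (1-5) of reviews containing the term."""
    if not mentions:
        return 0.0
    # map 1..5 stars to sentiment in [-1, 1]: 1 star -> -1, 3 -> 0, 5 -> +1
    sentiments = [(stars - 3) / 2 for stars in mentions]
    score = 10 * sum(sentiments) / len(sentiments)
    return max(-10.0, min(10.0, round(score, 1)))

# Mostly one-star reviews mentioning a term push its score strongly negative.
print(toy_impact_score([1, 1, 1, 1, 2, 5]))
```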

Apptopia also said that Snapchat received around 3x more one-star ratings than usual on April 20, 2023. That’s the day after the My AI global release was announced.

Now, the number of one-star reviews is starting to come down a bit, but it remains elevated.

Image Credits: Apptopia (analysis of Snapchat app ratings)

The backlash against Snapchat’s My AI comes at a time when the hype around AI is at an inflection point. Companies are weighing how to integrate AI into their businesses, not if they should.

For Snap, adding an AI chatbot to its social app must have seemed like a smart move: dozens of AI chatbot apps are filling the app stores and raking in millions of dollars, a signal that could easily be read as growing consumer demand for social AI chat experiences.

But many Snapchat users aren’t thrilled with My AI, which appeared inside their app without warning or their consent.

Image Credits: Snapchat screenshot

To some extent, it’s the chatbot’s placement that’s the cause of concern.

My AI is pinned to the top of users’ Chat feed inside the app and can’t be unpinned, blocked or removed, as other conversations can be. This feed is where Snapchat users have regular interactions with friends and isn’t necessarily a place they want to toy around with experimental features. Plus, Snapchat already has an established presence in this feed with its own “Team Snapchat” chats, and now it’s doubling the screen real estate it wants to take for itself — or, at least, that’s how some users see it.

It’s not difficult to find complaints about the My AI feature on social media. A simple search for “My AI” on Twitter, for instance, will reveal numerous results. Users are also sharing their complaints with Snapchat directly.

After Snap announced the new chatbot in a tweet last week during its Partner Summit event, users took to the replies to air their grievances.

Can we have the option to delete it? Asking for the majority of us who didn’t request this feature

— ¿Bebé, Tré fue? (@tremcleod_) April 19, 2023

In dozens of responses to Snap’s tweet, users are fully panning the AI bot. They’re saying it should be opt-in only or that they should be given the option to remove it, instead of having it forced upon them. Some users are so upset they’re even threatening to quit Snapchat over this and delete the app entirely.

Many are also pushing back at the fact that removing the My AI from their Chat feed requires a Snapchat+ subscription. According to Snap’s own documentation, Snapchat+ subscribers will receive early access to new My AI features and have the ability to unpin My AI or remove it from their Chats.

This angers people who now feel like they’re being forced to pay Snapchat after it messed up their app with an unwanted feature.


Not only do users find the AI feature invasive; some also find it creepy.

They’re surprised to learn that Snapchat’s AI knows their location, for example, and can use that information in its responses, even if they’re not sharing their location on the Snap Map.

In a way, the AI bot is surfacing the level of personal data collection that social media companies do in the background, and putting it directly in front of the consumer. As it turns out, that’s not a great selling point when the users don’t feel they’ve specifically opted in to share that data with the AI.

This speaks to a larger debate now taking place around AI, as people are waking up to the fact that it’s our own data and our labor in creating information for the web that has allowed these AI systems to come into existence in the first place. Modern AIs are trained on large data sources, including licensed datasets as well as publicly available data from the internet and our personal information.

This post on fb is so creepy!! Absolutely just no, we don’t want it ‼️ pic.twitter.com/TsPYeOtAmO

— Madison Carroll (@MadisonCarroll0) April 21, 2023

In addition, Snapchat’s My AI had already been the subject of serious concerns before its public rollout.

While available as a subscriber-only feature, The Washington Post reported the bot was responding in an unsafe manner. After telling the bot the user was a 15-year-old, the AI made suggestions about how to mask the smell of alcohol and pot at a birthday party. It also wrote an essay for school for the teen. When the bot was told the user was 13, it responded to a question about how to set the mood when having sex for the first time, the paper reported.

Snap downplayed the claims at the time, saying some people had tried to “trick the chatbot into providing responses that do not conform to our guidelines.” However, it then rolled out new tools including age filters to keep the AI responses more age-appropriate, and promised parental controls were on the way.

Those parental controls were still not available at the time of My AI’s public launch and Snap gave no update as to when they could be expected.

Despite the numerous complaints, there were a handful of dissenters to the backlash over My AI.

“Am I the only one who loves it?,” asked one user in the replies to Snapchat’s tweet. Only one person responded to them, saying just “yo.”

Image Credits: Sensor Tower (analysis of Snapchat app reviews)

Digging into the negative review spike, it becomes clear that Snapchat’s app ratings don’t even tell the full story here.

A chart from Sensor Tower, for instance, shows that five-star reviews also spiked over the past few days alongside the one-star reviews where users were complaining about the My AI feature. That would lead one to believe that the AI feature is divisive, as opposed to being widely panned.

But a closer inspection of those five-star reviews indicates that many of them also include My AI complaints. For example, one threatens “Get rid of AI. Or I will change my review to a one star. Nobody at all wants AI on Snapchat.”

Image Credits: Sensor Tower (analysis of Snapchat app reviews)

Several other five-star reviews demand the ability to block or remove the AI, or call it creepy or “crap,” yet the users still rated the app five stars. It’s unclear if that’s due to user error, issues with Sensor Tower’s analysis or something else. In any event, a number of these “5-star” reviews should be considered negative reviews or complaints, based on their actual commentary.

Still, scrolling through the App Store reviews sorted by “Most Recent” shows how many complaints there are. Nearly all the new reviews have something to say about My AI, and the majority are not good.

Snapchat declined to comment on the situation but noted that Snapchat+ users sent nearly 2 million chats to the AI while in early testing.

The company says it’s constantly iterating on Snapchat’s features based on the community’s feedback but did not commit to removing the AI.

Instead, a Snapchat spokesperson said if users didn’t like the AI feature, they don’t have to use it.


Volvo Cars Tech Fund invests in driver monitoring startup CorrActions

Frederic Lardinois @fredericl / 2 days

CorrActions, an Israeli startup that, among other things, built a driver monitoring system that can understand a driver’s cognitive state, today announced that it has raised a strategic investment from Volvo Cars Tech Fund. According to the company, the target for this round is $6 million.

The idea behind CorrActions is to use sensors that are already built into the car to monitor the driver’s micro muscle movements. These movements, the company argues, can reflect brain activity that CorrActions’ algorithms can then evaluate to check whether the driver is tired, distracted or intoxicated. Simply by using a cell phone and watching users interact with an app, CorrActions CEO Ilan Reingold told me during a meeting in Israel earlier this year, the company can determine blood alcohol levels with 90% accuracy and zero false positives (CorrActions previously worked with Volkswagen on a proof of concept for this).

Image Credits: CorrActions

“Today, we’re working with several OEMs and large fleet managers to monitor the capabilities of the person behind the wheel,” Reingold noted, but he also stressed that companies building autonomous driving fleets could use this system to monitor the motion sickness level of passengers, for example, to manage how the car drives to ensure that passengers feel comfortable. That, however, would be powered by a camera-based system or millimeter wave radar.

Currently, the company mostly works with data from steering wheel sensors and pressure-based seat sensors, as well as motion data from apps used by fleet managers to communicate with their drivers.

“With the Tech Fund, we aim to be a strategic partner of choice for exciting startups that can help boost our position as a tech leader in our industry,” said Alexander Petrofski, head of the Volvo Cars Tech Fund. “CorrActions fits the bill perfectly and focuses on a mission that is close to our heart: making cars and traffic safer.”

Volvo notes that the company’s EX90 flagship electric SUV already includes numerous systems to understand a driver’s cognitive state. “The CorrActions technology is a highly relevant complement to our driver understanding system. As a result, we’ve decided to take a stake in CorrActions to support the further development and commercialization of its technology,” Volvo Cars Tech Fund explains in the funding announcement.

Founded in 2019 by neuropsychologist Elad Hochman and business executive Zvi Ginosar, the company previously raised a $2.7 million seed round in 2021. It brought on Reingold, who previously held a number of executive and R&D roles at startups and large enterprises like Broadcom and Sony, as CEO in June 2022.
