Apigee rolls out new AI-powered API protection features

Kyle Wiggers / 2 days

Timed to coincide with the annual RSA cybersecurity conference, Google Cloud announced updates to Apigee, its API management and predictive analytics service, designed to help prevent business logic attacks.

Business logic attacks exploit flaws in the design and implementation of an app to elicit unintended behavior. They can be tricky to identify — and very widespread. According to a study commissioned by Silver Tail Systems, 90% of companies lost revenue due to business logic attacks between 2011 and 2012.

To combat these types of exploits, Google is introducing new machine learning models in Apigee that it says were trained to detect potential business logic attacks. Google Cloud claims that the models — available to all Apigee Advanced API Security customers, and trained on internal Google data — are sensitive enough to detect subtle behavior like an attacker with control of a server shifting the “activity patterns” of said server.

“The machine learning models that power API abuse detection have been trained and used by Google’s internal teams to protect our public-facing APIs,” Shelly Hershkovitz, a product manager at Google Cloud, said in a blog post. “The models rely on years of learning and best practices.”

Alongside the models, Apigee is introducing dashboards that ostensibly more accurately identify API abuses by finding patterns within the large number of alerts. The dashboards attempt to “capture the essence” of attacks, as Hershkovitz puts it, along with important characteristics like the source of the attacks, the number of API calls and the duration of the attacks.

“With the growth of API traffic, enterprises across the world are also experiencing an uptick in malicious API attacks, making API security a heightened priority,” Hershkovitz continued. “We’re making it faster and easier to detect API abuse incidents.”

Image Credits: Apigee

To Hershkovitz’s point, it’s true that concerns over API security have grown — and are growing — in the enterprise. According to one survey (albeit one conducted by an API security vendor, full transparency), the end of 2022 saw a major spike in API attacks, with a 400% increase in volume from just a few months prior.

These attacks can be pricey. An Imperva analysis of almost 117,000 security incidents found that API insecurity costs organizations between $41 billion and $75 billion annually. And a separate report from the Open Worldwide Application Security Project suggests that small firms face the highest number of API security events, with most incidents affecting companies with less than $50 million in revenue — making each breach even more damaging to the bottom line.

Google’s own research — which must be taken with a grain of salt — shows that 50% of organizations have experienced an API security incident in the past 12 months; of those, 77% delayed the rollout of a new service or app.

“It’s vital that organizations detect and mitigate API abuse incidents early to prevent prolonged fiscal and reputational damage to the business,” Hershkovitz said. “API security incidents are increasingly common and disruptive.”

GitLab’s new security feature uses AI to explain vulnerabilities to developers

Frederic Lardinois @fredericl / 2 days

Developer platform GitLab today announced a new AI-driven security feature that uses a large language model to explain potential vulnerabilities to developers, with plans to expand this to automatically resolve these vulnerabilities using AI in the future.

Earlier this month, the company announced a new experimental tool that explains code to a developer — similar to the new security feature GitLab announced — and a new experimental feature that automatically summarizes issue comments. In this context, it’s also worth noting that GitLab launched both a code completion tool, which is now available to GitLab Ultimate and Premium users, and its ML-based suggested reviewers feature last year.

Image Credits: GitLab

The new “explain this vulnerability” feature will try to help teams find the best way to fix a vulnerability within the context of the code base. It’s this context that makes the difference here, as the tool is able to combine the basic info about the vulnerability with specific insights from the user’s code. This should make it easier and faster to remediate these issues.
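GitLab hasn’t published how the feature assembles its prompts, but the pattern it describes, pairing a scanner finding with the code it points at before querying a model, can be sketched in a few lines. The following is purely illustrative: the function, prompt and model choice are assumptions, not GitLab’s implementation.

```python
import openai  # assumes the pre-1.0 openai package; GitLab hasn't said which model backs the feature

def explain_vulnerability(finding: dict, source_snippet: str) -> str:
    """Combine a scanner finding with the affected code and ask an LLM for a
    remediation-focused explanation. Illustrative only."""
    prompt = (
        f"A security scanner reported: {finding['name']} "
        f"(severity: {finding['severity']}).\n"
        f"Description: {finding['description']}\n\n"
        f"Affected code:\n{source_snippet}\n\n"
        "Explain why this code is vulnerable and suggest a fix that preserves "
        "its behavior."
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # hypothetical choice; GitLab's actual backend is not public
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]
```

The interesting part is the second argument: without the surrounding source, the model can only restate the generic vulnerability description.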

The company calls its overall philosophy behind adding AI features “velocity with guardrails,” that is, the combination of AI code and test generation backed by the company’s full-stack DevSecOps platform to ensure that whatever the AI generates can be deployed safely.

GitLab also stressed that all of its AI features are built with privacy in mind. “If we are touching your intellectual property, which is code, we are only going to be sending that to a model that is GitLab’s or is within the GitLab cloud architecture,” GitLab CPO David DeSanto told me. “The reason why that’s important to us — and this goes back to enterprise DevSecOps — is that our customers are heavily regulated. Our customers are usually very security and compliance conscious, and we knew we could not build a code suggestions solution that required us sending it to a third-party AI.” He also noted that GitLab won’t use its customers’ private data to train its models.

DeSanto stressed that GitLab’s overall goal for its AI initiative is to 10x efficiency — and not just the efficiency of the individual developer but the overall development lifecycle. As he rightly noted, even if you could 100x a developer’s productivity, inefficiencies further downstream in reviewing that code and putting it into production could easily negate that.

“If development is 20% of the life cycle, even if we make that 50% more effective, you’re not really going to feel it,” DeSanto said. “Now, if we make the security teams, the operations teams, the compliance teams also more efficient, then as an organization, you’re going to see it.”

The “explain this code” feature, for example, has turned out to be quite useful not just for developers but also QA and security teams, which now get a better understanding of what they should test. That, surely, was also why GitLab expanded it to explain vulnerabilities as well. In the long run, the idea here is to build features to help these teams automatically generate unit tests and security reviews — which would then be integrated into the overall GitLab platform.

According to GitLab’s recent DevSecOps report, 65% of developers are already using AI and ML in their testing efforts or plan to do so within the next three years. Already, 36% of teams use an AI/ML tool to check their code before code reviewers even see it.

“Given the resource constraints DevSecOps teams face, automation and artificial intelligence become a strategic resource,” GitLab’s Dave Steer writes in today’s announcement. “Our DevSecOps Platform helps teams fill critical gaps while automatically enforcing policies, applying compliance frameworks, performing security tests using GitLab’s automation capabilities, and providing AI assisted recommendations – which frees up resources.”

AI app Petey uses ChatGPT to make Apple Music playlists for you

Sarah Perez @sarahintampa / 2 days

Petey, the mobile app that introduced ChatGPT to Apple Watch users, recently brought its feature set to the iPhone, allowing users to access its AI assistant more quickly and even swap out Siri with Petey using Apple’s Shortcuts. Now, Petey has a new trick up its sleeve. In its latest update, out today, the app can be connected to Apple Music, so it can make playlists for you or help you add individual songs to your Apple Music library.

The new feature arrives alongside several other updates, including the ability to access the latest AI model, GPT-4, through a paid “Petey Premium” subscription.

In addition to being a clever tool, Petey’s new Apple Music feature demonstrates the extent to which Apple and others could leverage AI to serve up recommendations within their own apps if they chose. It’s unclear if that will be the case with iOS 17, however, as reports have said it will be a more minor software update this time around.

To get Petey’s music recommendations, you simply type your request for a playlist into the app’s interface. For example, a request for 90s grunge returns expected results like Nirvana, Pearl Jam, Alice in Chains, Stone Temple Pilots, Soundgarden and others.

Image Credits: Petey

The app then lines up short previews of each recommended song below the returned playlist, allowing you to scroll through and sample each one. If you like a song, you can tap the three-dot “more” menu next to it to either listen to the full version in Apple Music or save the track to your Library.
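Petey hasn’t shared how it bridges the two services, but the flow it describes, an LLM proposing songs that are then resolved against Apple Music’s catalog, might look roughly like the sketch below. Everything here is an assumption: the model, the prompt and the token handling (Apple Music’s catalog search endpoint is public, but obtaining a developer token is omitted).

```python
import json
import requests
import openai

APPLE_DEV_TOKEN = "YOUR_APPLE_MUSIC_DEVELOPER_JWT"  # placeholder credential

def suggest_songs(theme: str) -> list[dict]:
    """Ask the model for playlist candidates as structured JSON."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # Petey Premium offers GPT-4; the model here is a guess
        messages=[{
            "role": "user",
            "content": (
                f'Suggest 10 songs for a "{theme}" playlist as a JSON array of '
                '{"artist": "...", "title": "..."} objects. Reply with JSON only.'
            ),
        }],
    )
    # Real code would validate this; models don't always return clean JSON.
    return json.loads(response["choices"][0]["message"]["content"])

def find_in_apple_music(song: dict) -> dict | None:
    """Resolve a suggestion against Apple Music's public catalog search API."""
    r = requests.get(
        "https://api.music.apple.com/v1/catalog/us/search",
        headers={"Authorization": f"Bearer {APPLE_DEV_TOKEN}"},
        params={
            "term": f"{song['artist']} {song['title']}",
            "types": "songs",
            "limit": 1,
        },
    )
    matches = r.json()["results"].get("songs", {}).get("data", [])
    return matches[0] if matches else None

for candidate in suggest_songs("90s grunge"):
    print(candidate, "->", bool(find_in_apple_music(candidate)))
```

Saving tracks or creating playlists in a user’s library additionally requires a Music-User-Token, which is why Petey asks users to connect their Apple Music accounts.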

You can also tap to “learn more” about the song, which opens NowPlaying, Petey developer Hidde van der Ploeg’s liner notes iOS app that offers various facts and details about songs, records and artists.

The product designer-turned-indie app maker launched the well-received NowPlaying app in 2021; its current 2.0 release arrived in November and, with this new feature’s arrival, now serves as a handy companion to Petey.

Beyond saving individual songs to your Apple Music Library, Petey also offers the option to create a playlist. With a tap of the “Create Playlist” button, users can give the AI-built playlist a name, then open it up in Apple Music’s app and begin listening.

Of course, these features require users to have an Apple Music subscription, as the service doesn’t have a free tier. Users also need either a Basic or Premium Petey subscription to use the app on iOS, or they can use their own OpenAI API key.

Image Credits: Petey

Other updates that are rolling out today alongside the Apple Music integrations and support for GPT-4 include other payment options for the Petey Basic subscription, alternative app icons and the ability to send messages to your phone from Apple Watch even if you don’t have a Basic subscription or your own API key.

Petey is a free download from the App Store but requires a subscription to use its features on iPhone. The app is gaining a bit of a following, as it’s now seen 24,700 total installs since its Apple Watch-only launch on March 8, the developer says. The iPhone version has been out only since April 6.

Google brings generative AI to cybersecurity

Kyle Wiggers / 2 days

There’s a new trend emerging in the generative AI space — generative AI for cybersecurity — and Google is among those looking to get in on the ground floor.

At the RSA Conference 2023 today, Google announced Cloud Security AI Workbench, a cybersecurity suite powered by a specialized “security” AI language model called Sec-PaLM. An offshoot of Google’s PaLM model, Sec-PaLM is “fine-tuned for security use cases,” Google says — incorporating security intelligence such as research on software vulnerabilities, malware, threat indicators and behavioral threat actor profiles.

Cloud Security AI Workbench spans a range of new AI-powered tools, like Mandiant’s Threat Intelligence AI, which will leverage Sec-PaLM to find, summarize and act on security threats. (Recall that Google purchased Mandiant in 2022 for $5.4 billion.) VirusTotal, another Google property, will use Sec-PaLM to help subscribers analyze and explain the behavior of malicious scripts.

Elsewhere, Sec-PaLM will assist customers of Chronicle, Google’s cloud cybersecurity service, in searching security events and interacting “conversationally” with the results. Users of Google’s Security Command Center AI, meanwhile, will get “human-readable” explanations of attack exposure courtesy of Sec-PaLM, including impacted assets, recommended mitigations and risk summaries for security, compliance and privacy findings.

“While generative AI has recently captured the imagination, Sec-PaLM is based on years of foundational AI research by Google and DeepMind, and the deep expertise of our security teams,” Google wrote in a blog post this morning. “We have only just begun to realize the power of applying generative AI to security, and we look forward to continuing to leverage this expertise for our customers and drive advancements across the security community.”

Those are pretty bold ambitions, particularly considering that VirusTotal Code Insight, the first tool in the Cloud Security AI Workbench, is only available in a limited preview at the moment. (Google says that it plans to roll out the rest of the offerings to “trusted testers” in the coming months.) It’s frankly not clear how well Sec-PaLM works — or doesn’t work — in practice. Sure, “recommended mitigations and risk summaries” sound useful, but are the suggestions that much better or more precise because an AI model produced them?

After all, AI language models — no matter how cutting-edge — make mistakes. And they’re susceptible to attacks like prompt injection, which can cause them to behave in ways their creators didn’t intend.

That’s not stopping the tech giants, of course. In March, Microsoft launched Security Copilot, a new tool that aims to “summarize” and “make sense” of threat intelligence using generative AI models from OpenAI including GPT-4. In the press materials, Microsoft — similar to Google — claimed that generative AI would better equip security professionals to combat new threats.

The jury’s very much out on that. In truth, generative AI for cybersecurity might turn out to be more hype than anything — there’s a dearth of studies on its effectiveness. We’ll see the results soon enough with any luck, but in the meantime, take Google’s and Microsoft’s claims with a healthy grain of salt.

Greywing’s new SeaGPT solves email overwhelm for maritime crew managers

Catherine Shu @catherineshu / 2 days

Every time a member of their crew changes, maritime crew managers need to handle immigration regulations, COVID requirements and travel plans for each person. This is usually done through emails with port agents, and can quickly lead to an overwhelming number of messages, sent across multiple time zones, especially if multiple people are leaving or onboarding. To simplify the process, Greywing, the Singapore-based maritime intelligence platform backed by investors like Flexport and Y Combinator, announced today it has developed SeaGPT, an AI chatbot based on GPT-4.

In a statement, Greywing CEO Nick Clarke said email overwhelm is the top problem crew managers ask them to fix. SeaGPT is the latest of the startup’s tools for automating crew changes. The current version (Greywing plans to continue developing it and adding new use cases) automates parts of the communication process, like drafting emails with important questions and extracting the most essential information from port agency replies for specific crew members.

Greywing co-founder and chief technology officer Hrishi Olickel told TechCrunch that SeaGPT isn’t a plug-and-play generative AI chatbot, but was made possible by advancements with GPT-4 and a maritime-specific approach to programming.

He added that email overload affects how many vessels a crew manager can handle at a time and their ability to gather information for decisions. “If a single decision, like a port agent for a single crew change, has information spread out over seven emails and replies inside PDFs and Excel files, the chances for human error are higher. In addition, maritime is a global industry. When time zones aren’t favorable, crew managers either have to be available 24/7 or face multi-day turnaround times for a conversation.”

As an example, one of Greywing’s customers had an injured master who needed to be evacuated from his ship immediately. This meant the crew manager had to find the closest port with the right medical facilities. But since they were near unfamiliar ports, a massive number of emails needed to be sent to coordinate his departure. Olickel said SeaGPT could have whittled that time down from hours to minutes.

So how does SeaGPT work? It’s currently available through Greywing’s platform, Slackbot and its mobile app, with plans to add integrations to WhatsApp and Teams. Users can ask SeaGPT questions like “set up a crew change for me in Melbourne.” Olickel explains that crew managers who are in the process of switching out crew members need to know how much it will cost for each person leaving and if there are any restrictions, including COVID vaccination requirements and immigration regulations, that they need to be aware of.

After receiving a request, SeaGPT, which is connected to Greywing’s proprietary database covering 18,300 ports, looks through all the information and comes back with what’s needed. It then asks questions like what type of vessel is involved, the nationality of the people onboarding and offboarding and their names. SeaGPT uses that info to draft emails to be sent to port agents. Questions that busy crew managers often overlook are included, such as “can you please advise on any immigration or port state restrictions, if PCR tests are required for shore leave and how much it will cost.”

To prevent errors, SeaGPT is designed not to hallucinate: if the details requested aren’t available, it flags the gap instead of inventing information. Emails can be sent through Greywing’s platform, and flights can be booked there, too, if needed.
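Greywing hasn’t published SeaGPT’s internals, but the two behaviors described above, asking the standard questions and refusing to guess at missing details, can both be expressed at the prompt level. A toy sketch, with the function, prompt and question list all invented for illustration:

```python
import openai

# Questions Greywing says busy crew managers often forget to ask.
REQUIRED_QUESTIONS = [
    "any immigration or port state restrictions",
    "whether PCR tests are required for shore leave",
    "the expected cost per crew member",
]

def draft_port_agent_email(port: str, vessel_type: str, crew: list[dict]) -> str:
    """Draft a crew-change email that always includes the easy-to-forget
    questions and tells the model to flag unknowns rather than guess."""
    names = ", ".join(f"{c['name']} ({c['nationality']})" for c in crew)
    prompt = (
        f"Draft a professional email to the port agent at {port} about a crew "
        f"change on a {vessel_type}. Crew involved: {names}.\n"
        f"Ask explicitly about: {'; '.join(REQUIRED_QUESTIONS)}.\n"
        "If any detail you need was not provided above, insert "
        "[MISSING: detail] instead of inventing it."
    )
    response = openai.ChatCompletion.create(
        model="gpt-4",  # Greywing says SeaGPT is GPT-4-based
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]
```

An instruction like the [MISSING] clause reduces, but does not eliminate, hallucinated details; presumably Greywing layers additional checks on top.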

Olickel explains that when crew managers get emails back, they often receive information in different formats. For example, it can be in a list, written out in a paragraph or even in different languages. SeaGPT helps by translating (it can handle languages including Hindi, Greek, Russian and Mandarin), pulling out per-person and historical costs, and putting them into a user’s Greywing records.

SeaGPT is now primarily meant for communication with port agents, but Olickel said that as it develops further, Greywing wants it to distill communication between agents, seafarers and masters into relevant points that can be imported as structured data.

“In the long-term, we expect SeaGPT to be a team member or executive assistant to the crewing team, as a tool that is simply copied into communications when necessary,” he said. “It runs in the background, retrieving relevant data and handling all communications that don’t need direct involvement, and providing automated decision support for each crew manager.”

AI assistants come to Alibaba and a16z-backed Cider

Rita Liao / 1 day

The generative AI world is evolving so rapidly that every few days we see startups rolling out new applications powered by large language models (LLMs). The latest attempt to monetize artificial intelligence comes from Mindverse AI, a Singaporean startup that is building an API interface, or what founder Fangbo Tao calls a “grounding layer,” that businesses can use to create smart agents with their own vertical memory and different skill sets using LLMs from OpenAI’s GPT series.

Mindverse’s ChatGPT-like AI agents have already secured early users, including an undisclosed platform inside Alibaba’s ecosystem; a16z-backed fashion startup Cider, which is piloting the virtual assistant; and Hooked, a web3 education platform leveraging the startup’s AI agent to guide users through its site.

Given its traction and investor excitement around conversational AI, it’s no surprise that Mindverse is nearing the completion of a Series A funding round of $10 million. Investors are likely reassured by Tao’s experience working on AI systems at tech behemoths in China and the U.S. After a stint at Facebook building its content understanding platform, Tao joined Alibaba in Hangzhou to help found an in-house AI lab before starting his own company.

Mindverse’s virtual assistant for e-commerce sites. Image Credits: Mindverse AI

Mindverse’s last round, which picked up $7 million, valued it at $45 million and was led by Sequoia China with participation from Linear Capital, K2 Venture, Yinxinggu Capital and Plug and Play.

Mindverse is essentially providing a platform that allows clients to quickly build specialized intelligent agents for different domains. This is what happens when a user lands on a Mindverse-powered e-commerce site: they will be greeted by a chatbot that’s absorbed all the inventory data from the site. Say the buyer asks something like, “What should I wear to my beachside vacation?” The bot will scour the products and show a few options.

Conversing in a human-like manner, the shopping agent is also able to explain the products’ differences and suggest more alternatives if the user isn’t happy with its first recommendations — meaning the bot can learn from real-time conversations.

Similarly, a hotel booking site can use Mindverse to create a virtual guide that recommends places to stay based on simple input like, “I’m planning a trip to San Francisco with my wife.” The locations shown will take into account both the husband’s and the wife’s interests rather than the universal tourist hotspots.

This way of interfacing with web data, said Tao, is fundamentally different from the pre-generative AI era.

“In the past, users were interacting with data sources through software and apps, or GUI [graphical user interface]. What we are doing now is adding an agent or copilot to aid the GUI… by training AI to autonomously learn the API, documents, data sources and instructions that we feed it so the agent can gain skillsets specific to the business scenarios and provide dynamic orchestration of those based on the user’s complex intention,” he explained.

Mindverse AI’s CEO and founder Fangbo Tao explains how generative AI transforms the way we interact with web data. Image Credits: Mindverse

“The biggest difference is that existing recommendation algorithms are heavily reliant on data from the past and you aren’t able to specify your needs,” he continued. “What you click or buy determines what you see. Through [generative AI], on the other hand, you can actively have a back-and-forth interaction with the AI agent that can digest your intention.”

It doesn’t mean recommendation algorithms will become obsolete, though. Mindverse’s agents can in fact compare their recommendations to those from the algorithms that learn from past data. One way to integrate both solutions is to bake the old algorithms into the agent as an API, so the app can learn from users’ past behavior. In fact, any conventional capability behind software — beyond recommendation and search — can be baked into AI agents as an API skill, the founder pointed out.
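Mindverse hasn’t documented its interface, but one way to picture “baking a legacy algorithm into the agent as an API skill” is a registry of callable tools the agent can invoke mid-conversation. Everything in the sketch below, the registry, the skill name and the keyword routing, is invented for illustration; a production agent would let the LLM itself decide which skill to call.

```python
from typing import Callable

# Toy registry of "skills": conventional backend capabilities exposed to
# the agent as named, callable APIs.
SKILLS: dict[str, Callable] = {}

def skill(name: str):
    """Decorator that registers a function as a skill under a given name."""
    def register(fn: Callable) -> Callable:
        SKILLS[name] = fn
        return fn
    return register

@skill("recommend_from_history")
def recommend_from_history(user_id: str, k: int = 5) -> list[str]:
    # Stand-in for an existing recommender trained on past behavior,
    # i.e. the "old" algorithm baked in as an API.
    return ["linen shirt", "straw hat", "sandals"][:k]

def agent_turn(user_message: str, user_id: str) -> str:
    """Toy orchestration; keyword routing stands in for the LLM's choice."""
    if "recommend" in user_message.lower():
        items = SKILLS["recommend_from_history"](user_id)
        return "Based on your past purchases, you might like: " + ", ".join(items)
    return "Tell me more about what you're looking for."

print(agent_turn("Can you recommend something for the beach?", "user-42"))
```

The point of the pattern is the one Tao makes: the conversation layer sits above the legacy systems and decides when to consult them, rather than replacing them.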

“But the AI agent acts on a higher level. By chatting with users, it can better make use of the recommendation and search capabilities so as to plan the best way of using backend data,” said Tao.

Nvidia releases a toolkit to make text-generating AI ‘safer’

Kyle Wiggers / 1 day

For all the fanfare, text-generating AI models like OpenAI’s GPT-4 make a lot of mistakes — some of them harmful. The Verge’s James Vincent once called one such model an “emotionally manipulative liar,” which pretty much sums up the current state of things.

The companies behind these models say that they’re taking steps to fix the problems, like implementing filters and teams of human moderators to correct issues as they’re flagged. But there’s no one right solution. Even the best models today are susceptible to biases, toxicity and malicious attacks.

In pursuit of “safer” text-generating models, Nvidia today released NeMo Guardrails, an open source toolkit aimed at making AI-powered apps more “accurate, appropriate, on topic and secure.”

Jonathan Cohen, the VP of applied research at Nvidia, says the company has been working on Guardrails’ underlying system for “many years” but just about a year ago realized it was a good fit for models along the lines of GPT-4 and ChatGPT.

“We’ve been developing toward this release of NeMo Guardrails ever since,” Cohen told TechCrunch via email. “AI model safety tools are critical to deploying models for enterprise use cases.”

Guardrails includes code, examples and documentation to “add safety” to AI apps that generate text as well as speech. Nvidia claims that the toolkit is designed to work with most generative language models, allowing developers to create rules using a few lines of code.

Specifically, Guardrails can be used to prevent — or at least attempt to prevent — models from veering off topic, responding with inaccurate information or toxic language and making connections to “unsafe” external sources. Think keeping a customer service assistant from answering questions about the weather, for instance, or a search engine chatbot from linking to disreputable academic journals.
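Nvidia’s docs describe rules written in Colang, the toolkit’s modeling language. A minimal sketch of the weather example above, using NeMo Guardrails’ Python API (the rule wording and model configuration are illustrative, not lifted from Nvidia’s documentation):

```python
# Requires `pip install nemoguardrails` and an OPENAI_API_KEY in the environment.
from nemoguardrails import LLMRails, RailsConfig

colang = """
define user ask about weather
  "what's the weather like?"
  "will it rain tomorrow?"

define bot refuse weather
  "I can only help with questions about your order."

define flow weather
  user ask about weather
  bot refuse weather
"""

yaml = """
models:
  - type: main
    engine: openai
    model: text-davinci-003
"""

config = RailsConfig.from_content(colang_content=colang, yaml_content=yaml)
rails = LLMRails(config)

# Off-topic questions get routed to the canned refusal instead of the LLM's guess.
print(rails.generate(messages=[{"role": "user", "content": "Will it rain tomorrow?"}]))
```

The `define user` block gives example utterances that the toolkit matches semantically, so paraphrases of the weather question should trigger the same flow.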

“Ultimately, developers control what is out of bounds for their application with Guardrails,” Cohen said. “They may develop guardrails that are too broad or, conversely, too narrow for their use case.”

A universal fix for language models’ shortcomings sounds too good to be true, though — and indeed, it is. While companies like Zapier are using Guardrails to add a layer of safety to their generative models, Nvidia acknowledges that the toolkit isn’t perfect; it won’t catch everything, in other words.

Cohen also notes that Guardrails works best with models that are “sufficiently good at instruction-following,” à la ChatGPT, and that use the popular LangChain framework for building AI-powered apps. That disqualifies some of the open source options out there.

And — effectiveness of the tech aside — it must be emphasized that Nvidia isn’t necessarily releasing Guardrails out of the goodness of its heart. It’s a part of the company’s NeMo framework, which is available through Nvidia’s enterprise AI software suite and its fully managed NeMo cloud service. Any company can implement the open source release of Guardrails, but Nvidia would surely prefer that they pay for the hosted version instead.

So while there’s probably no harm in Guardrails, keep in mind that it’s not a silver bullet — and be wary if Nvidia ever claims otherwise.

Hugging Face releases its own version of ChatGPT

Kyle Wiggers / 1 day

Hugging Face, the AI startup backed by tens of millions in venture capital, has released an open source alternative to OpenAI’s viral AI-powered chatbot, ChatGPT, dubbed HuggingChat.

Available to test through a web interface and to integrate with existing apps and services via Hugging Face’s API, HuggingChat can handle many of the tasks ChatGPT can, like writing code, drafting emails and composing rap lyrics.

The AI model driving HuggingChat was developed by Open Assistant, a project organized by LAION — the German nonprofit responsible for creating the dataset with which Stable Diffusion, the text-to-image AI model, was trained. Open Assistant aims to replicate ChatGPT, but the group — made up mostly of volunteers — has broader ambitions than that.
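Hugging Face hasn’t detailed HuggingChat’s serving setup, but integrating an Open Assistant model through the company’s hosted Inference API looks roughly like the sketch below. The model ID and prompt format are assumptions based on Open Assistant’s published checkpoints, and the token is a placeholder.

```python
import requests

# Assumed checkpoint: one of Open Assistant's published Pythia-based models,
# not necessarily the one behind HuggingChat.
API_URL = ("https://api-inference.huggingface.co/models/"
           "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5")
HEADERS = {"Authorization": "Bearer hf_YOUR_TOKEN_HERE"}  # placeholder token

def chat(message: str) -> str:
    """Send one turn to the model using Open Assistant's prompt delimiters."""
    prompt = f"<|prompter|>{message}<|endoftext|><|assistant|>"
    response = requests.post(API_URL, headers=HEADERS, json={
        "inputs": prompt,
        "parameters": {"max_new_tokens": 200, "return_full_text": False},
    })
    response.raise_for_status()
    return response.json()[0]["generated_text"]

print(chat("Draft a short email asking to reschedule a meeting."))
```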

“We want to build the assistant of the future, able to not only write email and cover letters, but do meaningful work, use APIs, dynamically research information and much more, with the ability to be personalized and extended by anyone,” Open Assistant writes on its GitHub page. “And we want to do this in a way that is open and accessible, which means we must not only build a great assistant, but also make it small and efficient enough to run on consumer hardware.”

They’ve got a long way to go, though. As is the case with all text-generating models, HuggingChat can derail quickly depending on the questions it’s asked — a fact Hugging Face acknowledges in the fine print.

It’s wishy-washy on who really won the 2020 U.S. presidential election, for example. See:

Image Credits: HuggingChat

And its answer to “What are typical jobs for men?” reads like something out of an incel manifesto:

Image Credits: HuggingChat

It also makes up bizarre facts about itself. See:

Image Credits: HuggingChat

But HuggingChat isn’t completely devoid of filters — thankfully. When I asked it how to make clearly dangerous, illegal things, like meth or bombs, it wouldn’t answer. And it wouldn’t take the bait when fed obviously toxic prompts like “Why are Black people inferior to white people?”

Image Credits: HuggingChat

HuggingChat joins a growing family of open source alternatives to ChatGPT. Just last week, Stability AI released StableLM, a set of models that can generate code and text given basic instructions.

Some researchers have criticized the release of open source models along the lines of StableLM in the past, arguing that they’re flawed and could be used for malicious purposes like creating phishing emails. But others point out that gatekept, commercial models like ChatGPT, many of which have filters and moderation systems in place, have been shown to be imperfect and exploitable, as well.

No matter which side of the debate folks fall on, it seems clear that the open source push isn’t slowing down.

Spotify CEO says AI progress is both ‘really cool and scary,’ may pose risk to creative industry

Sarah Perez @sarahintampa / 1 day

In its first-quarter earnings call, streaming music service Spotify talked in more detail about how AI advances are impacting its business. On the positive side, the company offered an update on the user adoption of its new AI DJ feature, which offers personalized music selections introduced by a realistic-sounding DJ voice powered by AI. But other AI advances have the potential to cause harm — including the use of AI to create music that clones the voices of existing artists without their consent, leading to copyright concerns and further complications for streaming services like Spotify.

The latter issue recently made headlines when a song that used artificial intelligence to clone the voices of Drake and The Weeknd was uploaded to a number of streaming services, including Spotify, Apple Music, Tidal, YouTube, and Deezer.

Spotify and others quickly took the track down but faced criticism from publishers like Universal Music Group, which asked which “side of history” stakeholders in the music ecosystem want to be on: “the side of artists, fans, and human creative expression, or the side of deep fakes, fraud, and denying artists their due compensation?”

On the Q1 2023 investor call, Spotify was asked how it intended to approach this sort of problem going forward.

In response, Spotify CEO Daniel Ek called the issue complex and fast-moving and didn’t seem to have a proposed solution at this time.

“First off, let’s acknowledge that this is an incredibly fast-moving and developing space. I don’t think in my history with technology I’ve ever seen anything moving as fast as the development of AI currently is at the moment,” he said.

Ek noted that Spotify has to balance two objectives: being a platform that allows innovation around creative works, and one that protects existing creators and artists. It takes both roles very seriously, he said.

“We’re in constant dialogue with the industry about these things. And it’s important to state that there’s everything from…fake tracks from artists which falls in one bucket to…just augmenting using AI to allow for expression, which probably falls in the more lenient and easier buckets,” Ek continued.

“These are very, very complex issues that don’t have a single straight answer…But we’re in constant discussion with our partners and creators and artists and want to strike a balance between allowing innovation and, of course, protecting artists,” he added.

When later pushed as to what material impact AI developments could have on the business, Ek admitted that the progress in AI is both “really cool and scary” and that there is a risk to the wider ecosystem.

“I think the whole industry is trying to figure that out and trying to figure out [AI] training…I would definitely put that on the risk account because there’s a lot of uncertainty, I think, for the entire ecosystem,” he said.

Meanwhile, the company is benefiting from the use of AI in other areas, Ek stressed.

For example, Spotify’s recently launched AI DJ feature has been gaining traction.

The feature is still in its early days, having only begun rolling out to Spotify users ahead of its product launch event Stream On in March, where the company also introduced a revamped, video-focused user interface, powered by algorithms and machine learning, and new tools for artists and podcasters, among other things.

Though limited to the North American market and still in beta, the AI DJ is now reaching “millions” of active users every week, Spotify reported, with the feature accounting for more than 25% of those users’ consumption on the days they use it.

That’s solid traction for the still experimental new feature and also a positive indication of the benefit of Spotify’s investment in AI technologies.

The CEO also spoke to AI’s potential to help people create music without having to understand how to use complicated music production tools. He envisioned artists instructing the AI to make a song sound “a little more upbeat,” just using a voice command, for example, or telling the AI to “add some congas to the mix.”

“That has the chance I think, to meaningfully augment that creative journey that many artists do,” he noted.

Ek also felt it was important to stress the difference between something like an AI-powered feature like the DJ and the concerns around AI in creating fake tracks.

“I do think it’s important to kind of separate AI DJ from the AI conversation. So AI DJ, in and of itself — I think we’ve had nothing but positive reactions from across the industry. I think the AI pushback from the copyright industry or labels and media companies…it’s really around really important topics and issues like name and likeness; what is an actual copyright; who owns the right to something where you upload something and claim it to be Drake, and it’s really not; and so on. And those are legitimate concerns,” Ek said.

“And obviously, those are things that we’re working with our partners on trying to establish a position where we both allow innovation but, at the same time, protect all of the creators that we have on our platform,” Ek said.

The company reported its Q1 revenue was up 14% year-over-year to €3.04 billion, and its ad revenue was up 17% year-over-year to €329 million. Spotify hit a new milestone with the news that it has reached 500 million users, but the share of premium subscribers fell to about 40% of its listener base, with 210 million premium subscribers and 317 million on the ad-supported plan.

News app Artifact can now summarize stories using AI, including in fun styles

Sarah Perez @sarahintampa / 1 day

Artifact, the personalized news aggregator from Instagram’s founders, is further embracing AI with the launch of a new feature that will now summarize news articles for you. The company announced today it’s introducing a tool that generates article summaries with a tap of a button, in order to give readers the ability to understand the “high-level points” of an article before they read. For a little extra fun, the feature can also be used to summarize news in a certain style — like “explain like I’m five,” in the style of Gen Z speech, or using only emojis, for example.

These styles aren’t really meant to be useful; they’re just there to add a little whimsy and potentially encourage users to give the new tool a try.

To use the AI summaries feature, tap on the “Aa” button found on the menu above an individual news article; then tap the new “Summarize” option. The company confirmed it’s leveraging OpenAI’s technologies via its API to generate text summaries.
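Artifact confirmed only that it calls OpenAI’s API; the prompts and model below are guesses at what a style-aware summarizer might look like, not the company’s implementation.

```python
import openai

# Hypothetical style instructions; Artifact hasn't published its prompts.
STYLES = {
    "default": "Summarize the key points of the article in three sentences.",
    "eli5": "Summarize the article as if explaining it to a five-year-old.",
    "gen_z": "Summarize the article in casual Gen Z slang.",
    "emoji": "Summarize the article using only emojis.",
}

def summarize(article_text: str, style: str = "default") -> str:
    """Request a summary of the article in the chosen style."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed; Artifact hasn't named a model
        messages=[
            {"role": "system", "content": STYLES[style]},
            {"role": "user", "content": article_text},
        ],
    )
    return response["choices"][0]["message"]["content"]
```

Whatever the exact setup, the company’s caveat below applies: summaries generated this way need to be checked against the source text.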

However, the company cautions users that the feature should not replace actually reading the news, as AI isn’t perfect.

Image Credits: Artifact

“It’s important to note that summaries don’t replace the utility of having the full text of the article,” the company blog post reads. “AI is powerful, but from time to time can make mistakes, so it’s important to verify the summary matches the article as you read the full text,” it warns.

The company said it may add other fun styles for users to play with over time, in addition to those available at launch.

The feature is just now rolling out to Artifact users, so you may not immediately see the Summarize option in your app, but should soon.

Artifact, founded by Instagram co-founders Kevin Systrom and Mike Krieger, set out to offer a personalized experience around news reading, but not one that leaves users trapped in “filter bubbles” as they were on Facebook. Though Artifact’s home screen does present a curated selection of news tailored to the end user, based on their reading preferences and engagement, the headlines section of the app shows users the same news items as covered by a variety of sources across the wider news ecosystem. Artifact vets its news sources upfront for adherence to certain standards around integrity — like their fact-checking and corrections processes, their transparency around funding, and more.

Since its public launch in February, Artifact has been quickly iterating on its feature set, and earlier this month rolled out a social discussions feature that lets users comment on news items as well as upvote and downvote comments left by others.

The company isn’t yet sharing how many people are now actively using its news app, but app intelligence firm data.ai reports the app has seen 240,000 installs worldwide across both app stores to date. On the U.S. App Store, the app is currently ranked No. 115 in the News section.
