AI Dividend, Universal Basic Income, and Economic Multiplier Effect

We are truly living in unprecedented times. Artificial Intelligence (AI) is anticipated to transform the global economy by intelligently automating tasks, re-engineering operational processes, and paving the way for new avenues of customer, product, service, and market value creation. According to a report by PwC, AI has the potential to contribute up to $15.7 trillion to the global economy by 2030. This significant figure underscores the transformative influence of AI across all countries, industries, and professions (Figure 1).

Figure 1: The Economic Potential of AI is Staggering…

Unfortunately, AI is also likely to displace many workers during that transformation. The World Economic Forum estimates that by 2025, 85 million jobs may be displaced by AI.[1] Many jobs may be eliminated, while others may be downgraded to lower-skilled positions (Figure 2).

Figure 2: …But Those Economic Benefits are Unevenly Distributed

This situation underscores the opportunities and challenges of AI-driven economic transformation. Thankfully, the significant economic advantages of enhanced productivity, improved operational effectiveness, and the facilitation of new customer, product, service, and market innovations will result in financial surpluses as companies become adept at using AI to reshape their economic value curve to “Do more with less” (Figure 3).

Figure 3: Transforming the Economic Value Curve to “Do More With Less”

The challenge lies in capturing and utilizing that financial surplus to benefit all.

Welcome to the “AI Dividend” Opportunity.

The AI Dividend Opportunity

The “AI Dividend” encapsulates the substantial and recurring economic benefits generated by advancements in artificial intelligence (AI), emphasizing the importance of harnessing these gains to improve societal well-being and economic equity.

The AI Dividend marks a generational inflection point for society, driven by AI-fueled gains in productivity, efficiency, and innovation across all sectors and professions. Governments can dramatically enhance human, societal, and environmental well-being by utilizing this AI Dividend to ensure that the benefits are widely shared and contribute to social equity and human wellness. This investment can propel human progress by fostering curiosity, exploration, creativity, and innovation. However, these societal benefits can only be realized with government and social policies that ensure the economic gains from AI are distributed equitably rather than solely benefiting the privileged few.

The potential loss of jobs driven by AI has been the focus of many media discussions and government leaders’ pontifications. A Universal Basic Income (UBI) has been proposed as a solution to AI-induced job displacement, with studies indicating that UBI can reduce poverty and provide a safety net. However, UBI will likely have severe unintended consequences, such as disenfranchisement and frustration among recipients who may not feel they have “earned” their income. This lack of purpose or contribution to society could lead to citizen frustration, depression, and social instability[2].

I have a more pragmatic proposal for what we should do with the AI Dividend…

The Schmarzo AI Dividend Recommendation

Governments should use the AI Dividend to fundamentally redesign our society into one that values and rewards long-term services that enhance everyone’s quality of life. Here are three immediate actions we could take with the upcoming AI Dividend:

  • Raising Minimum Wages: Increasing the minimum wage could ensure workers benefit from economic growth while maintaining employment. Research indicates that higher minimum wages can reduce poverty and income inequality without adverse employment effects (Phys.org).
  • Subsidizing Critical Jobs: Subsidizing essential societal roles—such as teaching, nursing, healthcare, childcare, public safety, mental health, conservation, public transportation, elderly care, housing, community development, and the arts—that are undervalued by our current economic systems. These sectors are crucial for societal well-being and often suffer from low wages despite high social value. Public investment in these areas can drive economic growth, improve citizen satisfaction, and enhance service quality, leading to broader social benefits (Center for American Progress).
  • Promoting Human Creativity: We can unlock human potential and drive cultural and economic progress by funding initiatives that foster innovation and creativity, such as grants for artistic endeavors, research projects, and entrepreneurial ventures. Investing in education and lifelong learning opportunities will empower individuals to explore new ideas and solutions, leading to a more dynamic and innovative society.

The economic engine behind these recommendations is a well-established phenomenon known as the economic multiplier effect.

Mastering the Economic Multiplier Effect

The Economic Multiplier Effect occurs when an initial injection of spending (such as government investment or consumer spending) leads to a more significant overall increase in economic activity and national income due to successive rounds of re-spending by businesses and consumers (Figure 4).

Figure 4: The Multiplier Effect (Source: Tejvan Pettinger, “The Multiplier Effect,” November 2019)

The economic multiplier effect is a powerful engine that drives the growth and advancement of the modern world. This effect occurs when an initial increase in spending results in additional economic activity and development as the money circulates through the economy. It is a crucial concept in economic theory and policy, illustrating how a change in one sector can have ripple effects throughout the entire economy.

However, the impact of the economic multiplier effect depends substantially on where in the economy the monetary stimulus is placed. Studies indicate that lower-income recipients generate a significantly larger multiplier effect than higher-income recipients.

For example, the Bolsa Família program in Brazil, one of the world’s most extensive cash transfer programs, increased real GDP by R$1.04 for every R$1 spent. Similarly, the GiveDirectly initiative in Kenya, which provided a one-time transfer of $1,000 to poor households, led to a multiplier effect of 2.5, meaning each dollar generated $2.50 in local economic activity. These findings highlight the significant positive impact of cash transfers on low-income households, as they spend more of their additional income, effectively stimulating local economies[3].
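To make the re-spending mechanics concrete, here is a minimal sketch of the successive rounds (the marginal propensity to consume, `mpc`, is an assumed illustrative value, not a figure from the cited studies); with `mpc = 0.6`, the series converges to a 2.5x multiplier, matching the figure reported for the Kenya transfers:

```python
# Toy sketch of the economic multiplier effect: an initial injection is
# re-spent in successive rounds, each round passing on a fraction `mpc`
# (marginal propensity to consume) of what it received.

def total_activity(injection: float, mpc: float, rounds: int = 200) -> float:
    """Sum the spending across successive re-spending rounds."""
    total = 0.0
    spent = injection
    for _ in range(rounds):
        total += spent
        spent *= mpc  # each round re-spends a fraction of the last round
    return total

# An assumed MPC of 0.6 yields a multiplier of 1 / (1 - 0.6) = 2.5.
print(round(total_activity(1000, 0.6), 2))  # → 2500.0
```

Analytically, the series sums to injection / (1 - mpc), so a higher propensity to re-spend, as is typical among lower-income recipients, yields a larger multiplier.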

Additionally, investments in social programs can lift families out of poverty, boost economic growth, and improve overall societal wellness. For instance, Head Start has demonstrated a benefit-cost ratio of more than 7-to-1, showing the economic impact of focused public investment (Center for American Progress).

Coupling social programs with the economic multiplier effect further supports the idea that utilizing the upcoming AI dividend can help reduce poverty, improve economic well-being, increase human development opportunities, drive economic growth, and more.

Summary: Exploiting the AI Dividend For Social Utopia

To fully realize AI’s potential and ensure that everyone benefits, not just a select few, it is important to implement comprehensive and inclusive government and social policies that prioritize investing the economic gains from AI into areas such as healthcare, education, welfare, safety, and other public services. By doing so, we can improve the quality of life for everyone and work towards creating a society with unprecedented prosperity for all.

The emergence of AI Dividends represents a pivotal moment for society, offering the potential to create a world where everyone, not just the wealthy or privileged, can thrive. However, implementing a Universal Basic Income is not the solution, as it could result in unintended harmful outcomes such as disenfranchisement, frustration, and social unrest among recipients who may feel a lack of purpose or contribution to society.

Instead, we can start with alternative strategies, such as raising minimum wages, subsidizing critical jobs, and promoting human creativity. These measures provide more balanced and sustainable solutions to mitigate job displacement, enhance social equity, and power economic growth.

The time to seize this historic opportunity is now! We cannot allow corporate short-term profits to dictate how the AI Dividend benefits are distributed. We must prioritize long-term prosperity and fulfillment over short-term financial gains. By taking a long-term view and considering the broader impact of the AI Dividend, we can create a more sustainable and equitable future and power unprecedented levels of economic growth that benefit everyone.

[1] Source: BioMed Central.

[2] Source: Center for American Progress.

[3] Sources: Phys.org and BioMed Central.

Google is Giving Away a Custom Electric 1981 DeLorean as Grand Prize in ‘Gemini API Developer Competition’

Google has launched a Gemini API Developer Competition, with a unique twist: the event is promoted by Christopher Lloyd, famously known for his role as Dr. Emmett “Doc” Brown in the Back to the Future trilogy. The grand prize for the winning team is none other than a DeLorean, the iconic car from the beloved film series.

Key Details of the Hackathon

The Gemini API Developer Competition, powered by Google, is a skill-based contest in which participants are tasked with developing innovative applications using Google’s generative AI models, specifically Gemini.

The event is open to tech enthusiasts, developers, and AI aficionados from around the globe, with the aim of pushing the boundaries of what generative AI can achieve.

The contest, which aims to spur innovation, features a total prize pool of $1 million spread across multiple categories.

Prizes and Categories

The competition includes awards in both innovation and technology categories, with substantial cash prizes:

Innovation

  • Most Impactful App: $300,000
  • Most Useful App: $200,000
  • Most Creative App: $200,000

Technology

  • Best Flutter App: $50,000
  • Best Android App: $50,000
  • Best Web App: $50,000
  • Best Use of ARCore: $50,000
  • Best Use of Firebase: $50,000
  • Best Game App: $50,000

Participants can also vie for the People’s Choice Award, with the most voted app receiving the prestigious Gemini API Developer trophy.

Entry Process

Developers can enter the competition by following three steps:

  1. Build an app using the Gemini API.
  2. Create a demo video showcasing the app.
  3. Publish and submit the app to the competition platform.

Judging Criteria

An expert panel from Google will evaluate submissions based on:

  • Remarkability: The app must showcase AI in a significant and impactful manner.
  • Creativity: The app should be original, innovative, and not a mere copy of existing solutions.
  • Usefulness: The app must clearly define and address specific problems, offering practical benefits.
  • Impactfulness: The app should contribute to accessibility, sustainability, or improve lives.
  • Execution: The app must be of high quality, well-executed, and free from bugs.

Key Dates

  • May 14, 2024: Competition launch.
  • August 12, 2024: Deadline for submissions.
  • August 16, 2024: People’s Choice voting begins.
  • August 16 – September 4, 2024: Judges review entries and select winners.
  • October 2024: Winners announced.

Integration with Google Tools

Participants are encouraged to leverage Google developer tools, such as Android Studio, ARCore, Chrome, Flutter, Firebase, and Web, to enhance their app development process.

MachineHack has been actively hosting a variety of AI hackathons designed to foster innovation and provide hands-on experience with generative AI tools, including ‘Ideathon: How to Detect AI-Generated Content’, ‘Predict the Price of Books’, ‘Bhasha Techathon’, and many more.

The post Google is Giving Away a Custom Electric 1981 DeLorean as Grand Prize in ‘Gemini API Developer Competition’ appeared first on AIM.

Bhashini Launches ‘Be our Sahayogi’ for Multilingual AI Innovation Focused on Voice

Bhashini, in collaboration with Nasscom, has launched the “Be our Sahayogi” program on National Technology Day to crowdsource multilingual AI problem statements. Organisations are invited to submit their ideas for “Reimagining the User Journey” as a “Multilingual User Journey.”

Amitabh Nag, CEO of Bhashini, said, “‘Voice first’ is the way to actually make a difference and bridge the digital and literacy divide, besides transcending the language barrier.”

Bhashini earlier launched Bhasha Daan, a crowdsourcing initiative to collect voice and text data in multiple Indian languages. “It’s performing well, but not meeting our initial expectations. We plan to run a campaign to build this up further,” Nag told AIM, when asked about its status.

The partnership between Nasscom and Bhashini aims to develop, deploy, and enhance solutions to transform society. The collaboration seeks to revolutionise the Bhartiya Bhashaien ecosystem through innovative technology.

“Bhashini will definitely change lives because people will be more collaborative, cooperative, and innovative, without the burden of trying to learn more languages,” added Nag.

This initiative aims to foster innovation and creativity, driving the development of new technologies and solutions for the Bhartiya Bhashaien community. By prioritising voice as a medium, the partnership will promote innovation, collaboration, and empowerment. Joint hackathons and innovation challenges will provide a platform for entrepreneurs and innovators to showcase their talents.

Speaking about collecting Indic data in an interview with AIM, Ankit Bose, head of AI at Nasscom, said, “Bhashini is a very good start and the government has done a phenomenal job at this. It is one of the most important tasks, as we can increase the whole country’s productivity, including people who don’t speak English.”

The post Bhashini Launches ‘Be our Sahayogi’ for Multilingual AI Innovation Focused on Voice appeared first on AIM.

Meet the Team Spearheading OpenAI’s Safety and Security Committee 

The announcement of OpenAI’s new Safety and Security Committee, tasked with crucial decision-making in OpenAI projects and operations, got the internet buzzing, considering that CEO Sam Altman is a part of it too.

The discussions revolved around a likely early arrival of GPT-5 and how the committee is a safety bunker for OpenAI. However, the most interesting aspect of this announcement seems to be the committee’s members.

In addition to OpenAI board directors, the group will include technical and policy experts to guide it. With Altman in the lead, here’s the team spearheading OpenAI’s new Safety and Security Committee.

Bret Taylor

American entrepreneur and computer programmer Bret Taylor joined the board after Altman was reinstated as CEO following a brief ousting. A former co-CEO of Salesforce, Taylor brings vast experience, having also served on the boards of tech companies such as Twitter and Shopify. He was also the co-creator of Google Maps.

Taylor has been a close friend of Altman’s, standing by him during last year’s ousting episode. Recently, Taylor and fellow board member Larry Summers reacted sharply to accusations from Helen Toner, a former board member removed after Altman’s reinstatement as CEO, that Altman had lied to the board multiple times and withheld information, which she cited as some of the reasons for his ousting.

Taylor and Summers rejected Toner’s claims and expressed disappointment that she had aired these issues.

Adam D’Angelo

Adam D’Angelo, co-founder and CEO of Quora and former CTO of Facebook, joined the board as an independent director in 2018. He was the only board member whose position remained unaffected through Altman’s ousting and reinstatement as CEO.

D’Angelo is also the founder of Poe, a multi-chatbot platform that allows users to interact with all the major LLMs available on the market.

Jakub Pachocki

OpenAI’s new chief scientist, Jakub Pachocki, took over Ilya Sutskever’s role upon his exit. Leading OpenAI’s research efforts, Pachocki is one of the technical experts on the new safety committee. In Sutskever’s exit announcement on X, Pachocki was described as having ‘excellent research leadership’.

Born in Poland, Pachocki excelled in programming contests during his studies and even won $10,000 at the Google Code Jam in 2012. After completing his computer science studies at the University of Warsaw in 2013, he earned a PhD in the same subject from Carnegie Mellon University.

Interestingly, Pachocki took up the role of the director of research in October last year, a month before Altman’s sacking.

Ilya introduced me to the world of deep learning research, and has been a mentor to me, and a great collaborator for many years. His incredible vision for what deep learning could become was foundational to what OpenAI, and the field of AI, is today. I am deeply grateful to him… https://t.co/nsbMIOZHpS

— Jakub Pachocki (@merettm) May 14, 2024

John Schulman

A co-founder of OpenAI and its head of alignment science, John Schulman is a prominent researcher. At OpenAI, he is focused on creating and improving algorithms that allow machines to learn from interactions with their environment.

Schulman pursued his undergraduate studies in physics at Caltech and later switched to neuroscience at UC Berkeley before completing his PhD in electrical engineering and computer sciences. His academic work laid the foundation for his future research in reinforcement learning and deep learning.

In a recent podcast with Dwarkesh Patel, Schulman spoke about his anticipation of AGI safety. “If AGI came way sooner than expected, we would definitely want to be careful about it. We might want to slow down a little bit on training and deployment until we’re pretty sure we know we can deal with it safely,” he said.

Matthew Knight

The head of security at OpenAI, Matthew Knight, joined the company in 2020. With a strong background in hardware, software, and wireless security, Knight leads the efforts to ensure the safety and security of OpenAI’s AI models and systems. This also includes ensuring the robustness of AI models against adversarial attacks.

Prior to joining OpenAI, Knight co-founded Agitator, a startup that developed secure and resilient dynamic radio frequency spectrum management technologies.

Lilian Weng

The head of safety systems at OpenAI, Lilian Weng, joined OpenAI in 2018 as a research scientist. At OpenAI, Weng’s work majorly focused on developing algorithms that enable machines to learn, adapt, and perform complex tasks autonomously.

Weng has contributed to the development of advanced reinforcement learning techniques, which are used to train AI agents to make decisions by interacting with their environment and learning from the outcomes of their actions.

She earned her PhD in electrical engineering and computer science from the Massachusetts Institute of Technology.

Aleksander Madry

The head of preparedness at OpenAI, Aleksander Madry, is a professor at MIT in the department of electrical engineering and computer science. He earned his PhD in computer science from MIT and has since become a leading figure in AI research, particularly focusing on machine learning, optimisation, and algorithmic robustness.

Nicole Seligman

A member of the board of directors at OpenAI, Nicole Seligman, is a corporate and civic leader and lawyer. Former EVP and general counsel at Sony Corporation, Seligman currently serves on three public company corporate boards – Paramount Global, MeiraGTx Holdings PLC, and Intuitive Machines Inc. Seligman has made significant contributions to the fields of law and corporate governance.

The post Meet the Team Spearheading OpenAI’s Safety and Security Committee appeared first on AIM.

This Week in AI: Can we (and could we ever) trust OpenAI?

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own. By the way, TechCrunch plans to launch an AI newsletter […]

© 2024 TechCrunch. All rights reserved. For personal use only.

Forget Copilot: 5 major AI features Google rolled out to Chromebooks this week

It was only a matter of time before Google injected AI into Chrome OS.

That time has now come.

Chromebook Plus (a standard for higher-end Chrome OS hardware that includes exclusive features) is getting an update today that levels up the platform such that AI plays a more important role. The goal is to help you get the most out of AI when you're using Chrome OS for certain tasks.

Let's break down the most useful AI features coming to Chromebook Plus with this latest update.

1. Magic Editor in Google Photos

By now, you likely know what Magic Editor is and how it can help you create the perfect photo. Up until now, however, this feature was only available on phones. Get ready, because Magic Editor is now available on Chromebook Plus.

You'll be able to select a photo in the Google Photos app, tap (or click) Magic Editor, and start editing the image to your liking. You can reposition and resize objects, use contextual suggestions to improve the lighting and background, and completely reimagine your photos with a few clicks.

2. Gemini on your Chromebook

That's right, Gemini is now available on your Chromebook. Any time you need help with an idea, need to get an answer to a question, plan a trip, research a subject, and more, all you have to do is tap the Gemini icon on your app shelf and start interacting.

If you're a new Chromebook Plus user, you'll get the Google One AI Premium plan at no cost for 12 months. After that, you'll have to pay for the Gemini subscription. That plan includes access to Gemini Advanced, 2TB of cloud storage, and Gemini in Docs, Sheets, Slides, Gmail, and more.

3. Help Me Write

Help Me Write leverages Google's AI chops in all the places you write, such as websites, PDF forms, online applications, web apps, and more. When you need help writing, right-click (or two-finger tap) the text area to get suggestions or even get help changing the tone to fit your audience.

Help Me Write helps generate text from scratch, using a prompt, or can help you rewrite existing text to make it more formal, shorten it, or totally rephrase it.

4. AI-generated wallpapers and video call backgrounds

With the help of AI, you'll be able to dream up just about any kind of image you want or need to serve as your Chromebook wallpaper or video call backgrounds.

You'll find some pre-built prompts included to help you build your backgrounds of all types (such as fun, whimsical, zen, and professional). Select what you want to see, and Google's AI will take it from there to generate an image specific to your prompt.

5. Quick access to Google Tasks

If you're a fan of Google Tasks, you might be happy to hear that you'll now have one-click access, via a built-in view of Google Tasks that makes it easy to add or check off todos.

Google Tasks will be accessible from the date icon on the bottom-right of your home screen, and will also be accessible across Google Workspace apps and devices. That means if you've added a task from Gmail on your Android phone, you can pick up where you left off on your Chromebook.

These new features will be available to Chromebook Plus devices on the latest Chrome OS version, to be released on (or after, depending on your location) May 28, 2024.

Real Struggles of Bringing Robots from Simulation to Reality 

“Robots need to be able to deal with uncertainty if they’re going to be useful to us in the future. They need to be able to deal with unexpected situations and that’s sort of the goal of a general purpose or multi-purpose robot, and that’s just hard,” said Robert Playter, CEO of Boston Dynamics, in an interview with Lex Fridman last year.

Playter couldn’t have been more accurate in describing the difficulty of robotics. Boston Dynamics, which began developing general-purpose robots in the early 2000s, introduced its humanoid Atlas only in 2013. Apart from the struggle to secure investment in robotics, training robots remains a constant challenge.

Simulation for Robots

Simulated training is the most commonly adopted technique for equipping general-purpose robots for the real world: virtual environments that mimic real-world conditions are created to develop, test, and refine robot algorithms.

“Simulation works very well for certain aspects. They work well in simulation for tasks like walking and doing backflips, where you need to balance your robot. And that is the only way,” said Mankaran Singh, founder of Flow Drive, which builds autonomous vehicle capabilities.

However, tasks that can be learned through imitation, such as folding shirts, do not require a simulated environment.

Simulation is Not the Only Way

CynLr Robotics, a Bengaluru-based deep-tech company that is building robotic arms, believes simulation is not the only way to train its robots. “There are so many layers of perception and fundamental intuition using perception that are still missing. These are capabilities that we should focus on to be able to make them more autonomous,” said Gokul NA, founder of CynLr.

Meanwhile, NVIDIA’s Isaac Sim, powered by Omniverse, is a robotics simulation platform that provides a virtual environment to design, test, and train AI-based robots.

“We do leverage those [Omniverse] technologies as a tool, but you can’t say a tool is the solution,” said Gokul. The limitations come into the picture when you bring these robots into the real world.

“When you bring from a simulated assumption to reality, it doesn’t work. It doesn’t work at all, because it has never learned that. It has learned something else independently. Your mistakes are what it has learned, what you have left out,” he said.

He attributes this gap to machines lacking the cognitive layers that aid in understanding objects and environments, which can lead to discrepancies between what is seen and what is understood.
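A deliberately tiny numerical sketch can illustrate the sim-to-real gap (the dynamics, masses, and gain below are invented for illustration, not taken from any real robot): a controller tuned against simulated dynamics misbehaves as soon as the real system's parameters differ from the model it learned against.

```python
# Minimal sketch of the sim-to-real gap (all dynamics invented for
# illustration): a proportional controller tuned on simulated dynamics
# behaves badly when the real system's mass differs from the model.

def run(gain: float, mass: float, target: float = 1.0, steps: int = 20) -> float:
    """Drive x toward `target`; return the final absolute error."""
    x = 0.0
    for _ in range(steps):
        u = gain * (target - x)   # proportional control action
        x += u / mass             # simple first-order response
    return abs(target - x)

gain = 1.0  # tuned so the *simulated* system (mass=1.0) converges in one step
print(run(gain, mass=1.0))  # simulation: error is 0.0
print(run(gain, mass=0.5))  # "reality" with an unmodeled lighter mass: oscillates, error stays 1.0
```

The controller has "learned" the simulator, not the world: everything the model left out comes back as error at deployment time.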

Imitation learning is another common method for training robots, in which a user demonstrates a task. However, it also comes with limitations. For instance, if a user trains a robot only on picking up white mugs, the robot will fail to pick up mugs of other colours.
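The mug failure mode can be sketched as a toy nearest-demonstration matcher over RGB colour features (the data, distance threshold, and feature choice are all invented for illustration; real imitation learning operates on far richer observations): demonstrated only on white mugs, the learned behaviour keys on colour rather than on "mug-ness".

```python
# Toy illustration of imitation learning overfitting to surface features
# (colours and data invented): the demos only ever show white mugs, so
# the matcher learns "white" instead of "mug".

# Demonstrations: (RGB colour, action) pairs, all featuring white mugs.
demos = [((255, 255, 255), "pick"), ((245, 245, 240), "pick")]

def act(colour, threshold=60):
    """Imitate the nearest demonstration if it is close enough in RGB space."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = min(demos, key=lambda d: dist(colour, d[0]))
    return nearest[1] if dist(colour, nearest[0]) < threshold else "ignore"

print(act((250, 250, 250)))  # white mug → "pick"
print(act((200, 30, 30)))    # red mug   → "ignore" (never demonstrated)
```

The red mug is just as pickable, but it lies outside everything the demonstrations covered, so the imitator does nothing with it.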

Arm and Humanoid Robots

Similarly, the form factor of general-purpose robots plays a huge role in how they are trained. For instance, robotic arms require sophisticated manipulation capabilities, something most companies overlook.

Gokul believes that today’s robotics developments, especially robotic arms, are more like ‘record and playback machines’ with sophisticated manipulation but lacking perception. “Most cases where you want to commercially deploy these robots, you don’t need legs. Wheels are more than enough, but you need more capability with the hands,” said Gokul, hinting at the current humanoids being developed.

With 2024 shaping up as the year of robotics, many players such as Figure AI, Tesla, Unitree, and Apptronik are focusing on building humanoid robots, while Google DeepMind and other research institutes are training and developing arm-based robots to execute multiple functions.

AutoRT, SARA-RT and RT-Trajectory are a few robotics research systems Google DeepMind released. Stanford University introduced Mobile ALOHA, a system designed to replicate bimanual mobile manipulation tasks necessitating full body control – cooking being the main task demonstrated.

NVIDIA: The Robot-Enabler

In addition to Omniverse, GPU giant NVIDIA is aggressively investing in robotics and recently unveiled GR00T, a general-purpose foundation model for humanoid robots. Robots powered by GR00T are engineered to understand natural language and mimic human movements by observing action.

“Building foundation models for general humanoid robots is one of the most exciting problems to solve in AI today,” said NVIDIA chief Jensen Huang, at GTC 2024.

NVIDIA is even building a comprehensive AI platform for all the leading humanoid robot companies, including OpenAI-backer 1X Technologies, Agility Robotics, Boston Dynamics, Figure AI, Unitree Robotics and many more.

Not just NVIDIA, other players are also enabling the robot training ecosystem. OpenAI-backed Physical Intelligence, which recently raised $70M in funding, is an emerging startup working to bring general-purpose AI into the physical world.

The post Real Struggles of Bringing Robots from Simulation to Reality appeared first on AIM.

Hugging Face says it detected ‘unauthorized access’ to its AI model hosting platform

Late Friday afternoon, a time window companies usually reserve for unflattering disclosures, AI startup Hugging Face said that its security team earlier this week detected “unauthorized access” to Spaces, Hugging Face’s platform for creating, sharing and hosting AI models and resources.

In a blog post, Hugging Face said that the intrusion related to Spaces secrets, or the private pieces of information that act as keys to unlock protected resources like accounts, tools and dev environments, and that it has “suspicions” some secrets could’ve been accessed by a third party without authorization.

As a precaution, Hugging Face has revoked a number of tokens in those secrets. (Tokens are used to verify identities.) Hugging Face says that users whose tokens have been revoked have already received an email notice and is recommending that all users “refresh any key or token” and consider switching to fine-grained access tokens, which Hugging Face claims are more secure.

It wasn’t immediately clear how many users or apps were impacted by the potential breach. We’ve reached out to Hugging Face for more information and will update this post if we hear back.

“We are working with outside cyber security forensic specialists, to investigate the issue as well as review our security policies and procedures. We have also reported this incident to law enforcement agencies and Data [sic] protection authorities,” Hugging Face wrote in the post. “We deeply regret the disruption this incident may have caused and understand the inconvenience it may have posed to you. We pledge to use this as an opportunity to strengthen the security of our entire infrastructure.”

The possible hack of Spaces comes as Hugging Face, which is among the largest platforms for collaborative AI and data science projects with over one million models, data sets and AI-powered apps, faces increasing scrutiny over its security practices.

In April, researchers at cloud security firm Wiz found a vulnerability — since fixed — that would allow attackers to execute arbitrary code during a Hugging Face-hosted app’s build time that’d let them examine network connections from their machines. Earlier in the year, security firm JFrog uncovered evidence that code uploaded to Hugging Face covertly installed backdoors and other types of malware on end-user machines. And security startup HiddenLayer identified ways Hugging Face’s ostensibly safer serialization format, Safetensors, could be abused to create sabotaged AI models.

Hugging Face recently said that it would partner with Wiz to use the company’s vulnerability scanning and cloud environment configuration tools “with the goal of improving security across our platform and the AI/ML ecosystem at large.”