To say that Santa Monica startup Rabbit has had a roller coaster of a first year would be an understatement. The company debuted the R1, an AI companion meant to answer questions, help you process visual information, and, most notably, learn and replicate how you interact with the internet, back in January at CES, to global acclaim. At the time, the idea of a pocket-sized square gadget that was smaller, more whimsical, and possibly more capable than a traditional smartphone assistant felt like the AI dream.
And, like a dream, reviewers, myself included, quickly discovered that the $199 gizmo was too good to be true; it launched with underbaked features, underwhelming battery life, and a toolkit missing its most promising capability: letting users train the large action model (LAM) themselves. To be fair, Rabbit said from the start that the feature would arrive later in the year. With a trainable LAM, users would be able to teach their R1 to browse and interact with the web, including adding items to their Amazon carts, booking an Airbnb (with the proper filters and requirements), and more.
Since publishing my initial review, Rabbit has patched up most of those early concerns, and the company's consistent rollout of software updates since launch is commendable. Its latest update may be its most significant yet: today, Rabbit is launching LAM playground, a cross-platform agent system that finally lets users teach their R1s.
Of course, the "playground" bit is just as important as the "LAM" bit here, with Rabbit pitching the platform as a testing ground where users can experiment with different websites and applications, share feedback, and refine the process. While the first generation of LAM was limited to Uber, Spotify, and DoorDash (and worked less than half the time), LAM playground opens things up to the broader web, so users can train their R1 agents to navigate sites like Google, Walmart, YouTube, and more.
Ahead of today's update, Rabbit CEO Jesse Lyu demoed LAM playground for me, all of which runs in the company's aptly named, secure cloud hub, Rabbithole. From the webpage, Lyu began by typing in the prompt, "Find a six-pack of Diet Coke and add it to my Amazon shopping cart." I watched as the multimodal agent scanned every element on Google to start a search, clicked a relevant Amazon buying link, and selected "Add to cart." The process was slow, taking roughly 45 seconds from start to finish, but the idea of having an AI agent get something done while I'm pouring myself a cup of tea is enticing.
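If you're curious what's eating those 45 seconds, agents like this generally run an observe-decide-act loop: enumerate the page's interactive elements, ask a model which one advances the goal, act, and repeat. Below is a minimal sketch of that general pattern in Python with Playwright. To be clear, this is illustrative only, not Rabbit's implementation, and choose_action is a hypothetical stand-in (here, a crude keyword match) for the model a LAM-style agent would actually consult.

```python
# Minimal observe-decide-act loop for a web agent, sketched with Playwright.
# Illustrative only: a generic pattern, not Rabbit's LAM.
from playwright.sync_api import sync_playwright


def choose_action(goal: str, elements: list[dict]) -> dict:
    # Hypothetical stand-in for the model call a LAM-style agent would make.
    # Here, a crude keyword match against the goal, purely for illustration.
    for el in elements:
        if any(word.lower() in el["text"].lower() for word in goal.split()):
            return {"op": "click", "index": el["index"]}
    return {"op": "done"}


def run_agent(goal: str, start_url: str, max_steps: int = 20) -> None:
    with sync_playwright() as p:
        page = p.chromium.launch(headless=True).new_page()
        page.goto(start_url)
        for _ in range(max_steps):
            # Observe: collect the visible, interactive elements on the page.
            locators = page.locator("a, button, input").all()
            elements = [
                {"index": i, "text": (loc.inner_text() or "")[:80]}
                for i, loc in enumerate(locators)
                if loc.is_visible()
            ]
            # Decide: let the model pick the next step toward the goal.
            action = choose_action(goal, elements)
            if action["op"] == "done":
                break
            # Act: execute the chosen step, then loop back to observe.
            locators[action["index"]].click()
            page.wait_for_load_state("domcontentloaded")


run_agent("Diet Coke six-pack Amazon cart", "https://www.google.com")
```

A real agent swaps that keyword matcher for a model that reads the page (multimodally, in Rabbit's case) and can also type into search boxes; the loop structure, though, is why each step takes seconds rather than milliseconds.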
Similarly, I ran a prompt for LAM playground to "Find me the best iPhone 16 deal at Walmart," and it did just that, but with a slight hiccup: when accessing the Walmart page, the website challenged the AI-in-training with a CAPTCHA, which it failed to solve. Lyu told me the blunder had to do with the demo not running on Rabbit's IP cluster, which is reasonable. Still, considering how many CAPTCHA prompts I get even when I'm surfing the web in bed, I wonder how prevalent the issue will be as more users test LAM playground.
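For context, agent systems generally don't try to solve challenges like this at all; they detect the wall and hand control back to a human, or, as Lyu alluded to, route traffic through cleaner IP pools so the wall never appears. Here's a hedged sketch of the detection half, reusing the Playwright page object from the loop above; the marker list is my own guess at common challenge phrasing, not anything from Rabbit.

```python
# Illustrative only: detect a CAPTCHA wall and stop, rather than solve it.
# Solving challenges programmatically is unreliable and usually against a
# site's terms; real products mitigate upstream, e.g. via IP reputation.
CAPTCHA_MARKERS = ("captcha", "verify you are human", "unusual traffic")


def blocked_by_captcha(page) -> bool:
    # Crude text check against common challenge phrasing in the page body.
    body = page.inner_text("body").lower()
    return any(marker in body for marker in CAPTCHA_MARKERS)
```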
By the end of my briefing, Lyu left me with a peek at an even more ambitious vision for the R1: handling prompts at the desktop and app level, such as firing up a separate Linux OS or uploading and editing an image in Adobe Photoshop. Considering the track record of OS-level AI tools, security and privacy are top of mind, so here's hoping Rabbit has learned a thing or two from Microsoft's mishaps. Until then, LAM playground remains the company's core focus and should give antsy R1 users a taste of life with AI agents.