The competition to build the iPhone of artificial intelligence is heating up.
On Tuesday, the technology startup Rabbit unveiled its contender: a small, orange, walkie-talkie-style device that, according to the company, can use “AI agents” to carry out tasks on behalf of the user.
In a pre-recorded keynote address shown at the Consumer Electronics Show in Las Vegas, Rabbit’s founder Jesse Lyu asks the device to plan him a vacation to London, and it responds by designing an itinerary and booking the trip. He also orders a pizza, books an Uber, and teaches the device how to generate an image using Midjourney.
The gadget, called the Rabbit r1, is just the latest in an increasingly active new hardware category: portable AI-first devices that can interact with users in natural language, eschewing screens and app-based operating systems. Retailing at $199, the r1 is a cheaper competitor to the Humane Ai Pin, a $699 wearable device unveiled in November that offers a similar suite of capabilities, and the $299 Ray-Ban Meta smart glasses, which have an AI-powered assistant. Prominent tech investors are betting that new advances in AI, like large language models (LLMs), will open up new vistas of personalized computing. OpenAI’s CEO Sam Altman is an investor in Humane. Altman and Softbank’s Masayoshi Son are reportedly in talks to design a separate AI hardware product with iPhone designer Jony Ive. Rabbit has raised $30 million in funding, in a round led by the billionaire Vinod Khosla’s venture capital firm, Khosla Ventures. Whoever designs the right hardware form factor, these billionaires’ line of thinking goes, will win big in the AI era.
Rabbit’s r1 is based on a new type of AI system called a “large action model,” Lyu said during his keynote unveiling the device. The problem with large language models, the technology that tools like ChatGPT are based on, he said, is that they struggle to take actions in the real world. Rabbit’s large action model, by contrast, is trained on graphical user interfaces like websites and apps, which means it can navigate interfaces designed for humans and take actions on their behalf. “Things like ChatGPT are extremely good at understanding your intentions, but could be better at taking actions,” Lyu said. “The large language model understands what you say, but the large action model gets things done.”
To give r1 the ability to do things like book vacations, order pizza, and call an Uber, users will need to sign into their various accounts via Rabbit’s web portal. Rabbit’s AI agents (which it calls rabbits), running on an external server rather than on the device itself, will then use those accounts to execute their actions. Rabbit says each user is assigned a “dedicated and isolated” environment on its secure servers, and that it does not store user passwords. “Rabbits will ask for permission and clarification during the execution of any tasks, especially those involving sensitive actions such as payments,” the company says on its website.
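The permission flow Rabbit describes can be illustrated with a short sketch. Rabbit has not published its agent code, so everything below (the `Step` type, the `run_agent` loop, the example task list) is hypothetical; the point is only the shape of the behavior the company describes: an agent works through a sequence of learned interface actions and pauses for explicit user confirmation before any sensitive step, such as a payment.

```python
# Hypothetical sketch of an agent that pauses for confirmation on
# sensitive steps, in the way Rabbit describes. Not real Rabbit code.
from dataclasses import dataclass

@dataclass
class Step:
    description: str
    sensitive: bool = False  # e.g. payments, per Rabbit's stated policy

def run_agent(steps, confirm):
    """Execute steps in order; ask the user before any sensitive one."""
    log = []
    for step in steps:
        if step.sensitive and not confirm(step.description):
            log.append(f"skipped: {step.description}")
            continue
        # In a real system this is where the agent would drive the
        # app or website's interface on the user's behalf.
        log.append(f"done: {step.description}")
    return log

# Example: a pizza order where only the payment needs explicit approval.
steps = [
    Step("open pizza app"),
    Step("add a large margherita to cart"),
    Step("pay $18.50", sensitive=True),
]
print(run_agent(steps, confirm=lambda description: True))
```

In the design the company describes, a loop like this would run not on the device but in the user’s isolated environment on Rabbit’s servers, with the confirmation prompt relayed back to the r1.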
Whether a new piece of hardware is even necessary for users to interact with AI agents is an open question. “Only those who have lost touch with the way consumers use tech believe these products can succeed,” Francisco Jeronimo, a vice president for device data and analytics at the market intelligence firm IDC, wrote on X, referring to both Rabbit and Humane’s new products. “Although the ideas have merit on their own, the reality is that consumers don’t need these kinds of devices, they need intelligent phones!”
Altman has publicly expressed a desire to build increasingly agential capabilities into OpenAI’s own software, which could obviate the need for new AI-first devices. “Eventually, you’ll just ask a computer for what you need, and it will do all of these tasks for you,” Altman said at an OpenAI developer conference in November.
But the trend toward companies empowering AIs to take actions in the real world has left some experts worried. AI devices like the Rabbit r1 have limited levers they can pull to act upon the world, but increasingly powerful agential AIs could pose many risks, according to a paper published in October by the Center for AI Safety. “AI agents can be given goals such as winning games, making profits on the stock market, or driving a car to a destination,” the paper says. “AI agents therefore pose a unique risk: people could build AIs that pursue dangerous goals.” A society that becomes dependent on a complex network of different interacting AI agents would, the paper argues, be vulnerable to problems like inescapable feedback loops or agents’ goals “drifting” in ways that could be harmful to humanity.
Altman suggested in November that safety concerns were a reason OpenAI was only taking small steps toward giving its AI tools the power to take actions in the real world. “We think it’s especially important to move carefully towards this future of agents,” he said. “It’s going to require … a lot of thoughtful consideration by society.”