The $20,000 Robot That Isn't
NEO's home robot, the data land grab, and a better way to build the future
A Strange Utopia
I do hope that, in not so long, just like we take energy for granted around us, we will be able to take labor around us for granted. (CEO of 1X Technologies)
What if I told you that for just $20,000 you could have a sweater-wearing stranger snoop around your house and take care of all your chores way worse than if you did them yourself?
If that’s too steep, you can make the modest commitment of $499 per month. Are you ready to put down your deposit yet? If so, get ready for NEO, the humanoid home robot from the Norwegian-American startup 1X Technologies.
They promise an abundance of labor that will one day make work as effortless as flipping a light switch. (You’ll have to ignore the effort and catastrophic climate impact of our energy system for the pitch to work on you).
Founder Bernt Børnich unveiled this vision publicly earlier this year when he told TED audiences that robots like NEO will make labor “as effortlessly accessible as energy is today,” and they’ll “redefine what it means to be human.”
But the real kicker for me was when he said “I do hope that, in not so long, just like we take energy for granted around us, we will be able to take labor around us for granted.”
Bruh. That’s called capitalism, and our current model of that system is doing a very fine job of taking advantage of labor. No venture-backed fake robot innovation needed.
Reality Check: Robot ‘Slop’ and Human Operators
the problem isn’t deception so much as premature celebration
In Joanna Stern’s Wall Street Journal review, NEO took over a minute to fetch a water bottle from 10 feet away and five minutes to load three dishes into a dishwasher. To be fair, I have moved that slowly when severely hungover in my early 20s, but I didn’t charge my roommates $500 a month for my services.
Marques Brownlee’s thumbnail says about all you need to know.
But if you watch the video, you’ll find that every single task in the demos was remotely controlled by a human operator wearing a VR headset.
To their credit, 1X doesn’t hide this. When a human takes over, NEO’s head lights up to show it’s being “tele-operated,” i.e., remotely controlled by a person. Setting aside, for a moment, that such an indicator could be overridden, the problem isn’t deception so much as premature celebration. (For more on the deception involved in AI systems that are really powered by humans, see my conversation with Milagros Miceli about how often we’re actually talking to people.)
This is still a distributed lab experiment being sold like a finished product. If you’re running human-in-the-loop robotics trials in people’s living rooms, call it what it is: a paid field study, not the future of domestic life.
The company calls its messy public development “robotics slop,” claiming at the same time that this imperfection is “incredibly useful.” I’m excited for the market to give them the feedback they so justly deserve.
The Data Imperative
If we don’t have your data, we can’t make the product better. (CEO of 1X Technologies)
With any technology promise, but especially with AI promises, it’s worth channeling our inner toddler (whom we would never ask to wash our dishes) and asking… why? Why is this company prematurely launching this not-quite-product?
In the TED talk, the CEO revealed a crucial truth: factory environments are too predictable to train robots to be intelligent. In those controlled settings, they “stop learning.” Homes, on the other hand, are messy, chaotic, and diverse, which makes them perfect training grounds. As Børnich put it to Stern in the WSJ video, “If we don’t have your data, we can’t make the product better.” Hearing that from a tech CEO is refreshingly honest.
All these AI companies are desperate for data. It’s why OpenAI launched a browser, so they can see everything we see as we browse the web. It’s why Friend.com made that microphone necklace to record every sound in your life. And it’s why 1X wants to put a camera- and sensor-covered robot inside your home. Surveillance, meet Capitalism.
Neuroscientist Jeff Hawkins, a previous Life With Machines guest, argues that today’s AIs are basically “fancy calculators” and that to achieve true intelligence, synthetic minds, like ours, need sensory-motor understanding: the ability to build an internal map of the world by physically moving through it. I think NEO is a giant bet that dumping robots into our homes will help them close that gap.
Your clutter is the curriculum and your routine is the dataset. All that sensor data going into NEO is going to help 1X create something that could actually be valuable someday. This is reminiscent of how Tesla continually promised and charged people (myself included) for the self-driving car that wasn’t, all while leveraging the data from millions of cars to train its AI systems.
I just wish 1X and others were as honest about the financials as they are about their need for data. The flow of money is backward: they should be paying people for this testing, the way researchers pay participants in a field study. And they should be offering far more rigorous privacy protections.
While NEO has indicator lights that activate when it’s under human control, every camera, microphone, and sensor inside a home still expands the attack surface for abuse. People who purchase or rent this thing need 1X to have not just good values but also perfect cybersecurity.
The Ethics of Emotional Connection
1X markets NEO not just as a machine that does chores, but as a “companion” and “part of the family.” In my interview with robotics scholar Kate Darling, she reminded me that humans form attachments to machines almost by reflex.
We name our Roombas and feel guilty unplugging them. Corporations know that. Sony’s Aibo robot dog once required a subscription to keep your pet alive. NEO’s companion framing and ongoing fees risk the same kind of emotional exploitation.
And I don’t really have to spend lots of time getting into the detailed risks of forming bonds with synthetic systems, especially where kids are involved. Just putting an LLM into a human body form so it can sit on a couch and reinforce all your crazy ideas could absolutely never go wrong.
But that “companion” mode is an irresistible goal for many tech leaders now. What starts as a butler will quickly morph into a caretaker, confidant, and more.
What a Better Future Would Look Like
There’s a better way to build the future, and it’s available to 1X and anyone else who wants to actually earn our trust. Here are a few examples, organized around some key principles we should keep in mind.
People should own the data they produce and be able to delete it, move it, and put it toward collective benefit, with collective oversight.
Examples:
The Workers’ Algorithm Observatory. It recently merged with Driver’s Seat Cooperative and focuses on returning agency to gig workers by letting them see into the black-box algorithms that shape their lives and livelihoods.
Salus Coop. Part of GovLab, this European and Central Asian project lets people contribute their health data and control its use in research.
The Open Data Institute. It works to make sure data collection benefits the collective.
A humanoid robot in your house should stay in your house, including as much of the data processing as possible.
This is where Apple tries to excel with systems like HomeKit and Private Cloud Compute.
Even more advanced is Home Assistant, an open-source, local-first smart home infrastructure that I’m going to be playing with for my own home. I just don’t, for some strange reason, want to give Amazon and Jeff Bezos constant access to my home life.
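To make “the data stays in your house” concrete, here’s a minimal sketch of the local-first pattern I’m describing. Everything in it is hypothetical: the names are invented for illustration, and this is not 1X’s or Home Assistant’s actual code. The idea is simply that raw camera and microphone data gets handled and thrown away on the device itself, and the only thing that could ever leave the home, with explicit consent, is a coarse, human-readable summary.

```python
# Hypothetical sketch only: the classes and event names below are invented
# for illustration; they are not 1X's or Home Assistant's actual APIs.
from dataclasses import dataclass
from datetime import datetime
from typing import Dict


@dataclass
class SensorEvent:
    kind: str        # e.g. "camera_frame", "microphone", "lidar"
    payload: bytes   # raw data; in this design it never leaves the device
    timestamp: datetime


class LocalFirstRecorder:
    """Processes sensor events on-device and keeps only coarse counts."""

    def __init__(self) -> None:
        self._counts: Dict[str, int] = {}  # event kind -> how many we saw

    def handle(self, event: SensorEvent) -> None:
        # Run whatever local inference is needed, then let the raw payload go.
        # Nothing is persisted or transmitted off the device.
        self._counts[event.kind] = self._counts.get(event.kind, 0) + 1

    def export_summary(self) -> Dict[str, int]:
        # The only thing that could ever be shared (with explicit consent):
        # how many events of each kind occurred, and nothing else.
        return dict(self._counts)


if __name__ == "__main__":
    recorder = LocalFirstRecorder()
    recorder.handle(SensorEvent("camera_frame", b"\x00" * 1024, datetime.now()))
    recorder.handle(SensorEvent("microphone", b"\x00" * 256, datetime.now()))
    print(recorder.export_summary())  # {'camera_frame': 1, 'microphone': 1}
```

That’s the whole design choice: whatever the robot sees stays on the robot, and anything that gets shared is something a person can read and approve.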
And a few other principles:
Companies should always disclose when a human is actually operating an allegedly automated system. 1X got this part right.
Companies should be transparent about all data collected and transmitted.
We really must insist on a kill or shutdown switch. There are too many examples of agentic systems not shutting down when asked, and of course there’s M3GAN.
I’m not against home robots. I’m against charging people to help you finish making your product. I’m against hand-waving about becoming more human when the real pursuit is just more AI for the sake of AI.
If households are training high-risk, high-value systems, they should get paid. People should own their data, consent to its use, and have the right to delete it. Otherwise, we’re just offering up our homes and our humanity to someone else.
Would you let a humanoid robot into your home?
What else would you want to see in the future of home robotics that would make you more comfortable with the idea?
Thanks to Associate Producer Layne Deyling Cherland for editorial and production support and to my executive assistant Mae Abellanosa.




