The implication—fueled by recent demonstrations of humanoid robots putting away dishes or assembling cars—is that mimicking human limbs with single-purpose robot arms is the old way of automation. The new way is to replicate how humans think, learn, and adapt as they work. The problem is that the lack of transparency about the human labor involved in training and operating such robots leaves the public both misunderstanding what robots can actually do and failing to see the strange new kinds of work forming around them.
Consider how, in the AI era, robots often learn from humans who demonstrate how to do a chore. Creating this data at scale is now leading to –esque scenarios. A worker in Shanghai, for instance, recently spent a week wearing a virtual-reality headset and an exoskeleton while opening and closing the door of a microwave hundreds of times a day to train the robot next to him, according to one report. In North America, the robotics company Figure appears to be planning something similar: It announced in September that it would partner with the investment firm Brookfield, which manages 100,000 residential units, to capture “massive amounts” of real-world data “across a variety of household environments.” (Figure did not respond to questions about this effort.)
Just as our words became training data for large language models, our movements are now poised to follow the same path. Except this future might leave humans with an even worse deal, and it’s already starting. The roboticist Aaron Prather told me about recent work with a delivery company that had its employees wear movement-tracking sensors as they moved boxes; the data collected will be used to train robots. The effort to build humanoids will likely require manual laborers to act as data collectors at massive scale. “It’s going to be weird,” Prather says. “No doubt about it.”
Or consider tele-operation. Though the endgame in robotics is a machine that can complete a task on its own, robotics firms employ people to operate their robots remotely. Neo, a $20,000 humanoid robot from the startup 1X, is set to ship to homes this year, but the company’s founder, Bernt Øivind Børnich, told me recently that he’s not committed to any prescribed level of autonomy. If a robot gets stuck, or if the customer wants it to do a difficult task, a tele-operator from the company’s headquarters in Palo Alto, California, will pilot it, seeing through its cameras to iron clothes or unload the dishwasher.
This isn’t inherently harmful—1X gets customer consent before switching into tele-operation mode—but privacy as we know it will not exist in a world where tele-operators are doing chores in your home through a robot. And if home humanoids aren’t genuinely autonomous, the arrangement is better understood as a form of wage arbitrage that re-creates the dynamics of gig work while, for the first time, allowing physical tasks to be performed wherever labor is cheapest.
We’ve been down similar roads before. Carrying out “AI-driven” content moderation on social-media platforms or assembling training data for AI firms often requires workers in low-wage countries to view disturbing content. And despite claims that AI will soon train on its own outputs and learn by itself, even the best models require a great deal of human feedback to work as desired.
These human workforces don’t mean that AI is just vaporware. But as long as they remain invisible, the public will consistently overestimate the machines’ actual capabilities.
That’s great for investors and hype, but it has consequences for everyone. When Tesla marketed its driver-assistance software as “Autopilot,” for instance, it inflated public expectations about what the system could safely do—a distortion a Miami jury recently found contributed to a crash that killed a 22-year-old woman. (Tesla was ordered to pay $240 million in damages.)
The same will be true for humanoid robots. If Huang is right, and physical AI is coming for our workplaces, homes, and public spaces, then the way we describe and scrutinize such technology matters. Yet robotics firms remain as opaque about training and tele-operation as AI firms are about their training data. If that doesn’t change, we risk mistaking concealed human labor for machine intelligence—and seeing far more autonomy than truly exists.
