Man Accidentally Gains Control of 7,000 Robot Vacuums
A guy buys a smart vacuum. Instead of using the normal mobile app, he decides to use Claude to reverse engineer the API so he can drive it with a game controller. Claude helps him figure out how the API works. While doing that, he pulls his own authentication token off the server so he can authorize his vacuum.
Except that token was not unique to his device: It was just a valid token, and any valid token worked on any unit.
He suddenly realized his device was one in a sea of devices. His laptop started cataloging thousands of smart vacuums phoning home. Each device reporting every few seconds over MQTT. Serial numbers. Which room they were cleaning. Battery life. Status updates.
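The telemetry described above might look something like this. A minimal sketch, assuming a hypothetical JSON payload shape — the field names here are illustrative, not the vendor's actual schema:

```python
import json

# Hypothetical example of the kind of status message a vacuum might
# publish over MQTT every few seconds. Field names are illustrative.
raw = json.dumps({
    "serial": "VAC-0042-XYZ",
    "room": "kitchen",
    "battery_pct": 87,
    "status": "cleaning",
})

def parse_status(payload: str) -> dict:
    """Decode one telemetry message into a flat record."""
    msg = json.loads(payload)
    return {
        "serial": msg["serial"],
        "room": msg.get("room", "unknown"),
        "battery": msg.get("battery_pct"),
        "status": msg.get("status", "idle"),
    }

print(parse_status(raw))
```

Multiply one record like this by thousands of serial numbers reporting every few seconds, and you have a live fleet inventory sitting in one backend.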
At one point he had visibility into roughly 7,000 devices. Add in related power stations and you are talking tens of thousands of devices tied to the same backend infrastructure. That’s wild.
He was able to pull live camera feeds and full floor plans, and could see which room a vacuum was in and what its battery percentage was. He demonstrated waving at the camera in his own home while watching the feed remotely.
He didn’t hack their servers or brute force anything. He extracted his own token, and the backend did the rest.
The Real Problem Is the Cloud
Let’s separate what is normal from what is insane: It’s not surprising that a robot vacuum with a mobile app phones home. If you want to see your vacuum’s location, battery life, and cleaning map from your phone, some metadata has to go back to a cloud server. That part makes sense. What I don’t understand is the video feed.
Why does the live camera feed need to route through the cloud in a way that any valid token can access it? I understand a vacuum having a camera to avoid bumping into things. But why does that camera data need to live in a central backend where it can be queried broadly?
Even if you fix the token issue, there is still a question: Who at the company has access to that cloud infrastructure? Are employees able to view camera feeds? What logging and access controls exist internally?
The company described this as a backend permission validation issue. That is corporate speak for an access control failure. The first patch did not fully resolve it and had not been rolled out universally; a second patch was required to restart the remaining services and close it out.
Nearly all identified activity was linked to independent researchers. Nearly all.
This is the part that makes me uneasy. According to the statement, the vulnerability was identified through internal review. That sounds nice. It also sounds like it might have been prompted by external pressure.
The IoT Pattern We Keep Repeating
This isn’t a story about one vacuum, but about IoT architecture decisions.
We keep building devices that collect data. We keep routing that data through centralized cloud services. We keep layering convenience features on top of them. And then we act surprised when weak authorization models turn those devices into surveillance tools.
The token was not tied to device ownership. That is the core mistake: authentication without proper authorization. A valid credential equals global access.
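The difference is easy to state in code. This is a minimal sketch of the failure mode, not the vendor's actual backend — all names and data structures here are hypothetical. The broken check only asks "is this token valid?"; the fixed check also asks "does this token's account own this device?":

```python
# Hypothetical sketch of the bug: authentication without authorization.
# Tokens and device IDs below are invented for illustration.

VALID_TOKENS = {"tok-alice": "alice", "tok-bob": "bob"}   # token -> account
DEVICE_OWNERS = {"vac-001": "alice", "vac-002": "bob"}    # device -> owner

def broken_can_access(token: str, device_id: str) -> bool:
    # Only checks that the token is valid. Any valid token can
    # reach any device: this is the reported failure mode.
    return token in VALID_TOKENS

def fixed_can_access(token: str, device_id: str) -> bool:
    # Also enforces the ownership boundary.
    user = VALID_TOKENS.get(token)
    return user is not None and DEVICE_OWNERS.get(device_id) == user

# Alice's token against Bob's vacuum:
print(broken_can_access("tok-alice", "vac-002"))  # True: global access
print(fixed_can_access("tok-alice", "vac-002"))   # False: ownership enforced
```

The fix is one extra lookup per request, which is why "backend permission validation issue" undersells how basic the missing check was.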
It’s the same pattern we’ve seen across IoT for years. Ship fast, add cloud analytics, remote-control features and camera feeds. Worry about access control later.
People joke that the S in IoT stands for security. There is a reason that joke persists.
What stands out here is how trivial the path was. No advanced exploitation, no zero day, no server compromise. Just extracting a token and observing that backend validation did not enforce ownership boundaries.
If you are going to stream video feeds back to the cloud, you need to assume someone will eventually test the boundaries. And when they do, your access controls need to hold, because the alternative is thousands of cameras quietly reporting for duty.