Google's AI Lifts Robot Inspection Accuracy to 98 Percent on Gauges and Instruments
Zero Signal Staff
Published April 15, 2026 at 6:16 PM ET

Google DeepMind announced a new AI model on April 14 that enables Boston Dynamics' Spot robot to read analog gauges, thermometers, and other industrial instruments with 98 percent accuracy. The Gemini Robotics-ER 1.6 model combines visual reasoning with code execution to help robots perform complex inspections in factories and warehouses.
The Gemini Robotics-ER 1.6 model represents a major leap in robotic perception. The previous version, Gemini Robotics-ER 1.5, achieved only 23 percent accuracy on instrument-reading tasks. The new model's "agentic vision" capability—which processes complex visual elements like needles, liquid levels, tick marks, and text—drove the jump to 98 percent accuracy when combined with visual scratchpad technology.
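Google has not published the code the model generates during these inspections, but the basic idea of pairing visual reasoning with code execution can be illustrated with a simple, hypothetical sketch: once the model has visually located the needle and the dial's endpoints, a short generated program can turn that geometry into a numeric reading. The function and parameter names below are illustrative assumptions, not part of DeepMind's announced pipeline.

# Illustrative sketch only; not DeepMind's actual pipeline.
# Converts a detected needle angle and dial range into a gauge reading
# by linear interpolation between the dial's minimum and maximum marks.
def read_gauge(needle_angle_deg, min_angle_deg, max_angle_deg, min_value, max_value):
    span = max_angle_deg - min_angle_deg
    fraction = (needle_angle_deg - min_angle_deg) / span
    fraction = max(0.0, min(1.0, fraction))  # clamp to the dial's physical range
    return min_value + fraction * (max_value - min_value)

# Example: a pressure gauge whose dial sweeps from 45 to 315 degrees and reads 0-10 bar.
print(read_gauge(180, 45, 315, 0.0, 10.0))  # prints 5.0

In practice the hard part is the perception step that produces those angles reliably across needles, liquid columns, tick marks, and printed text, which is the gap the reported jump from 23 to 98 percent accuracy is meant to close.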
Even without agentic vision enabled, the baseline model still reaches 86 percent accuracy on instrument reading. Google DeepMind demonstrated the model's improved reasoning by showing how it correctly identified and counted hammers, scissors, paintbrushes, pliers, and gardening tools in a cluttered image—tasks where the older 1.5 model produced hallucinations, including falsely identifying a wheelbarrow that wasn't present.
Boston Dynamics has been testing Spot across industrial facilities owned by its parent company, Hyundai Motor Group, including automotive factories. The robot's inspection duties require distinguishing between complex visual elements across multiple instruments and camera feeds simultaneously. Google DeepMind also emphasized that Gemini Robotics-ER 1.6 includes improved safety constraints, allowing robots to follow physical safety instructions and assess injury risks to humans more accurately than previous versions.
Context
The jump from 23 percent to 98 percent accuracy marks a significant threshold for industrial robotics. For comparison, Google's baseline Gemini 3.0 Flash model, released in January 2026, achieved only 67 percent accuracy on the same instrument-reading tasks before the robotics-specific optimization. Robots have historically excelled at highly repetitive, choreographed tasks in controlled environments like assembly lines, but have struggled in complex, unstructured spaces where visual interpretation matters.
Boston Dynamics has positioned Spot as a general-purpose inspector capable of roaming through varied industrial environments. The company has tested both quadruped robots like Spot and humanoid models across multiple facility types. This new AI capability directly addresses a core limitation: robots could move through spaces but lacked the perceptual sophistication to reliably interpret what they were seeing.
What's Next
The practical test of this model's value depends on how quickly robotics companies and researchers gain hands-on access and deploy it in real facilities. Google is betting that improved AI perception will enable robots to operate as "free-range workers" in less controlled environments rather than remaining confined to highly specialized, repetitive roles. However, the company also acknowledges the risk: robots operating with greater autonomy in complex spaces carry higher stakes if errors occur, particularly around human safety. Boston Dynamics' ongoing trials at Hyundai facilities will likely provide the first real-world data on whether the accuracy gains translate to reliable industrial deployment.
