AI Models Control Physical Robots: Anthropic's New Experiment

Published
November 13, 2025
Category
Emerging Technologies
Word Count
237 words
Full Transcript

Anthropic's recent experiment showcases the potential of AI models to control physical robots, specifically a robot dog known as the Unitree Go2. In a study referred to as Project Fetch, researchers were divided into two groups: one used Anthropic's Claude coding model, while the other programmed without AI assistance.

The AI-assisted group completed several tasks more efficiently, including programming the robot to autonomously navigate and retrieve a beach ball, a feat the human-only group could not achieve. This experiment highlights the agentic coding abilities of modern AI models and suggests a future where these systems could interact more broadly with the physical world.

Observations from the study noted that the group working without Claude expressed more negative sentiment and confusion, suggesting that Claude made it easier to interface with the robot and to establish connectivity more quickly.

The Go2, priced at $16,900, is typically deployed in industrial settings for tasks such as remote inspections and security patrols. While current AI models, according to Logan Graham of Anthropic, are not advanced enough to fully control robots independently, the research aims to prepare for a time when AI models might self-embody and operate physical systems.

This development reinforces the importance of responsible AI practices as the industry moves towards more sophisticated automation and AI integration. Ultimately, as these systems evolve, they may redefine how tasks in industries like manufacturing and construction are approached, signaling a significant shift in robotics.
