Last week, we talked about how agentic AI is finally getting to work.
AI agents are now starting to plan, reason and carry out digital tasks without constant prompting.
Coders are using them to hunt down bugs and rewrite broken code. Sellers on Amazon are using them to help manage their inventories. And agentic AI is even being used to take on more complex problems.
For example, last month researchers published a paper on HealthFlow, a self-evolving research agent built to tackle medical research challenges.
Instead of waiting for a human prompt at every step, HealthFlow plans its own approach to a research question. It tests different strategies, learns from the results and improves its methods over time.
It’s like a junior researcher who gets smarter with every experiment. And in benchmark tests, HealthFlow beat top AI systems on some of the hardest health data challenges.
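To picture what “self-evolving” means in practice, here is a bare-bones sketch of that plan-run-reflect loop in Python. It is only an illustration under my own assumptions, not HealthFlow’s actual code, and every name in it (llm, run_experiment, score and so on) is a placeholder:

```python
# Conceptual sketch of a self-evolving research loop (placeholder names,
# not HealthFlow's code): plan an experiment, run it, score the result,
# and fold the lesson back into memory so the next plan starts smarter.

def self_evolving_agent(task, llm, run_experiment, score, rounds=5):
    memory = []                                   # accumulated lessons learned
    best_plan, best_score = None, float("-inf")

    for _ in range(rounds):
        # 1. Plan: the model proposes an approach, conditioned on past lessons.
        plan = llm(f"Task: {task}\nLessons so far: {memory}\nPropose a plan.")

        # 2. Execute the plan and measure how well it did.
        result = run_experiment(plan)
        s = score(result)

        # 3. Reflect: record what worked or failed to improve the next attempt.
        memory.append(llm(f"Plan: {plan}\nScore: {s}\nWhat should change next time?"))

        if s > best_score:
            best_plan, best_score = plan, s

    return best_plan, best_score
```

The specific code isn’t the point. The point is that no human has to prompt the loop at each step; the agent keeps iterating on its own.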
But as exciting as that is, these AI agents are still software. They’re trapped inside the digital world.
Or are they?
Robots Are Getting an Upgrade
On September 25, Google’s DeepMind released Gemini Robotics 1.5.
And with this release, agentic AI has become part of the physical world.
Gemini Robotics 1.5 is actually two models that work in tandem. Gemini Robotics ER 1.5 is a reasoning model. It can use tools like Google Search to break big goals into smaller steps and decide what needs to happen next.
Gemini Robotics 1.5 is a vision-language-action (VLA) model. It takes the subgoals from ER 1.5 and translates them into concrete actions like grasping, pointing and manipulating objects.
The combination of the two models is something new in robotics…
A system that thinks before it moves.
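To make that division of labor concrete, here’s a rough Python sketch of how a planner-plus-VLA loop could be wired together. The names are placeholders of mine, not DeepMind’s API; the shape of the loop is what matters:

```python
# Sketch of a "think, then move" control loop (placeholder names, not
# DeepMind's API): a reasoning model breaks the goal into subgoals, and a
# vision-language-action model turns each subgoal into motor commands.

def run_task(goal, planner, vla, robot, max_steps=20):
    for _ in range(max_steps):
        observation = robot.observe()           # current camera view / state
        subgoal = planner(goal, observation)    # e.g. "pick up the dark laundry"
        if subgoal == "DONE":                   # planner decides the goal is met
            break
        actions = vla(subgoal, observation)     # grasp / point / move commands
        robot.execute(actions)                  # act, then re-observe and re-plan
```

Re-planning after every action is what lets the robot adapt when the world doesn’t cooperate, instead of blindly replaying a fixed script.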
DeepMind says these models are designed for multi-step, everyday tasks like sorting laundry, packing for the weather or recycling objects based on local rules.
This kind of adaptability has been the missing piece in robotics for decades.
Factories are full of rigid machines that perform a single action, over and over. But the moment the product changes, the robot has to be reprogrammed from scratch.
What DeepMind is developing is a robot that can generalize and make changes on the fly.
Just as important, they’ve introduced motion transfer, the ability to teach a skill once and share it across different robot bodies.
In one video, they showed a robot arm in the lab learning how to perform specific tasks. Gemini Robotics 1.5 then enabled Apptronik’s humanoid Apollo robot to reuse that knowledge without starting from scratch.
Picture: DeepMind on YouTube
This will allow robots to rapidly scale the kinds of jobs they can do in the real world.
And it’s why DeepMind isn’t alone in these ambitions.
Nvidia has been racing down the same path. At its GTC conference in March, Nvidia’s CEO Jensen Huang showed off something called GR00T that acts like a “brain” for humanoid robots.
It’s a foundation model trained to help them see, understand and move more like people.
A few months later, Nvidia added the “muscle” when it launched Jetson Thor, a powerful computer that sits inside the robot itself. Instead of sending every decision back to the cloud, it lets robots think and act on the spot in real time.
Together, GR00T and Jetson Thor give robots both the intelligence and the reflexes they’ve been missing.
Amazon has also been moving in this direction. Last year, the company began testing Digit, a humanoid robot from Agility Robotics, inside its warehouses.

Picture: Agility Robotics
The trials have been limited, but Amazon’s goal is clear. A fleet of humanoid robots wouldn’t just never tire, they would never unionize.
Then there’s Covariant, a startup that launched its own robotics foundation model, RFM-1, earlier this year.
Covariant’s robots can follow natural language instructions, learn new tasks on the fly and even ask for clarification when they’re not sure what to do. In other words, RFM-1 gives robots human-like reasoning capabilities.
That’s a huge leap from the mindless machines we’ve been used to.
Sanctuary AI is building robots equipped with tactile sensors. Its goal is to make machines that can feel what they’re touching.
It’s an ability humans take for granted, but it’s one robots have always struggled with. Combine touch with reasoning and you can see how robots could soon handle the kind of unpredictable, delicate tasks that fill our daily lives.
But what do all these advances in robotics add up to?
Nothing less than what I’ve been pounding the table about for years.
The line between software and hardware is blurring as the digital intelligence of AI agents is fused with the physical capabilities of robots.
Once that line disappears, the opportunities are endless…
And the market potential is staggering.
Goldman Sachs projects the humanoid robot market alone could reach $38 billion by 2035.

Meanwhile, the global robotics industry is projected to hit $375 billion within a decade, more than 5X its size today.

Here’s My Take
As always, there are reasons to temper optimism with caution.
After all, real-world environments aren’t the same as digital environments. Lighting changes, objects overlap and things break.
Dexterity and agility are still challenges for robots, and safety is non-negotiable. A careless robot could injure someone.
What’s more, the costs of building and maintaining these systems remain high.
But if history tells us anything, it’s that breakthroughs rarely arrive fully polished.
I’m sure you remember the slow, unreliable dial-up internet of the 1990s. But that didn’t stop it from becoming the backbone of the global economy.
I believe that’s where we are with the convergence of agentic AI and robotics today…
But I expect things will move much faster from here.
Going forward, we’re going to start dealing with machines that can think and act in the same world we live in.
And the disruption that follows has the potential to dwarf anything we’ve seen so far.
Regards,

Ian King
Chief Strategist, Banyan Hill Publishing
Editor’s Note: We’d love to hear from you!
If you want to share your thoughts or suggestions about the Daily Disruptor, or if there are any specific topics you’d like us to cover, just send an email to [email protected].
Don’t worry, we won’t reveal your full name in the event we publish a response. So feel free to comment away!