While many AI companies fork out big bucks to Nvidia for the chips that power their models, Google takes a different approach. This week, it unveiled its latest generation of custom AI hardware, specifically crafted for what it calls the “agentic era” of artificial intelligence.
| By The Numbers: Alphabet / Google | |
|---|---|
| Stock (GOOGL) | $337.66 (+1.62%) |
| CEO | Sundar Pichai |
| Headquarters | Mountain View, CA |
| Founded | 1998 |
| Sector | Big Tech |
| New TPU chips announced | 2 (one for training, one for inference) |
What Is the “Nvidia Tax”?
To grasp the significance of Google’s chip announcement, it’s crucial to understand how most AI gets built. Training and running AI models demands massive amounts of specialized computing power. Currently, one company, Nvidia, dominates that market. Its H100 and B200 GPUs, descendants of a chip architecture originally built for gaming graphics but now purpose-built for AI, come with steep prices. Nvidia’s impressive gross margins reflect this reality. Essentially, Nvidia has become a vital toll booth on the road to AI.
Most leading AI labs, like OpenAI, Anthropic, and Meta’s research teams, pay that toll. Google, however, avoids most of those costs. Instead, it designs its own chips in-house, known as TPUs (Tensor Processing Units), which are tailored to handle AI workloads. This gives Google a notable cost advantage over competitors who rely heavily on Nvidia’s supply and pricing.
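The economics behind that cost advantage come down to simple amortization: hardware cost spread over the queries a chip serves in its lifetime. The sketch below uses entirely hypothetical numbers (neither Google nor Nvidia discloses these figures) just to show how the arithmetic works.

```python
# Illustrative only: all dollar amounts and query counts below are
# hypothetical assumptions, not actual Google or Nvidia figures.

def cost_per_million_queries(chip_cost_usd, lifetime_queries):
    """Rough per-query hardware cost, amortizing the chip over its lifetime."""
    return chip_cost_usd / lifetime_queries * 1_000_000

# Assume both chips serve the same number of queries over their lifetime,
# but the vendor GPU costs 4x what an in-house chip costs to produce.
LIFETIME_QUERIES = 5_000_000_000  # hypothetical

vendor_gpu = cost_per_million_queries(30_000, LIFETIME_QUERIES)  # bought at market price
in_house   = cost_per_million_queries(7_500, LIFETIME_QUERIES)   # built at cost

print(f"vendor GPU:    ${vendor_gpu:.2f} per million queries")
print(f"in-house chip: ${in_house:.2f} per million queries")
```

At the scale of billions of queries per day, even a few dollars of difference per million queries compounds into the pricing room that lets Google undercut rivals who pay the vendor markup.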
Two Chips, Two Jobs
Google’s recent announcement involves two distinct chips rather than just one, and there’s a clear purpose behind this division. One chip focuses on training — the demanding process of teaching a model by feeding it billions of examples, often taking weeks and costing millions. The other chip is designed for inference — the crucial moment when a trained model answers a question or completes a task, which occurs billions of times daily across Google’s products.
Imagine the difference between a commercial bakery oven and the display case in front. One is made for high-volume production, while the other serves customers quickly. They each have unique requirements, and Google is now creating hardware that recognizes that difference.
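The training/inference split described above can be shown with a toy model. This is a deliberately tiny sketch (a one-weight linear model, nothing like a real neural network) but it captures the shape of the two workloads: training is a long loop that repeatedly adjusts parameters, while inference is one cheap pass that uses them.

```python
# Toy illustration of training vs. inference. Real models have billions
# of parameters and train for weeks; the structure, though, is the same.

def train(examples, epochs=200, lr=0.01):
    """Training: loop over the data many times, nudging the weight each pass."""
    w = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = w * x
            w -= lr * 2 * (pred - y) * x  # gradient step on squared error
    return w

def infer(w, x):
    """Inference: a single forward pass with the frozen weight."""
    return w * x

data = [(1, 2), (2, 4), (3, 6)]  # examples of the relationship y = 2x
w = train(data)                  # the slow, expensive part
print(round(infer(w, 10), 2))    # the fast part, run billions of times: ~20.0
```

The asymmetry is why separate chips make sense: training hardware is tuned for weeks-long, throughput-heavy jobs, while inference hardware is tuned for answering one request quickly and cheaply, over and over.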
The company describes this generation of chips as built for the “agentic era.” This term refers to AI systems that do more than just answer questions. They can take actions on your behalf, like booking a flight, filling out forms, or researching topics across multiple websites. This kind of multi-step interaction demands different capabilities from chips than a simple chatbot response does. Ars Technica has more technical detail on the chip architecture.
Chrome Gets an AI Coworker
The chip announcement coincided with a flurry of product updates showcasing where all that computing power is headed. Google is introducing Gemini-powered “auto browse” features to Chrome for enterprise users. This means the browser can now navigate websites, fill in forms, and conduct research tasks for you. It feels less like a search engine and more like handing off a task to an assistant right inside your browser tab.
This marks a significant shift in how users engage with a tool that has largely worked the same way since Chrome launched in 2008. TechCrunch has a detailed breakdown of the new enterprise Chrome features.
Google Meet Now Takes Notes at In-Person Meetings
On a more practical note, Google has also expanded its AI meeting notetaker beyond Google Meet video calls. Gemini can now generate transcripts and summaries for in-person meetings, as well as calls on Zoom and Microsoft Teams — not just on Google’s platform. This is a smart move toward interoperability, especially as many AI features are often tied to specific ecosystems. The Verge has the full breakdown on the expanded notetaker.
What This Means for Everyday Users
If you use Google products at work, like Chrome, Meet, or Workspace, you’ll soon see AI that takes action rather than just makes suggestions. An AI that can browse ten competitor websites and compile a price comparison is much more useful than one that simply tells you how to do it.
For regular consumers, the chip news is more background information than something you’ll directly experience. But it helps explain why Google can keep its AI features competitive with companies like OpenAI and Anthropic without being as reliant on Nvidia’s supply chain. When Nvidia hikes prices or faces export restrictions, like those recently affecting chips bound for China, Google remains largely insulated. This stability is crucial for ensuring these features stay affordable and accessible in the long run.
The focus on workplace applications is also telling. Google seems to believe that enterprise customers — companies paying per seat for Workspace licenses — are where AI tools currently justify their cost. Consumer features usually follow once enterprise use cases prove effective.
Community Reaction
“The TPU advantage is why Gemini can be so aggressive on pricing. They’re not paying H100 rates to run inference at scale. Everyone else is.”
“Auto browse in Chrome sounds cool until you realize you’re handing your browser history and login sessions to an AI agent. I need to see very clear permissions before I enable that at work.”
What To Watch
- Enterprise rollout timeline: Google’s auto browse for Chrome targets enterprise users first. Keep an eye out for a broader consumer rollout announcement, likely connected to a Workspace or Chrome update in the coming months.
- Nvidia’s response: Nvidia isn’t sitting still. Its next-generation Rubin architecture chips are expected in late 2026. The gap between custom silicon and off-the-shelf GPUs is constantly shifting.
- Agentic AI regulation: As AI agents start taking real actions — like clicking buttons or making purchases — expect more regulatory scrutiny, especially from the EU. Google’s enterprise push puts it at the forefront of this discussion sooner than most.
- Google I/O 2026: Google’s annual developer conference typically happens in May. Anticipate more details on TPU availability, Gemini model updates, and a broader consumer rollout of this week’s announcements.
Maya Torres
Maya Torres is the Consumer Tech Editor at Explosion.com with 7 years covering product launches for major technology publications. She has reviewed over 300 devices across smartphones, laptops, wearables, and smart home products. Maya specializes in translating spec sheets into real-world buying advice and attends CES, MWC, and Apple keynotes as press. Her reviews focus on helping readers decide what to buy, not just what specs look good on paper.



