TECHnalysis Research president Bob O'Donnell publishes weekly commentary on current tech industry trends in the TECHnalysis Research Insights Newsletter on LinkedIn.com; those blog entries are reposted here and are also reprinted on Techspot and SeekingAlpha.
He also writes a regular column in the Tech section of USAToday.com and those columns are posted here. Some of the USAToday columns are also published on partner sites, such as MSN.
He also writes occasional columns for Forbes that can be found here and that are archived here.
In addition, he has written guest columns in various other publications, including RCR Wireless, Fast Company and engadget. Those columns are reprinted here.
January 30, 2026
By Bob O'Donnell
OK, I’ll admit it; I was a skeptic at first.
After all, some of the early iterations of AI browsers basically took a Chromium-type engine, added a slightly modified UI and replaced the initial home page destination of a search bar with a chatbot prompt. Not exactly revolutionary.
Over time, however, it’s becoming increasingly clear that AI browsers have the potential to do significantly more. In fact, I think they could be the trigger that finally starts to make on-device AI meaningful and impactful to a huge range of consumer and business device users. Beyond that, they could serve as a critical cog in driving distributed, hybrid AI architectures and applications.
What I initially didn’t consider was that AI browsers are much more than just another application—they’re essentially becoming platforms upon which a whole range of other applications and services can run. Admittedly, the idea of a browser as a platform isn’t new, and the concept of websites or collections of HTML pages that function essentially as standalone applications has been around for a long time as well.
What’s different now, however, is that the idea of more distributed applications—where certain elements run in one environment and other elements in another—is coming to the fore. While this isn’t necessarily because of the rise of cloud-based AI-powered applications, there certainly seems to be a very strong correlation. Initially, of course, much of this was because the core LLMs driving AI applications like chatbots were only available in enormous, cloud-based datacenters. The terminal-like interface of chatbot prompts acted as a simple means to interact directly with those large models.
With the development and proliferation of Model Context Protocol (MCP), however, it became possible to treat AI models less like monolithic endpoints and more like interoperable resources that can be accessed across different environments. In other words, MCP enables coordination—allowing an application to dynamically engage multiple models, potentially running in different locations, as part of a single workflow.
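To make that idea concrete, here is a minimal sketch of the pattern described above: models treated as named, addressable resources that one workflow can engage regardless of where each actually runs. The registry, `call` function, and model names are illustrative assumptions for this sketch, not the actual MCP SDK or any vendor's API.

```python
# Illustrative sketch: models as interoperable resources in one workflow.
# The registry, call(), and model names are assumptions, not real MCP APIs.
from typing import Callable

# A registry mapping model names to callables. In practice each entry
# could live anywhere: on device, in an enterprise datacenter, or in the cloud.
REGISTRY: dict[str, Callable[[str], str]] = {
    "on-device/slm": lambda p: f"[slm] {p}",
    "cloud/frontier": lambda p: f"[frontier] {p}",
}

def call(model: str, prompt: str) -> str:
    """Invoke a model by name; the caller doesn't care where it runs."""
    return REGISTRY[model](prompt)

def workflow(question: str) -> str:
    """One workflow dynamically engaging multiple models in sequence."""
    # A small local model does the cheap first pass...
    outline = call("on-device/slm", f"outline: {question}")
    # ...and a frontier model, possibly in the cloud, does the heavy lifting.
    return call("cloud/frontier", f"expand: {outline}")
```

The point of the indirection is that the workflow names *which* model it wants, not *where* it lives, which is what makes mixing local and remote resources in a single request possible.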
This concept becomes even more powerful when applied to mixture-of-experts (MoE) models and the intelligent, real-time “chunking” of requests into smaller tasks that can be routed to specialized models. It also raises a critical architectural question: where should the intelligence that performs this routing actually live?
Placing that decision-making logic close to the user—rather than deep inside a cloud service—creates opportunities to take advantage of local context, available device resources, and even enterprise infrastructure that may be invisible to a purely cloud-based application.
Inherent in this type of architecture is the need for an orchestration engine—something that can determine how to break a problem into smaller workloads and decide where each should be executed. This is the point at which AI browsers begin to look far more consequential than they first appear.
Imagine if that orchestration engine were embedded directly into the browser—an application that already sits at the intersection of user intent, data access, identity, and device resources. Putting it another way, browsers are uniquely positioned to become AI orchestration platforms because they are ubiquitous, frequently updated, already trusted with identity, data, and permissions, and inherently cross-platform.
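A rough sketch of what such a browser-embedded orchestration engine might do, under assumed heuristics (keep private-data work local, run small jobs on the local SLM when an NPU is present, send the rest to a frontier model). Every name here—`Task`, `route`, `orchestrate`, the target labels—is a hypothetical illustration, not any vendor's implementation.

```python
# Hypothetical sketch of a browser-embedded orchestration engine.
# All names and heuristics are illustrative assumptions.
from dataclasses import dataclass

LOCAL, CLOUD = "local-slm", "cloud-frontier"

@dataclass
class Task:
    prompt: str
    needs_private_data: bool = False  # e.g., touches on-device files
    est_tokens: int = 256             # rough size of the job

def route(task: Task, npu_available: bool = True) -> str:
    """Decide where a subtask should run (assumed heuristics):
    private-data work stays local; small jobs use the local SLM
    when an NPU is present; everything else goes to the cloud."""
    if task.needs_private_data:
        return LOCAL
    if npu_available and task.est_tokens <= 512:
        return LOCAL
    return CLOUD

def orchestrate(tasks: list[Task]) -> dict[str, list[str]]:
    """Bucket a query's subtasks by execution target."""
    plan: dict[str, list[str]] = {LOCAL: [], CLOUD: []}
    for t in tasks:
        plan[route(t)].append(t.prompt)
    return plan
```

In a real system the routing logic would be far richer (battery state, model capabilities, enterprise policy), but the value of the sketch is the placement: this decision runs next to the user, where local context is visible.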
To complete the picture, there are two other critical elements to consider. First, all of the AI browsers are being built by companies that have their own frontier AI models and that have created (or are at least working on) smaller versions of these models specifically optimized to run directly on devices. By embedding these device-specific Small Language Models (SLMs) into their AI browsers, they can make the orchestration agent intelligent about which resources are available within their range of frontier models, thereby allowing the most efficient use of their computing resources. In addition, because browsers are updated so frequently, they provide an easy mechanism for these vendors to keep local models current. While operating system vendors will undoubtedly try to claim this orchestration layer for themselves, the browser’s agility and cross-platform consistency make it a more practical execution engine for these tasks.
The second critical element is that modern PCs and smartphones are now much better equipped to run these local models, thanks to more powerful CPUs and GPUs and the newer class of NPUs (Neural Processing Units). Hardware support is critical to making a distributed, hybrid AI architecture possible, and the installed base of these devices is starting to hit critical mass.
So, if you put all these pieces together, you can start to imagine a number of intriguing possibilities with very important implications. First, an orchestration agent running locally on a device could determine, for example, which elements of a query it can answer with its own local models and which need to be sent to other environments. That could deliver a tremendous improvement in computing efficiency and a significant reduction in power consumption. Instead of sending everything to the cloud, workloads could leverage all the compute resources available to them.
Oh, and by the way, the resources to which these workload components could be sent also include enterprise datacenters equipped with AI infrastructure (as well as the cloud). Organizations creating custom applications for their own specific purposes are likely to want to tap into those enterprise AI factory resources for access to their own proprietary or custom fine-tuned models. Plus, as serious questions rise about the power grid’s ability to support all the expected AI traffic, being able to leverage local datacenters and the computing horsepower of users’ own devices is going to make a big difference. The desire for more control over, and guarantees of, not just power but compute resource availability is undoubtedly going to become more important as we move deeper into the AI computing era.
In addition to compute efficiency and power savings, initiating AI workloads on device and being able to tap into local data brings privacy and security benefits and opens up a huge range of opportunities for customization and personalization.
Finally, the real game-changer for AI browsers is their ability to serve as the hub from which AI agents are run and controlled. Everyone, it seems, is expecting agentic AI to completely rewrite the rules on how we perform tasks, get information, and interact with our devices. Exactly how (and where) those interactions will occur hasn't been entirely clear—until now. Just this week, Google announced important extensions to its Chrome browser that not only allow Gemini to run in a sidebar, but to run agentic applications that tap into local resources. This concept of driving agentic AI through the browser hearkens back to my earlier point: AI browsers are becoming the platform upon which more and more of our actual work gets done.
To be clear, there are bound to be several security-related challenges when it comes to running agents on personal devices, and I expect a number of hiccups along the way. In addition, while many of these arguments about AI browsers and agentic applications sound good in theory, transitions like this tend to take longer than many initially expect, so don’t hold your breath. Ultimately, however, leveraging a familiar application like a browser in a new and smarter way seems like the most logical means of integrating AI and agents into our everyday workflows and onto our devices. Like others, I originally thought AI-powered features in office productivity and other creative applications were the ticket to driving on-device AI experiences, but I’m increasingly convinced there is a new path forward, and AI browsers are the way to get there.
Here’s a link to the original column: https://www.linkedin.com/pulse/ai-browsers-could-change-everything-bob-o-donnell-ebw7c
Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on LinkedIn at Bob O’Donnell or on Twitter @bobodtech.