TECHnalysis Research Blog

June 24, 2014
The Future of UI: Contextual Intelligence

Despite all the tremendous developments in the world of mobile devices, one aspect has been essentially stagnant for quite some time: a user interface based on grids of application icons. Since the 2007 introduction of the iPhone, that visual representation, and variations on it, have been at the heart of virtually all the mobile devices and mobile operating systems we’ve used. Current versions of iOS, Android, Windows, Windows Phone, ChromeOS and even Firefox OS all basically adhere to the app grid format and structure. It’s reached the point where mobile devices seem to be defined and, as I’ll argue shortly, confined by it.

To put it bluntly, it’s time for the icon grid to go.

Now, to be fair, the visual metaphor of the icon grid works on many levels. It’s relatively simple to understand and it served the very useful purpose of driving the creation of an enormous variety of different applications—icons to fill the grid. I’d argue, in fact, that part of the reason “apps” have become such a core part of our experience with mobile devices is due in no small part to the central role they play in the icon grid representation delivered by modern mobile operating systems. The app icons aren’t just the central part of the visual organization of the UI, they are the essential element of the OS and drive the experience of how the device is intended/expected to be used. Given the UI, what else would you do but find and launch apps?

In a world where there are over a million choices of apps/icons to fill many of the grids, however, the metaphor seems woefully inadequate. At a basic level, sorting through even tens of applications can be challenging, let alone hundreds or more. Even more importantly, we’re seeing an increasing emphasis on services that are only modestly tied to applications. While I’m not quite calling for the death of mobile apps, I do believe we are seeing a de-emphasis on them and a move towards services as people look for new means of interacting with their devices.

Through these more service-oriented apps, people are starting to see their devices acting a bit more intelligently. Instead of forcing the device user to initiate all the activities—typically by launching an app—these more service-driven applications start to perform activities on behalf of the user. Apps such as Assistant from Speaktoit, for example, show where these developments are headed.

The problem is, the icon grid metaphor doesn’t really work for these types of services/apps and provides little opportunity for the device to be “intelligent.” Instead, it basically forces you to think about and engage in one specific activity at a time. Moving forward, however, I believe users are going to increasingly expect/demand this type of intelligence and that’s the primary reason why it’s time for a completely different perspective on UI.

Interestingly, and perhaps controversially, I would argue that Microsoft’s recent efforts with Windows Phone 8.1 are starting to move into this new direction. The UI is still primarily icon grid-based, but there are elements of it, including Live Tiles and Siri competitor Cortana’s more proactive assistance to the device user, that start to suggest the future I’m describing.

But there’s still a long way to go. Even simple things like adjusting what applications or services are available at a given time on the home screen of devices is something that a division of the new, hardware-less Nokia just introduced in the form of a smart launcher called Z Launcher (initially available only for Android). It’s a good idea, but there’s so much more that could be done leveraging information that the smartphone already has: location, based on GPS or even WiFi; speed of movement (in a car or plane, for example), based on gyroscope and other common sensors; etc.

More intelligent use of all this data could enable an entirely new type of UI as well as a set of smarter services/experiences that initiate more activities on behalf of the device user. In addition to sensor data, simply logging the activities that a user regularly engages in, then analyzing that (let’s call it “small data analytics”), and applying those simple learnings to changes in the UI could also be part of a UI overhaul.
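The “small data analytics” idea above can also be sketched in a few lines: log each app launch with the hour it happened, then suggest the apps most often used at the current hour. This is an assumption-laden toy, not any vendor’s implementation; the class and app names are invented for illustration.

```python
from collections import Counter, defaultdict

# Hypothetical "small data analytics": record each app launch with the
# hour of day, then suggest the most frequent apps for the current hour.

class UsageLog:
    def __init__(self):
        self.by_hour = defaultdict(Counter)  # hour -> Counter of app names

    def record(self, app, hour):
        self.by_hour[hour][app] += 1

    def suggest(self, hour, n=3):
        """Return up to n apps most frequently launched at this hour."""
        return [app for app, _ in self.by_hour[hour].most_common(n)]

log = UsageLog()
for hour, app in [(8, "news"), (8, "news"), (8, "mail"), (20, "video")]:
    log.record(app, hour)
```

After a few days of logging, `log.suggest(8)` would put the morning news app first—exactly the kind of simple learning that could quietly reshape a home screen.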

All of these things are part of understanding where, when and how the user is engaging with the device—its context, so to speak—and spending time developing more “contextual intelligence” is key to making devices that are already an important part of people’s lives even more essential.

Most of these new intelligent service-like capabilities can and will leverage sensor data in the device. This is one of the reasons why I expect we’ll see new sensors, from altimeters and barometers to pulse oximeters and more, as the key new hardware capabilities built into next generation phones. It’s also the one opportunity that gives sensor-laden wearable devices a chance to survive as intelligent smartphone peripherals. Future OSes should be able to use a phone’s built-in sensors as well as any others made available from a nearby connected device.

We already have specific applications that can leverage some of this sensor-based data, but in order to enable the leap forward I believe is necessary in improving the interaction with a device, these kinds of services need to be embedded throughout the operating system. In addition, the OS developers need to open these kinds of service APIs to others so that they can further enhance the user experience with their own variations, extensions, etc. That’s the future for today’s app developers.
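One way to picture such an OS-level service API: a single registry where the phone’s built-in sensors and a paired wearable’s sensors register the same way, so any app can read either through one interface. Again, this is a speculative sketch with invented names, not a real OS API.

```python
# Speculative sketch: an OS-level sensor service that treats a wearable's
# sensors the same as the phone's built-in ones, behind one API.
# All class, method, and sensor names are hypothetical.

class SensorService:
    def __init__(self):
        self._providers = {}  # sensor name -> callable returning a reading

    def register(self, name, read_fn):
        """Built-in sensors and nearby peripherals register identically."""
        self._providers[name] = read_fn

    def read(self, name):
        """Return the latest reading, or None if no such sensor exists."""
        fn = self._providers.get(name)
        return fn() if fn else None

svc = SensorService()
svc.register("gps_speed", lambda: 1.2)       # phone's own sensor
svc.register("pulse_oximeter", lambda: 97)   # exposed by a paired wearable
```

The design choice worth noting is that third parties could register new providers too—which is exactly the kind of open service API the paragraph above argues OS developers need to offer.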

Location-based services and other types of simple “contextual intelligence” have been talked about and even demonstrated for a while, but now’s the time to take things to the next level and really move our mobile devices into a more proactive, more intelligent future. Can’t wait to see where we end up.
