Saturday, December 31, 2011

Apple Will Make Smartphones Even Smarter

Copyright (C) Unified-View, All Rights Reserved.
December 31, 2011


By Art Rosenberg, The Unified-View

For my last post of the year, I want to highlight something else that Apple is bringing to the Mobile UC table. They have been notably successful in innovating the design of mobile devices (the iPhone and iPad), and it looks like they are converging user interface modalities even further with their latest patent announcement for the “Smart Bezel” and its Multi-Modal Human Interface (MMHI) Engine.

http://www.patentlyapple.com/patently-apple/2011/12/apples-revolutionary-smart-bezel-project-gains-a-new-chapter.html

For as long as I have been writing about the multi-modal benefits of UC for mobile end users, I have been suggesting that contact initiators should be able to dynamically choose their modality of communication independently of what their recipients may want. That will be particularly valuable for all forms of messaging, where both message input and output (retrieval) could be voice or visual. It will also be very useful for “mobile apps,” where input commands can exploit the convenience of voice, while output responses (menu choices, information, graphics, etc.) can exploit the screen. Such flexibility is what UC is all about from the practical end-user perspective, because it makes mobile users not only more accessible but also more efficient and productive with their time.

What is particularly interesting about the Apple approach is that it would simplify and dynamically automate changes in user interface options based upon the individual end user’s environmental situation. This would be particularly important for dark versus bright lighting conditions, which affect both screen readability and battery drain, and would let the device fall back on speech or haptic input/output whenever the screen is the poorer choice.
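
To make that idea concrete, here is a minimal sketch of the kind of decision logic involved. This is purely my own illustration in Swift, with made-up sensor readings and modality names; it is not Apple’s MMHI Engine or any real iOS API:

// Purely illustrative sketch -- not Apple's MMHI Engine or a real iOS API.
// Assumes hypothetical normalized sensor readings supplied by the device.

enum Modality {
    case screenAndTouch   // full visual UI
    case voiceFirst       // speech input/output, minimal screen use
    case hapticAndVoice   // vibration cues plus speech, screen mostly off
}

struct UserEnvironment {
    let ambientLight: Double   // 0.0 (dark) ... 1.0 (direct sunlight)
    let isMoving: Bool         // derived from accelerometer/GPS
    let batteryLevel: Double   // 0.0 ... 1.0
}

func selectModality(for env: UserEnvironment) -> Modality {
    // In a moving vehicle, favor hands-free, eyes-free interaction.
    if env.isMoving {
        return .voiceFirst
    }
    // In direct sunlight the screen needs maximum brightness (and may still
    // be hard to read); the same trade-off applies when the battery is low.
    if env.ambientLight > 0.9 || env.batteryLevel < 0.15 {
        return .hapticAndVoice
    }
    return .screenAndTouch
}

// Example: indoors, stationary, healthy battery -> full visual UI.
let modality = selectModality(for: UserEnvironment(ambientLight: 0.5,
                                                   isMoving: false,
                                                   batteryLevel: 0.9))
print(modality)

The point of the patent, of course, is that choices like these would be made continuously and automatically, rather than by the user digging through settings.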

We have always suggested that a person driving a car will require “hands-free” input and “eyes-free” output to ensure safe driving. (We can always debate the issue of distractions of any kind for safe driving!) Apple’s Multi-Modal Human Interface Engine would be able to detect from the device’s movement that it is in a moving vehicle and could automatically invoke a limited set of interface modality choices. Although there will always be the question of whether the user is actually driving or is a passenger in a car, train, plane, or bus, this kind of sensor detection can still trigger some simple form of “confirmation,” whether from the user or from the vehicle itself.
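
Again as a hypothetical sketch (my own illustration, not anything described in the patent filing), the confirmation step might look something like this:

// Illustrative only -- not Apple's patent logic or a real iOS API.
// Motion sensing suggests a vehicle; a simple confirmation decides whether
// to keep the restrictive hands-free/eyes-free policy or relax it.

struct MotionSample {
    let speedMetersPerSecond: Double   // e.g. fused from GPS/accelerometer
}

enum VehicleState {
    case stationary
    case movingUnconfirmed   // in a vehicle, driver vs. passenger unknown
    case drivingConfirmed
    case passengerConfirmed
}

func classify(_ sample: MotionSample) -> VehicleState {
    // Sustained speed well above walking pace suggests a vehicle.
    return sample.speedMetersPerSecond > 5.0 ? .movingUnconfirmed : .stationary
}

func applyPolicy(for state: VehicleState, userSaysPassenger: Bool? = nil) -> VehicleState {
    switch state {
    case .movingUnconfirmed:
        // Ask the user (or the vehicle itself) to confirm; default to the
        // restrictive driving mode until someone answers "passenger."
        return (userSaysPassenger == true) ? .passengerConfirmed : .drivingConfirmed
    default:
        return state
    }
}

let state = classify(MotionSample(speedMetersPerSecond: 20))
print(applyPolicy(for: state))                          // drivingConfirmed
print(applyPolicy(for: state, userSaysPassenger: true)) // passengerConfirmed

The interesting design question is the default: erring on the side of the restricted, hands-free mode until a passenger explicitly says otherwise.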

While automated media conversion has long been available after the fact through “visual voicemail” and improved speech recognition technology that converts voice messages to text, Apple’s Smart Bezel may dynamically control all forms of input and output modalities at the endpoint device, where the user interface action actually takes place. So, while we have always looked at UC’s ability to let end users use any form of communication exchange, whether with other people or with mobile business applications, we still require those end users to make most of those choices manually. Now, maybe it can be done more intelligently and automatically by multi-modal smartphones that are no longer just “phones” for conversation.

Stay tuned!