Smartphones come equipped with several sensors: ambient light, proximity, accelerometer, compass and so on. Independently, these sensors have functions that app developers use in their apps. But what happens when all these sensors are used together? One such concept is Augmented Reality: your location, the phone’s compass and even its orientation working together like magic.
While doing some research I came across a Microsoft Research project aptly called SensoryPhone. The project combines five different components and leverages information from the sensors to offer a more contextual mobile experience. Now one of the major concerns with sensors is that they can’t be left active all the time since they drain the battery, but intelligent computation for the different sensors theoretically lets them collect data continuously without impacting battery life.
As I wrote in another column, Apple is using Geofencing to great effect, and location is part of this SensoryPhone concept too. Microsoft researchers suggest two methods of obtaining location information that are fairly efficient in both battery consumption and accuracy. But location is just one piece. The components are:
- Little Rock—the intelligence that lets engineers keep the sensors always ON while keeping battery consumption in check
- Falcon—the most interesting component and something I believe trumps Apple’s Geofencing implementation in iOS 6
- SpeakerSense—this takes TellMe to the next level, beats Siri and blows your mind, all in one
- A-loc & LEAP—two methods for obtaining location information
Little Rock
Let’s take a look at Little Rock first. According to a separate research project within Microsoft Research by the same folks behind SensoryPhone, Little Rock is an architecture where sensor data is offloaded to a dedicated low-energy processor. Theoretically this means the main phone processor can remain in low-power mode and is woken only when the Little Rock processor calls on its elder brother.
In the research phone, Little Rock included the following sensors:
- Accelerometer
- Compass
- Temperature
- Barometer
- Gyroscope
The first four are digital sensors, while the gyroscope is analog. The paper suggests gyroscopes don’t have a low-power mode, which means they consume quite a bit of power even when idle. The researchers overcame this problem by cutting power to the gyroscope when it isn’t being used. Skipping the graphs and the tech, here’s the paper’s conclusion:
[Little Rock] can result in significant savings. For a pedometer application, the energy savings by running with Little Rock is three orders of magnitude compared to running on the current phone architecture.
The Little Rock architecture gives programmers more flexibility to choose where to allocate their applications, but it also brings challenges on application development. As future work, we will investigate how to provide tools and programming models to simplify software development.
Here’s the Little Rock project page.
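To make the idea concrete, here’s a minimal sketch of the pattern as I understand it. None of these names or numbers come from the paper; they’re placeholders. A pedometer counts steps entirely on the low-power processor and wakes the main CPU only once in a while:

```python
# Hypothetical sketch of the Little Rock idea: a pedometer runs entirely on
# a low-power sensor processor, and the power-hungry main application
# processor is woken only when there is an aggregated result to deliver.
import math

STEP_THRESHOLD = 1.2   # g-force magnitude treated as a step (assumed value)
REPORT_EVERY = 100     # wake the main CPU once per 100 steps, not per sample

def sensor_hub_loop(read_accelerometer, wake_main_cpu):
    """Runs on the dedicated low-power processor, not the main CPU."""
    steps = 0
    above = False
    while True:
        x, y, z = read_accelerometer()           # cheap, stays on the hub
        magnitude = math.sqrt(x*x + y*y + z*z)
        if magnitude > STEP_THRESHOLD and not above:
            above = True
            steps += 1
            if steps % REPORT_EVERY == 0:
                # The only point where the expensive processor gets involved.
                wake_main_cpu({"steps": steps})
        elif magnitude <= STEP_THRESHOLD:
            above = False
```

An analog part like the gyroscope would additionally be power-gated by the hub whenever no app is listening, which is how the researchers worked around its lack of a low-power mode.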
Falcon
Next up is Falcon. Those who have been keeping a close eye on Microsoft will know Falcon has been in the news before. In Falcon’s implementation, the phone starts loading an app based on the phone’s location and the time of day. This means your phone will know which apps you’re likely to use when you’re on a street with several restaurants during your lunch hour. Technically, Falcon is smarter than Geofencing since it predicts and preloads apps even before you tap them.
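Here’s a toy version of what that prediction might look like. This is my own simplification, not Microsoft’s code, and the app names and context buckets are made up:

```python
# Toy Falcon-style prelaunch: score each app by how often it was launched in
# a similar context (location cell + hour of day), then preload the top
# candidates so they feel instant when the user actually taps them.
from collections import Counter

launch_history = Counter()   # (app, location_cell, hour) -> launch count

def record_launch(app, location_cell, hour):
    launch_history[(app, location_cell, hour)] += 1

def apps_to_preload(location_cell, hour, apps, k=2):
    scores = {a: launch_history[(a, location_cell, hour)] for a in apps}
    ranked = sorted(apps, key=lambda a: scores[a], reverse=True)
    return [a for a in ranked[:k] if scores[a] > 0]

# At lunchtime on restaurant row, the review and payment apps might win:
record_launch("yelp", "restaurant_street", 12)
record_launch("yelp", "restaurant_street", 12)
record_launch("wallet", "restaurant_street", 12)
print(apps_to_preload("restaurant_street", 12, ["yelp", "wallet", "email"]))
```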
SpeakerSense
While this sounds technologically pretty cool, I don’t see much advantage in having it on my phone. The idea is to perform real-time voice analysis on the person you are talking to and identify who they are.
Here’s the thing: if the person isn’t in my contact list and I don’t recognize their voice, I don’t think they should take it to heart if I don’t know who they are. Anyhow, real-time speech analysis and recognition on a phone is technologically tough. There are several parameters to consider, and the Microsoft Research team rightly points to two as the most critical:
- Speech analysis
- Battery consumption
The research team was able to increase the efficiency of both by using a separate low-power processor. Based on my reading of the paper, I don’t think this computation is done over a data connection the way TellMe works; the processing seems to happen locally, offline. But as I said, I don’t see the point of this.
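As a rough illustration of that split (hypothetical code, not the paper’s pipeline): the cheap processor could run a simple energy gate over audio frames, and only frames containing speech would wake the main CPU to do the heavier matching against locally stored voiceprints:

```python
# Sketch of a two-stage speaker-identification split (assumed, simplified):
# stage 1 runs continuously on the low-power processor, stage 2 runs on the
# main CPU and only when stage 1 has found speech.

ENERGY_GATE = 0.01  # assumed threshold separating speech from silence

def frame_energy(samples):
    return sum(s * s for s in samples) / len(samples)

def low_power_stage(frames):
    """Runs continuously on the cheap processor; filters out silence."""
    return [f for f in frames if frame_energy(f) > ENERGY_GATE]

def main_cpu_stage(speech_frames, voiceprints, extract_features, distance):
    """Woken only when there is speech; matches against enrolled speakers."""
    features = extract_features(speech_frames)
    return min(voiceprints, key=lambda name: distance(features, voiceprints[name]))
```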
If you’re interested in the research findings, head here.
There is, however, a related research project that might interest several users. Called EmotionSense, it senses and calculates the emotional state of the person on the other end of the line. No more not knowing whether the girlfriend is annoyed, happy or just being normal. As with SpeakerSense, there doesn’t seem to be a continuous Internet connection involved, but there is a Knowledge Base the voice patterns are compared against.
The math and charts can be found here.
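My guess at what that comparison step could look like, with completely made-up prototype numbers: extract a few prosodic features from the call audio and pick the nearest entry in the local Knowledge Base:

```python
# Hypothetical EmotionSense-style comparison: match prosodic features of the
# caller's voice against stored emotion prototypes. All numbers are invented
# for illustration only.
import math

KNOWLEDGE_BASE = {
    # emotion: (mean pitch in Hz, mean energy, speaking rate in syllables/s)
    "neutral": (180.0, 0.30, 4.0),
    "happy":   (230.0, 0.55, 5.0),
    "annoyed": (210.0, 0.70, 5.5),
}

def classify_emotion(pitch, energy, rate):
    def dist(proto):
        p, e, r = proto
        return math.sqrt((pitch - p)**2 + (energy - e)**2 + (rate - r)**2)
    return min(KNOWLEDGE_BASE, key=lambda emo: dist(KNOWLEDGE_BASE[emo]))

print(classify_emotion(pitch=215.0, energy=0.68, rate=5.4))  # -> "annoyed"
```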
LEAP & A-loc
Both these concepts deal with obtaining location information continuously without draining the phone’s battery. Understanding the ideas behind both implementations is best done through the researchers’ own explanations:
LEAP:
The LEAP project introduces new methods for GPS signal processing that leverage additional information available on smartphones as well as cloud offloading to reduce the overall energy cost of using GPS-based location for latency-insensitive location tracking scenarios.
A-loc:
We develop energy efficient methods that intelligently determine when to sense location and which location modality to use depending on expected sensor error and application accuracy requirements, in order to create the appearance that the mobile device always knows its location.
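The A-loc idea lends itself to a small sketch. This is my interpretation, not the project’s actual algorithm, and the error and energy numbers are invented: given an app’s accuracy requirement, pick the cheapest location modality whose expected error still satisfies it.

```python
# A-loc-style modality selection (assumed figures, illustration only):
# modality -> (expected error in meters, energy cost in mJ per fix)
MODALITIES = {
    "cell_tower": (400.0, 1.0),
    "wifi":       (40.0,  5.0),
    "gps":        (8.0,   50.0),
}

def pick_modality(required_accuracy_m):
    """Cheapest sensor whose expected error meets the app's requirement."""
    ok = [(cost, name) for name, (err, cost) in MODALITIES.items()
          if err <= required_accuracy_m]
    if not ok:
        return "gps"  # best available even if it misses the requirement
    return min(ok)[1]

print(pick_modality(500.0))  # coarse "which neighborhood" -> cell_tower
print(pick_modality(10.0))   # turn-by-turn navigation      -> gps
```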
Now that we’ve looked at all the components, here is how the research team sees all of them bundled into a device:
The first question to be asked is: if new processor chips are added, won’t this impact the size of the device? Now, I’m not sure how this will be tackled, but it isn’t that complicated a problem. And this implementation isn’t limited to phones. Given the consensus around tablets being the next personal computers to own, having these sensors in a tablet makes as much sense as in a phone.
Imagine being able to identify the singer of a song using SpeakerSense?! That could even be a drinking game, no?
Microsoft Research seems to have applied for a patent on this idea back in 2010. Some images from the patent show where the researchers see these additional processors fitting in a phone: