How contextual apps can finally make smart devices smart

[Update: This article was published in VentureBeat on 4/19/15]

We all have smart products that go awry. Up until a few months ago, for example, Netflix didn’t allow the creation of different profiles, so the recommendations in my house were a strange collection based on the amalgamated viewing tastes of my wife, my two teenage daughters, my 11-year-old son, and me.

This becomes particularly annoying as apps jump into the physical world and control devices. We all have experience with devices whose supposedly smart behavior turns out to be inappropriate at the time – and we end up shutting down the “intelligent” features to use them manually (or the devices end up at the bottom of a drawer).

The issue is that smart products today jump to conclusions too easily: they go from sensing to acting in one step. In the rush to get products out to market, many vendors trivialize the complexities of the real world and create products that use very basic reasoning. Most of the time, they don’t even let users participate actively in the decisions being made.

One step to address this is to decouple the sensing-acting loop, and insert a thinking component into the process. Just like in nature: We become smarter when our stimuli are not processed immediately by our reflexes but are first evaluated by our prefrontal cortex.

This thinking component needs to ensure the app really understands the context it is in, as well as the appropriateness of taking an action, before taking it. Because this implies that apps are aware of the context that surrounds them, we call these “contextual applications.”

Here are a few of the ways in which contextual applications are different:

1. Driven by contextual changes

The first and probably most fundamental difference is that these apps are driven by contextual changes, not users. User input is, of course, an essential component of context: “the user is pressing a button” is clearly a signal to consider! But it is only part of the picture: a change in a sensor reading, the passing of time, or more abstract signals such as the presence of another device or the absence of an expected event can all drive the app.
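
As a minimal sketch (the event kinds and handler names here are illustrative, not from any particular framework), a contextual app can be modeled as a dispatcher where a button press, a sensor change, and a timer tick are all just context events of equal standing:

```python
from dataclasses import dataclass, field

@dataclass
class ContextEvent:
    kind: str                                  # e.g. "button_press", "sensor_change", "timer"
    payload: dict = field(default_factory=dict)

class ContextualApp:
    """Reacts to contextual changes, not only to direct user input."""
    def __init__(self):
        self.handlers = {}                     # event kind -> list of handlers

    def on(self, kind, handler):
        self.handlers.setdefault(kind, []).append(handler)

    def dispatch(self, event):
        for handler in self.handlers.get(event.kind, []):
            handler(event)

app = ContextualApp()
log = []
# A sensor change and a user action are handled through the same mechanism.
app.on("sensor_change", lambda e: log.append(f"temp={e.payload['value']}"))
app.on("button_press", lambda e: log.append("user pressed a button"))

app.dispatch(ContextEvent("sensor_change", {"value": 21.5}))
app.dispatch(ContextEvent("button_press"))
```

The point of the structure is that user input has no privileged path: it enters the same loop as every other contextual signal.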

2. Are always on

Because context changes continually, contextual apps need to be always on and monitoring. They are like a service layer, always running in the background. However, they shouldn’t interact with you or anyone else unless they have something relevant to say, in which case they should be proactive about it. Actually, a key objective for them should be to reduce the number of times the user opens them in a day. This stands in stark contrast to the majority of apps today, which work hard to increase the time you spend engaging with them.

3. Leverage dynamic data streams

The more data your app has, the better understanding it can have of the world around it. Geolocation, proximity, battery level, speed, state of other devices, and data from public and private systems are all useful sources. But because of the massive amounts of data this could represent, a key element is the ability to dynamically turn on and off different data streams and regulate the resolution at which they work.
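One way to picture this (a sketch with hypothetical stream names, not a real sensor API) is a manager that can switch individual streams on and off and retune their sampling resolution as the situation changes:

```python
class DataStream:
    """A data source that can be enabled, disabled, and re-sampled on demand."""
    def __init__(self, name, interval_s):
        self.name = name
        self.interval_s = interval_s   # sampling resolution, in seconds
        self.enabled = False

class StreamManager:
    def __init__(self):
        self.streams = {}

    def register(self, stream):
        self.streams[stream.name] = stream

    def enable(self, name, interval_s=None):
        stream = self.streams[name]
        stream.enabled = True
        if interval_s is not None:
            stream.interval_s = interval_s     # raise or lower the resolution

    def disable(self, name):
        self.streams[name].enabled = False

mgr = StreamManager()
mgr.register(DataStream("gps", interval_s=60))
mgr.register(DataStream("battery", interval_s=300))

# Context shifted: the user started driving, so location needs to be
# fine-grained, while the battery stream can be shut off entirely.
mgr.enable("gps", interval_s=5)
mgr.disable("battery")
```

Throttling resolution this way is what keeps an always-on app from drowning in (and paying the battery cost of) data it doesn't currently need.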

4. Create meaning through abstraction

To make interactions and information relevant, we need apps to function at a higher conceptual level. For example: “a device enters the location 30.534° N, 97.231° W” is not very meaningful. However, a notification saying that my son arrived safely home from school is very relevant indeed. We need to think of concepts and information as building blocks that stack on each other, reaching higher levels of abstraction.

As contextual information becomes dissociated from the devices that generate it, it can transform into abstract data elements. This is important because as new data sources become available, they can be incorporated into compatible concepts. For example, I can put a beacon in my car and define it as a “place.” Note that even if the beacon doesn’t contain geolocation data and is actually moving, it can define a place abstraction just as well as a geofence.
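The geofence-versus-beacon idea can be sketched as a shared abstraction (the class names and the distance math are illustrative assumptions): both a fixed geofence and a moving beacon answer the same question, "am I at this place?", so everything built on top of the `Place` concept works with either.

```python
from abc import ABC, abstractmethod
import math

class Place(ABC):
    """Abstract 'place' concept: any source that can answer 'am I here?'."""
    @abstractmethod
    def contains(self, reading: dict) -> bool: ...

class Geofence(Place):
    def __init__(self, lat, lon, radius_m):
        self.lat, self.lon, self.radius_m = lat, lon, radius_m

    def contains(self, reading):
        # Rough equirectangular distance; adequate at neighborhood scale.
        dlat = (reading["lat"] - self.lat) * 111_320
        dlon = (reading["lon"] - self.lon) * 111_320 * math.cos(math.radians(self.lat))
        return math.hypot(dlat, dlon) <= self.radius_m

class BeaconPlace(Place):
    """A place defined by proximity to a beacon. No geolocation involved,
    so it keeps working even when the 'place' (e.g. a car) is moving."""
    def __init__(self, beacon_id, max_distance_m=5):
        self.beacon_id = beacon_id
        self.max_distance_m = max_distance_m

    def contains(self, reading):
        return (reading.get("beacon_id") == self.beacon_id
                and reading.get("distance_m", float("inf")) <= self.max_distance_m)

home = Geofence(30.534, -97.231, radius_m=100)
car = BeaconPlace("beacon-42")

at_home = home.contains({"lat": 30.5341, "lon": -97.2310})
in_car = car.contains({"beacon_id": "beacon-42", "distance_m": 2.0})
```

Because the higher layers depend only on `Place`, a new data source (a beacon, a Wi-Fi fingerprint) can be slotted into the existing concept without touching the logic above it.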

5. Use heuristics for common sense

To create even higher-level concepts, there are several tools we can use. One of them is heuristics. These are simple rules that mediate relationships between concepts and can be helpful in implementing common-sense features. For example: Record the user location at 3:00am every day. If after 10 days, more than 90 percent of the recorded locations are the same, suggest that place as their home. Another is the amount of time spent at a particular location: when someone is inside a geofence around a coffee shop, there’s a difference between them being static for 15 minutes (probably having coffee) and them traveling at 30 mph (probably passing by in their car).
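Both heuristics from the paragraph above are simple enough to sketch directly (the thresholds are the ones stated in the text; the grid-cell location labels are a stand-in for whatever location representation the app uses):

```python
from collections import Counter

def suggest_home(nightly_locations, min_days=10, threshold=0.9):
    """Heuristic: if more than 90% of the 3:00am location samples over at
    least 10 days agree, suggest that location as 'home'."""
    if len(nightly_locations) < min_days:
        return None
    place, count = Counter(nightly_locations).most_common(1)[0]
    return place if count / len(nightly_locations) > threshold else None

def classify_visit(minutes_inside, avg_speed_mph):
    """Heuristic: distinguish dwelling at a place from driving past it."""
    if avg_speed_mph >= 20:
        return "passing by"
    if minutes_inside >= 15:
        return "visiting"
    return "unknown"

# 12 of 13 nights recorded in the same location cell (~92% > 90%).
samples = ["grid_A1"] * 12 + ["grid_B7"]
likely_home = suggest_home(samples)
```

Neither rule is "intelligent" on its own, but stacked on the place abstraction they turn raw readings into statements like "the user is having coffee."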

6. Apply statistical analysis and regressions

Another mechanism we can use is statistical analysis and regression techniques. It is relatively easy to have the computer run statistical analysis over data at different conceptual levels and to detect patterns and significant deviations from them. One example is monitoring frequent routes, like an afternoon carpool that brings your kids home from school.

Let’s assume the trip home normally takes 20 minutes with a standard deviation of 5 minutes. If my son Ricky isn’t home 35 minutes after school ended, we are three standard deviations above the mean, and there is better than a 99 percent likelihood that something is going on. Perhaps someone forgot to pick him up, or the car broke down. In any case, it’s a good reason for an alert.
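The carpool alert reduces to a few lines of standard statistics (the trip history below is made-up data chosen to roughly match the 20 ± 5 minute example):

```python
import statistics

def trip_alert(minutes_elapsed, past_trip_minutes, z_threshold=3.0):
    """Alert when the elapsed trip time exceeds the historical mean by
    more than z_threshold standard deviations."""
    mean = statistics.mean(past_trip_minutes)
    stdev = statistics.stdev(past_trip_minutes)
    z_score = (minutes_elapsed - mean) / stdev
    return z_score >= z_threshold

# Hypothetical past trips: mean 20 minutes, stdev ~3 minutes.
history = [18, 20, 22, 15, 25, 20, 20]
```

A three-sigma threshold corresponds to roughly a 0.1 percent chance of a normal trip taking that long, so false alarms stay rare while genuine delays are caught.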

7. Are capable of learning

Yet another resource is learning algorithms, including Bayesian networks, neural networks, and reinforcement learning. These mostly work by taking incoming data, building internal models, and trying to predict what will happen next. They then take actual events and see whether the predictions were correct. The results are fed back into the system to help it learn, reinforcing factors when they succeed and weakening them when they don’t. Over time, we can watch the predictions’ success rate and start trusting them after they have hit a certain confidence level.
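The predict-observe-reinforce loop can be illustrated with a deliberately tiny model (a frequency-based predictor; the outcome labels and thresholds are invented for the example, and a real system would use one of the techniques named above):

```python
from collections import Counter

class LearningPredictor:
    """Tiny predict -> observe -> score loop: predicts the most frequent
    outcome seen so far and is 'trusted' only once its track record over
    enough trials clears a confidence threshold."""
    def __init__(self, trust_threshold=0.8, min_trials=5):
        self.counts = Counter()
        self.hits = 0
        self.trials = 0
        self.trust_threshold = trust_threshold
        self.min_trials = min_trials

    def predict(self):
        return self.counts.most_common(1)[0][0] if self.counts else None

    def observe(self, actual):
        prediction = self.predict()
        if prediction is not None:
            self.trials += 1
            if prediction == actual:
                self.hits += 1          # prediction confirmed by reality
        self.counts[actual] += 1        # feed the event back into the model

    @property
    def trusted(self):
        return (self.trials >= self.min_trials
                and self.hits / self.trials >= self.trust_threshold)

predictor = LearningPredictor()
# Nine ordinary school days and one late arrival.
for outcome in ["home_by_4pm"] * 9 + ["late"]:
    predictor.observe(outcome)
```

The important part is the last property: the app earns the right to act on its predictions only after its measured success rate crosses the threshold.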

8. Exhibit resiliency and adaptability

Two other important considerations are resiliency and adaptability. Resiliency refers to the fact that in the real world, things happen and the environment is variable: Data sources come and go. Maybe the phone is out of coverage or is running low on battery. Or the operating system killed the app. In these cases, the app needs to try to recover and degrade gracefully if it can’t. Adaptability refers to the need to differentiate temporary changes to the baseline environment (for example, the family went on vacation) from permanent ones (a new device type was introduced or new beacons were installed).
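Graceful degradation often takes the shape of a fallback chain (a sketch; the source names and readers below are hypothetical): try the best data source first, fall back to coarser ones, and return an honest "unknown" rather than crashing.

```python
def read_location(sources):
    """Resiliency sketch: try location sources in priority order and
    degrade gracefully (a coarser answer, or none) instead of failing."""
    for name, reader in sources:
        try:
            value = reader()
            if value is not None:
                return name, value
        except OSError:
            continue  # source unavailable: no coverage, radio off, ...
    return "none", None

def gps():
    raise OSError("no fix")  # e.g. indoors, or radio powered down to save battery

def cell_towers():
    # Coarse but available: tower triangulation with ~800 m accuracy.
    return {"lat": 30.53, "lon": -97.23, "accuracy_m": 800}

source_chain = [("gps", gps), ("cell", cell_towers)]
origin, location = read_location(source_chain)
```

Downstream logic can then inspect `accuracy_m` and decide whether the coarse answer is still good enough for the concept it feeds (a city-sized geofence, yes; a room-sized one, no).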

9. Build trust through conversational interactions

The final elements to consider when building contextual apps are patience and trust. We are all used to instant gratification, so we expect to buy a smart app and have it work immediately out of the box. I believe that we should rethink this. Bringing a smart device into our lives is more like getting a pet: We will need to house-train it.

Growing Together

I believe devices should be inherently conservative in their approach to taking action, checking and working with their users as they learn. This conversational interaction will make our devices more like partners (who sometimes fail), and less like obedient tools. At the same time, this will help users trust the decisions the app is making, as the two become familiar.

The downside of this approach, of course, is that the process takes significant time – which is why it is critical that new devices be able to learn from other, fully trained devices.

So, as we continue building our apps, let’s consider how to make them truly contextual apps – and in doing so, help make this new “smart” world friendlier, simpler, and a bit more human.