A Completely Connected World Depends on Loosely Coupled Architectures

This article from CNNMoney describes a completely connected world where not just every device, but literally every thing you own “will want to be your friend on the Facebook of things.”

At the Mobile World Congress in Barcelona, companies like IBM (IBM, Fortune 500), Qualcomm (QCOM), AT&T (T, Fortune 500) and Ericsson showed off their vision of a not-too-distant future in which every item in your life, from your refrigerator to your fridge magnets, will soon connect to the Internet or communicate with other Internet-connected gizmos.

Here’s how it would work: Electric devices like washing machines, thermostats and televisions will be manufactured with chips that allow them to wirelessly connect to the Internet. Non-electric items will either come with low-energy, Bluetooth-like transmitters that can send information to an Internet-connected hub or radio frequency identification (RFID) tags that can be sensed by Internet-connected near-field communication (NFC) readers.

From MWC 2011: Life in 2020 will be completely connected – Feb. 17, 2011
Referenced Fri Feb 18 2011 09:39:28 GMT-0700 (MST)

This is how these articles always go: “everything will have a network connection,” and then they stop. News flash: giving something a network connection isn’t sufficient to make this network of things useful. I’ll admit the “Facebook of things” comment points to a strategy: IBM, or Qualcomm, or AT&T, or someone else would love to build a big site that all our things connect to. Imagine being at the center of that. While it might be some IBM product manager’s idea of heaven, it sounds like dystopian dyspepsia to me.

This reminds me of a May 2001 Scientific American article on the Semantic Web in which Tim Berners-Lee, James Hendler, and Ora Lassila give the following scenario:

“The entertainment system was belting out the Beatles’ ‘We Can Work It Out’ when the phone rang. When Pete answered, his phone turned the sound down by sending a message to all the other local devices that had a volume control. His sister, Lucy, was on the line from the doctor’s office: …”

Sound familiar? How does the phone know which devices have volume controls? How does the phone know you want the volume turned down? Why would you program your phone to turn down the volume on your stereo? Isn’t the stereo the more natural place to do that? While I love the vision, the implementation and user experience are a nightmare.

The problem with the idea of a big Facebook of Things kind of site is the tight coupling that it implies. I have to take charge of my devices. I have to “friend” them. And remember, these are devices, so I’m going to be doing the work of managing them. I’m going to have to tell my stereo about my phone. I’m going to have to make sure I buy a stereo system that understands the “mute the sound” command that my phone sends. I’m going to have to tell my phone that it should send “mute the sound” commands to the stereo, “pause the movie” commands to my DVR, and “turn up the lights” commands to my home lighting system. No thanks.
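To make the tight coupling concrete, here’s a minimal sketch in Python of what the phone has to carry around in this model. The device classes and command strings are hypothetical, but the shape of the problem is real: the phone must be told about every device and the exact command each one understands.

    # Tightly coupled: the phone must know every device and the exact
    # command each one understands. All names here are hypothetical.

    class Stereo:
        def send(self, command):
            if command == "mute the sound":
                print("stereo: muting")

    class DVR:
        def send(self, command):
            if command == "pause the movie":
                print("DVR: pausing")

    class Phone:
        def __init__(self):
            self.on_call = []          # (device, command) pairs I manage by hand

        def pair(self, device, command):
            self.on_call.append((device, command))

        def incoming_call(self):
            for device, command in self.on_call:
                device.send(command)   # breaks if the command doesn't match

    phone = Phone()
    phone.pair(Stereo(), "mute the sound")   # I must know the stereo's command
    phone.pair(DVR(), "pause the movie")     # and the DVR's, and so on
    phone.incoming_call()

Every new device means another pairing step and another command vocabulary the phone has to get exactly right.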

The reason these visions fall short and end up sounding like nightmares instead of Disneyland is that we have a tough time breaking out of the request-response pattern of distributed devices that we’re all too familiar and comfortable with. This pattern leads us to the conclusion that the only way to make this work is to create directories of things, the commands they understand, etc. But the only way these visions will come to pass is with a new model that supports more loosely coupled modes of interaction between the thousands of things I’m likely to have connected.

Consider the preceding scenario from Sir Tim, modified slightly.

“The entertainment system was belting out the Beatles’ ‘We Can Work It Out’ when the phone rang. When Pete answered, his phone broadcast a message to all local devices indicating that it had received a call. His stereo responded by turning down the volume. His DVR responded by pausing the program he was watching. His sister, Lucy, …”

In the second scenario, the phone doesn’t have to know anything about other local devices. The phone need only indicate that it has received a call. Each device can interpret that message however it sees fit or ignore it altogether. This significantly reduces the complexity of the overall system because individual devices are loosely coupled. The phone software is much simpler and the infrastructure to pass messages between devices is much less complex than an infrastructure that supports semantic discovery of capabilities and commands.
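Here’s the same interaction as a minimal sketch, assuming a simple in-process event bus; the bus and the event name are illustrative, not a real protocol. Notice that the phone never names a stereo or a DVR — it only announces what happened.

    # Loosely coupled: the phone only announces what happened. Each
    # device decides for itself what, if anything, to do about it.

    from collections import defaultdict

    class EventBus:
        def __init__(self):
            self.handlers = defaultdict(list)

        def subscribe(self, event, handler):
            self.handlers[event].append(handler)

        def raise_event(self, event, **attrs):
            for handler in self.handlers[event]:
                handler(**attrs)       # every interested device hears it

    bus = EventBus()

    # Each device registers its own interpretation of the event.
    bus.subscribe("phone:call_received", lambda **a: print("stereo: volume down"))
    bus.subscribe("phone:call_received", lambda **a: print("DVR: pausing program"))

    # The phone knows nothing about stereos or DVRs; it just raises the event.
    bus.raise_event("phone:call_received", caller="Lucy")

Adding a new device to this system means registering one handler on the device itself; the phone’s code never changes.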

Events, the messages about things that have happened, are the key to this simple, loosely coupled scenario. If we can build an open, ubiquitous eventing protocol similar to the open, ubiquitous request protocol we have in HTTP, the vision of a network of things can come to pass in a way that doesn’t require constant tweaking of connections and doesn’t give any one silo (company) control of it. We’ve done this before with the Web. It’s time to do it again with the network of things. We don’t need a Facebook of Things. We need an Internet of Things.
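What might such a protocol look like on the wire? Here’s one hypothetical shape, riding on the HTTP we already have: an event as a small JSON document POSTed to any endpoint that has asked to hear it. The field names and URL below are assumptions for illustration, not a published spec.

    # One hypothetical wire format for an open eventing protocol: an
    # event as a small JSON document POSTed over HTTP to a subscriber.

    import json
    from urllib import request

    event = {
        "domain": "phone",             # what kind of thing raised the event
        "name": "call_received",       # what happened
        "attributes": {"caller": "Lucy"},
    }

    req = request.Request(
        "http://stereo.local/events",  # a subscriber's hypothetical endpoint
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # request.urlopen(req)  # would deliver the event to the stereo

The point isn’t this particular format; it’s that an event protocol can be as simple, open, and silo-free as HTTP itself.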

I call this vision “The Live Web.” The term was first coined by Doc Searls’ son Allen to describe a Web where timeliness and context matter as much as relevance. I’m in the middle (literally half done) of a book I’m calling The Live Web: Putting Cloud Computing to Work for People. The book describes how events and event-based systems can more easily create the Internet of Things than the traditional request-response style of building Web sites. I’m excited for it to be done. Look for a summer publishing date. In the meantime, if you’re interested, I’d be happy to get your feedback on what I’ve got so far.

Tags: events, mobile, context, kynetx, krl