
February 26 2014

Hurdles to the Internet of Things prove more social than technical

Last Saturday’s IoT Festival at MIT became a meeting-ground for people connecting the physical world. Embedded systems developers, security experts, data scientists, and artists all joined in this event. Although it was called a festival, it had a typical conference format with speakers, slides, and question periods. Hallway discussions were intense.

However you define the Internet of Things (O’Reilly has its own take on it, in our Solid blog site and conference), a lot stands in the way of its promise. And these hurdles are more social than technical.


Participants eye O’Reilly books (Credit: IoT Festival)

Some of the social discussions we have to have before we get the Internet of Things rolling include:

  • What effects will all this data collection, and the injection of intelligence into devices, have on privacy and personal autonomy? And how much of that privacy and autonomy are we willing to risk to reap the IoT’s potential for improving life, saving resources, and lowering costs?

  • Another set of trade-offs involves competing technical goals. For instance, power consumption can constrain features, and security measures can interfere with latency and speed. What is the priority for each situation in which we are deploying devices? How do we make choices based on our ultimate goals for these things?

  • How do we persuade manufacturers to build standard communication protocols into everyday objects? For those manufacturers using proprietary protocols and keeping the data generated by the objects on private servers, how can we offer them business models that make sharing data more appealing than hoarding it?

  • What data do we really want? Some industries are already drowning in data. (Others, such as health care, have huge amounts of potential data locked up in inaccessible silos.) We have to decide what we need from data and collect just what’s useful. Collection and analysis will probably be iterative: we’ll collect some data to see what we need, then go back and instrument things to collect different data.

  • How much can we trust the IoT? We all fly on planes that depend heavily on sensors, feedback, and automated controls. Will we trust similar controls to keep self-driving vehicles from colliding on the highway at 65 miles per hour? How much can we take humans out of the loop?

  • Similarly, because hubs in the IoT are collecting data and influencing outcomes at the edges, they require a trust relationship among the edges, and between the edges and the entity that collects and analyzes the data.

  • What do we need from government? Scads of the data central to the IoT (satellite GIS information is the example people always cite) come from government sources. How much do we ask the government for help, how much do we ask it to get out of the way of private innovators, and how much can private innovators feed back to government to aid its own efforts to make our lives better?

  • It’s certain that IoT will displace workers, but likely that it will also create more employment opportunities. How can we retrain workers and move them to the new opportunities without too much disruption to their lives? Will the machine serve the man, or the other way around?

We already have a lot of the technology we need, but it has to be pulled together. It must be commercially feasible at a mass-production level, and robust in real-life environments, with all their unpredictability. Many problems need to be solved at what Jim Gettys (famous for his work on the X Window System, OLPC, and bufferbloat) called “layer 8 of the Internet”: politics.


Large conference hall. I am in striped olive green shirt at right side of photo near rear (Credit: IoT Festival)

The conference’s audience was well-balanced in terms of age and included a good number of women, although still outnumbered by men. Attendance was about 175 and most of the conference was simulcast. Videos should be posted soon at the agenda site. Most impressive to me was the relatively small attrition that occurred during the long day.

Isis3D, a sponsor, set up a booth where their impressive 3D printer ratcheted out large and very finely detailed plastic artifacts. Texas Instruments’ University Program organized a number of demonstrations and presentations, including a large-scale giveaway of LaunchPads with their partner, Anaren.


Isis3D demos their 3D printer (Credit: IoT Festival)

Does the IoT hold water?

One application of the IoT we could all applaud is reducing the use of water, pesticide, and fertilizer in agriculture. This application also illustrates some of the challenges the IoT faces. AgSmarts installs sensors that can check temperature, water saturation, and other factors to report which fields need more or less of a given resource. The service could be extended in two ways:

  • Pull in more environmental data from available sources besides the AgSmarts devices. For instance, Dr. Shawana Johnson of Global Marketing Insight suggested that satellite photos (we’re back to that oft-cited GIS data) could show which fields are dry.

  • Close the loop by running all the data through software that adjusts the delivery of water or fertilizer with little or no human intervention. This is a big leap in sophistication, reviving my earlier question about trusting the IoT. Closing the loop would require real-time analysis of data from hundreds of locations. It’s worth noting that AgSmarts offers a page about Analytics, but all it says right now is “Coming Soon.”
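
To make the trust question concrete, here is a minimal sketch of what closing the loop might look like, with invented sensor and valve functions standing in for AgSmarts’ hardware and for analytics the company has yet to publish:

```python
# Toy irrigation loop: read soil moisture, open or close valves.
import random

MOISTURE_TARGET = 0.30  # desired volumetric water content
DEADBAND = 0.05         # hysteresis so valves don't chatter

def read_soil_moisture(field_id):
    # Stand-in for a real field-sensor query.
    return random.uniform(0.10, 0.50)

def set_valve(field_id, open_valve):
    # Stand-in for a real actuator command.
    print(f"{field_id}: valve {'open' if open_valve else 'closed'}")

def control_step(field_ids):
    # In production this decision would rest on real-time data from
    # hundreds of locations, which is exactly the trust problem raised above.
    for field in field_ids:
        moisture = read_soil_moisture(field)
        if moisture < MOISTURE_TARGET - DEADBAND:
            set_valve(field, True)   # too dry: irrigate
        elif moisture > MOISTURE_TARGET + DEADBAND:
            set_valve(field, False)  # wet enough: stop

control_step(["field-07", "field-12"])
```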

Lights, camera, action

A couple of other interesting aspects of the IoT are responsive luminaires (light fixtures) and unmanned aerial vehicles, which can track things on the ground through cameras and other sensors.

Consumer products to control indoor lights are already available. Lighting expert John Luciani reported that outdoor lights are usually coordinated through the Zigbee wireless protocol. Some applications for control over outdoor lighting include:

  • Allowing police in their cruisers to brighten the lights during an emergency.

  • Increasing power to LEDs gradually over time, to compensate for their natural dimming.

  • Identifying power supplies that are nearing the end of their life, because LEDs can last much longer than power supplies.

Lights don’t suffer from several of the major constraints that developers struggle with when using many IoT devices. Because luminaires need a lot of power to cast their light, they always have a power source that can easily handle their radio communications and other computing needs. Second, there is plenty of room for a radio antenna. Finally, recent ANSI standards have allowed a light fixture to contain more control wires.

Thomas Almholt of Texas Instruments reported that Texas electric utilities have been installing smart meters. Theoretically, these could help customers and utilities save money and reduce usage, because rates change every 15 minutes and most customers have no window on these changes. But the biggest benefit they discovered from the smart meters was quite unexpected: teams came out to houses when the meters started to act in bizarre ways, and prevented a number of fires from being started by the rogue meters.

A presentation on UAVs and other drones (not all are airborne) highlighted uses for journalism, law enforcement, and environmental tracking, along with the risks of putting too much power in the hands of those inside or outside of government who possess drones.

MIT’s SENSEable City Lab, as an example of a positive use, measures water quality in the nearby Charles River through drones, producing data at a much more fine-grained level, spatially and temporally, than what could be measured before.

Kade Crockford of the Massachusetts ACLU laid out (through a video simulation as scary as it was amusing) the ways UAVs could invade city life and engage in creepy tracking. Drones can go more places than other surveillance technologies (such as helicopters) and can detect our movements without us detecting the drones. The chilling effects of having these robots virtually breathe down our necks may outweigh the actual abuses perpetrated by governments or other actors. This is an illustration of my earlier point about trust between the center and the edges. (Crockford led a discussion on privacy in the conference, but I can’t report on it because I felt I needed to attend a different concurrent session.)

Protocols such as the 802.15 family and Zigbee permit communication among devices, but few of these networks have the processing power to analyze the data they produce and take action on their own. To store their data, base useful decisions on it, and allow humans to view it, we need to bring their data onto the larger Internet.

One tempting solution is to use the cellular network in which companies have invested so much. But there are several challenges to doing so.

  • Cell phone charges can build up quickly.

  • Because of the shortage of phone numbers, an embedded device must use a special mobile subscriber ISDN number (MSISDN), which comes with limitations.

  • To cross the distance to cell towers, radios on the devices need to use much more power than they need to communicate over local wireless options such as WiFi and Bluetooth.

A better option, if the environment permits, is to place a wireless router near a cluster of devices. The devices can use a low-power local area network to communicate with the router, which is connected to the Internet by cable or fiber.

When a hub needs to contact a device to send it a command, several options allow the device to avoid wasting power keeping its radio on. The device can wake up regularly to poll the router and get the command in the router’s response. Alternatively, the router can wake the device when necessary.
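
A sketch of the polling variant, to make the pattern concrete. The router endpoint and command format are invented, and a real device would more likely speak CoAP (discussed below) than HTTP, but the duty-cycle logic is the same.

```python
# Duty-cycled polling: radio on briefly, then back to sleep.
import time
import urllib.request

ROUTER_URL = "http://192.168.1.1/commands?device=sensor-42"  # assumed endpoint

def poll_once():
    # The radio only has to be powered for the duration of this request.
    with urllib.request.urlopen(ROUTER_URL, timeout=5) as resp:
        body = resp.read().decode().strip()
    return body or None

def handle(command):
    print("executing:", command)

def run(poll_interval_s=30):
    while True:
        command = poll_once()
        if command:
            handle(command)
        time.sleep(poll_interval_s)  # on real hardware: radio off, deep sleep
```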

Government and the IoT platform

One theme coming out of State of the Union addresses and other White House communications is the role of innovation in raising living standards. As I’ve mentioned, there was a bit of discussion at the conference about jobs and what IoT might do to them. Job creation is one of the four goals of the SmartAmerica Challenge run by the Presidential Innovation Fellows program. The other three are saving lives, creating new business opportunities, and improving the economy.

I was quite disappointed that climate change and other environmental protections were left off the SmartAmerica list. Although I applaud the four goals, I noticed that each pleases a powerful lobbying constituency (the health industry, chambers of commerce, businesses, and unions). The environment is the world’s most urgent challenge, and one that millions of people care about, but big money isn’t behind the cause.

Two people presenting the SmartAmerica Challenge suggested that most of the technologies enabling IoT have been developed already. What we need are open and easy-to-use architectures, and open standards that turn each layer of the system into a platform on which others are free to innovate. The call for open standards, which can also create open markets, also came from Bill Curtis of ARM.

Another leading developer in the computer field, Bob Frankston, advised us to “look for resources, not solutions.” I take this to mean we can create flexible hardware and software infrastructure that many innovators can use as platforms for small, nimble solutions. Frankston’s philosophy, as he put it to me later, is “removing the barriers to rapid evolution and experimentation, thus removing the need for elaborate scenarios.”


Bob Frankston speaking (Credit: IoT Festival)

Dr. Johnson claimed that government agencies can’t share a lot of their applications for their data because of security concerns, and predicted that each time a tier of government data is published, it will be greeted by an explosion of innovation from the private sector.

However, government can play another useful role. One speaker pointed out that it can guide the industry to use standards by requiring those standards in its own procurements.

Standards we should all learn

Both Almholt and Curtis laid out protocol stacks for the IoT, with a lot of overlap. Both agreed that many Internet protocols in widespread use were inappropriate for the IoT. For instance, connections on the IoT tend to be too noisy and unreliable for TCP to work well. Another complication is that an 802.15.4 packet has a maximum size of 127 bytes, and an IPv6 header (IPv6 being the addressing system of choice for mobile devices) takes up at least 40 of them. Clever reuse of data between packets can reduce this overhead.
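
To see how tight the byte budget is, run the arithmetic. The frame size and header sizes come from the talks; the link-layer overhead and best-case compression figures are my assumptions, since both vary with addressing and security options.

```python
# Back-of-the-envelope byte budget for one 802.15.4 frame.
FRAME_MAX = 127     # 802.15.4 maximum frame size, in bytes
LINK_OVERHEAD = 25  # assumed MAC header/footer (varies, roughly 9-25 bytes)
IPV6_HEADER = 40    # uncompressed IPv6 header
UDP_HEADER = 8

print(FRAME_MAX - LINK_OVERHEAD - IPV6_HEADER - UDP_HEADER)  # 54 bytes left

# 6LoWPAN's header compression is the "clever reuse" mentioned above:
# in the best case it squeezes the IPv6+UDP headers into a few bytes.
COMPRESSED_HEADERS = 6  # assumed best case
print(FRAME_MAX - LINK_OVERHEAD - COMPRESSED_HEADERS)        # 96 bytes left
```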

Curtis said that, because most Internet standards are too complex for the constrained devices in the IoT, these devices tend to run proprietary protocols, creating data silos. He also called for devices that use less than 10 milliwatts of power. Almholt said that devices will start harvesting electricity from light, vibration, and thermal sources, thus doing without batteries altogether and running for long periods without intervention.

Some of the protocols that show promise for the IoT include:

  • 6LoWPAN or successors for network-layer transmission.

  • The Constrained Application Protocol (CoAP) for RESTful message exchange. By running over UDP instead of TCP and using binary headers instead of ASCII, this protocol achieves most of the benefits of HTTP but is more appropriate for small devices. (A client sketch follows this list.)

  • The Time Synchronized Mesh Protocol (TSMP) for coordinating data transmissions in an environment where devices contend for wireless bandwidth, thus letting devices shut down their radios and save power when they are not transmitting.

  • RPL for defining and changing routes among devices. This protocol sacrifices speed for robustness.

  • eDTLS, a replacement for the TLS security standard that runs over UDP.
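
To make CoAP less abstract, here is a minimal client sketch using the open source aiocoap Python library; the device URI is invented. Apart from the transport, it reads much like ordinary RESTful client code.

```python
# Minimal CoAP GET with aiocoap (pip install aiocoap); URI is hypothetical.
import asyncio
from aiocoap import Context, Message, GET

async def main():
    ctx = await Context.create_client_context()
    request = Message(code=GET, uri="coap://sensor.local/temperature")
    response = await ctx.request(request).response
    print(response.code, response.payload.decode())

asyncio.run(main())
```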

Adoption of these standards would lead to a communication chasm between the protocol stack used within the LAN and the one used by the larger outside world. Therefore, edge devices would be devoted to translating between the two networks. One speaker suggested that many devices will not be given IP addresses at all, for security reasons.
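
A sketch of that translating role, under the same assumptions as the client sketch above: an edge gateway that reads a local CoAP device and re-exposes the value as HTTP/JSON for the wider Internet. A real gateway would reuse one CoAP context rather than creating one per request, and would need access controls.

```python
# Toy edge gateway: CoAP on the device side, HTTP/JSON on the Internet side.
import asyncio
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from aiocoap import Context, Message, GET

async def read_device(uri):
    ctx = await Context.create_client_context()
    response = await ctx.request(Message(code=GET, uri=uri)).response
    return response.payload.decode()

class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        value = asyncio.run(read_device("coap://sensor.local/temperature"))
        body = json.dumps({"temperature": value}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("0.0.0.0", 8080), GatewayHandler).serve_forever()
```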

Challenges that remain

Research with impacts on the IoT is flourishing. I’ll finish this article with a potpourri of problems and how people are addressing them:

  • Gettys reported on the outrageous state of device security. In Brazil, an enormous botnet has been found running just on the consumer hubs that people install in their homes and businesses for Internet access. He recommended that manufacturers adopt OpenWRT firmware to protect devices, and that technically knowledgeable individuals install it themselves where possible.

  • Although talking over the Internet is a big advance for devices, talking to each other over local networks at the radio layer (using, for instance, the 802.15.4 standards) is more efficient and enables more rapid responses to the events they monitor. Unfortunately, according to Almholt, interoperability is a decade away.

  • Mobility can cause us to lose context. A recent day-to-day example involves the 911 emergency call system, which is supposed to report the caller’s location to the responder. When people gave up their landlines and called 911 from cell phones, it became much harder to pinpoint where they were. The same thing happens in a factory or hospital when a staffer logs in from a mobile device.

Whether we want the IoT to turn on devices while we sit on the couch (not necessarily an indulgence for the lazy, but a way to let the elderly and disabled live more independently) or help solve the world’s most pressing problems in energy and water management, we need to take the societal and technological steps to solve these problems. Then let the fun begin.


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter — and to learn more about the Solid Conference coming to San Francisco in May, visit the Solid website.

Four short links: 26 February 2014

  1. Librarybox 2.0 — a fork of PirateBox for the TP-Link MR 3020, customized for educational, library, and other needs. Wifi hotspot with free and anonymous file sharing. v2 adds mesh networking and more. (via BoingBoing)
  2. Chicago PD’s Using Big Data to Justify Racial Profiling (Cory Doctorow) — The CPD refuses to share the names of the people on its secret watchlist, nor will it disclose the algorithm that put it there. [...] Asserting that you’re doing science but you can’t explain how you’re doing it is a nonsense on its face. Spot on.
  3. Cloudwash (BERG) — very good mockup of how and why your washing machine might be connected to the net and bound to your mobile phone. No face on it, though. They’re losing their touch.
  4. What’s Left of Nokia to Bet on Internet of Things (MIT Technology Review) — With the devices division gone, the Advanced Technologies business will cut licensing deals and perform advanced R&D with partners, with around 600 people around the globe, mainly in Silicon Valley and Finland. Hopefully will not devolve into being a patent troll. [...] “We are now talking about the idea of a programmable world. [...] If you believe in such a vision, as I do, then a lot of our technological assets will help in the future evolution of this world: global connectivity, our expertise in radio connectivity, materials, imaging and sensing technologies.”

February 22 2014

The Industrial IoT isn’t the same as the consumer IoT

Reading Kipp Bradford’s recent article, The Industrial Internet of Things: The opportunity no one’s talking about, got me thinking about commonly held misconceptions about what the Industrial Internet of Things (IIoT) is — as well as what it’s not.

Misconception 1: The IIoT is the same as the consumer Internet of Things (IoT), except it’s located on a factory floor somewhere.

This misconception is easy to understand, given that both the IIoT and the consumer IoT have that “Internet of Things” term in common. Yes, the IIoT includes devices located in industrial settings: maybe a factory floor, or perhaps as part of a high-speed train system, or inside a hotel or restaurant, or a municipal lighting system, or within the energy grid itself.

But the industrial IoT has far more stringent requirements than the consumer IoT, including the need for no-compromise control, rock-solid security, unfailing reliability even in harsh (extremely hot or cold, dusty, humid, noisy, inconvenient) environments, and the ability to operate with little or no human intervention. And unlike more recently designed consumer-level devices, many of the billion or so industrial devices already operating on existing networks were put in place to withstand the test of time, often measured in decades.

The differences between the IIoT and IoT are not just a matter of slight degree or semantics. If your Fitbit or Nest device fails, it might be inconvenient. But if a train braking system fails, it could be a matter of life and death.

Misconception 2: The IIoT is always about, as Bradford describes it, devices that “push and pull status and command information from the networked world.”

The consumer IoT is synonymous with functions that affect human-perceived comfort, security, and efficiency. In contrast, many industrial networks have basic operating roles and needs that do not require, and in fact are not helped by, human intervention. Think of operations that must happen too quickly — or too reliably, or too frequently, or from too harsh or remote an environment — to make it practical to “push and pull status and command information” to or from any kind of centralized anything, be it an Internet server or the cloud.

A major goal for the IIoT, then, must be to help autonomous communities of devices to operate more effectively, peer to peer, without relying on exchanging data beyond their communities. One challenge now is that various communications protocols have been established for specific types of industrial devices (e.g., for building automation, or lighting, or transportation) that are incompatible with one another. Internet Protocol (IP) could serve as a unifying communications pathway within industrial communities of devices, improving peer-to-peer communications.

Misconception 3: The problem is that industrial device owners aren’t interested in, or actively resist, connecting their smart devices together.

If IP were extended all the way to industrial devices, another advantage is that they could also participate, as appropriate, beyond peer-to-peer communities. Individually, industrial devices generate the “small data” that, in the aggregate, combines to become the “big data” used for IoT analytics and intelligent control. IIoT devices that are IP-enabled could retain their ability to operate without human intervention, yet still receive input or provide small-data output via the IoT.

But unlike most consumer IoT scenarios, which involve digital devices that already have IP support built in or that can be IP enabled easily, typical IIoT scenarios involve pre-IP legacy devices. And unfortunately, IP enablement isn’t free. Industrial device owners need a direct economic benefit to justify IP enabling their non-IP devices. Alternatively, they need a way to gain the benefits of IP without giving up their investments in their existing industrial devices — that is, without stranding these valuable industrial assets.

Rather than seeing industrial device owners as barriers to progress, we should be looking for ways to help industrial devices become as connected as appropriate — for example, for improved peer-to-peer operation and to contribute their important small data to the larger big-data picture of the IoT.

So, what is the real IIoT opportunity no one’s talking about?

The real opportunity of the IIoT is not to pretend that it’s the same as the IoT, but rather to provide industrial device networks with an affordable and easy migration path to IP.

This approach is not an “excuse for trying to patch up, or hide behind, outdated technology” that Bradford talks about. Rather, it’s a chance to offer multi-protocol, multimedia solutions that recognize and embrace the special considerations and, yes, constraints of the industrial world — which include myriad existing protocols, devices installed for their reliability and longevity, the need for both wired and wireless connections in some environments, and so on. This approach will build bridges to the IIoT, so that any given community of devices can achieve its full potential — the IzoT platform I’ve been working on at Echelon aims to accomplish this.


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter — and to learn more about the Solid Conference coming to San Francisco in May, visit the Solid website.

Slo-mo for the masses


Edgertronic version evolution, from initial prototype A through production-ready prototype C.

The connectivity of everything isn’t just about objects talking to each other via the Internet. It’s also about the accelerating democratization of formerly elite technology. Yes, it’s about putting powerful devices in touch with each other — but it’s also about putting powerful devices within the grasp of anyone who wants them. Case in point: the Edgertronic camera.

All kids cultivate particular interests growing up. For Michael Matter, Edgertronic’s co-inventor, it was high-speed flash photography. He got hooked when his father brought home a book of high-speed images taken by Harold Eugene “Doc” Edgerton, the legendary MIT professor of electrical engineering who pioneered stroboscopic photography.

“It was amazing stuff,” Matter recalls, “and I wanted to figure out how he did it. No, more than that I wanted to do it. I wanted to make those images. But then I looked into the cost of equipment, and it was thousands of dollars. I was 13 or 14, this was the 1970s, and my budget was pretty well restricted to my allowance.”

But Matter had a highly technical bent, and he was undaunted.

“I bought an off-the-shelf camera flash and put in a smaller value capacitor,” he recalls. “That allowed me to reduce the flash period from a millisecond to 20 or 30 microseconds. I had lower light output, of course, but I knew I could work with that. I got those short bursts I needed.”

Matter’s dad was an avid golfer, and he asked his son to take some shots of him teeing off — or rather, shots of the face of his trusty driver at the precise moment of ball impact. Matter took a spring from an old tape measure and jury-rigged a trigger for the flash.

“I’d put it ever-so-slightly in front of the ball,” he says, “and it worked pretty well. It’d stand up to four or five hits. We got some good images of the ball distorting, morphing into a kind of egg shape, on impact.”

Matter later matriculated at MIT, and he sent in images of those golf balls to Edgerton, along with a fan letter. He ended up taking a class from Edgerton, and his admiration for the idiosyncratic engineer only grew.

“He was the quintessential maker,” Matter says of his mentor. “Almost everything in his lab was slightly radioactive from all his field work photographing atom bomb and hydrogen bomb tests. He’d look at a problem and come up with a solution. Sometimes his solutions were crude and ugly, but they worked. That was part of his philosophy — better to have something crude and ugly that works than something elegant and expensive that doesn’t work as well.”

Edgerton had a camera capable of taking sub-microsecond exposures, but it was extremely bulky, and he let Matter tinker with it.

“It had huge vacuum tubes, and it had a notorious reputation around the lab for unreliability,” Matter said. “I took it apart and was able to make some improvements on it. It worked better.”

And did Edgerton praise his protégé? Not at all.

“That wasn’t his style,” Matter said, “and I admired him for it. If you were in his lab, a certain level of competence was assumed.  That was better than praise.”

Matter spent the next 30 years designing electronic devices, including servers, computers, and consumer products. And he was good at what he did; he and business partner Juan Pineda were principal designers for the Apple Powerbook 500 series. But he never lost interest in high-speed photography, and he remained frustrated by the stratospherically high price of the equipment. Top-end high-speed cameras can cost $100,000 to $500,000 — even “cheap” models register in the mid five figures. But his conversations with Pineda inspired him to think about the technology in a highly disruptive way.

“Early in my work with Juan, he made an observation to me,” Matter recalls. “We were working on a mid-performance, low-price project at Apollo Computers. Another group we were talking with was focused on high-performance, high-end systems. Juan said, ‘You know, anybody can do a no-holds-barred design when money is no object. Sooner or later, you’ll get what you want. But the challenge is when you’re limited by expense. That’s when skill comes in.’”

That idea — relatively high performance at a low price point — stuck with Matter, and fused with his predilections for deep tech tinkering. Ultimately, he was driven back to his childhood hobby. With Pineda, he spent two years skunk-working on high-speed videography. The Edgertronic was the result of all their brainstorming. At about $5,500, it is well within the means of anyone seriously interested in photography; professional-grade digital SLR cameras, after all, can sell for $10,000. And the fact that the Edgertronic runs Linux and has an Ethernet port makes it an effective liaison between the physical and virtual worlds.

Like all digital video cameras, the Edgertronic operates in a pretty straightforward fashion: light travels through a lens to a sensor, and from there is transmitted to a processor where images are translated into digital information, then stored in a memory component.

That said, the Edgertronic isn’t just another video camera. At the relatively low pixel resolution of 192×96, it can operate in excess of 17,000 frames per second (fps). At the maximum resolution of 1280×1024, the camera processes 500 fps — still impressive, given that standard cinema runs at 24 fps. In an evaluation of the Edgertronic on tested.com, TV’s Mythbusters website, Norman Chan opined that the Edgertronic’s “sweet spot” was 640×360 resolution at about 2,000 fps. In his review, Chan posted a clip taken by an Edgertronic of a hammer smashing a light bulb at 5,000 fps. The video is crisp, and the glass shards fly about in a slow, stately, almost balletic manner.
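
Those resolution and frame-rate pairings imply a pixel-throughput budget, which a quick calculation makes visible (the figures are the ones quoted above, so treat the comparison as rough):

```python
# Pixel throughput implied by the quoted Edgertronic settings.
settings = {
    "192x96 @ 17,000 fps": 192 * 96 * 17_000,
    "640x360 @ 2,000 fps": 640 * 360 * 2_000,
    "1280x1024 @ 500 fps": 1280 * 1024 * 500,
}
for name, pixels_per_second in settings.items():
    print(f"{name}: {pixels_per_second / 1e6:.0f} megapixels/second")
# 192x96 @ 17,000 fps: 313 megapixels/second
# 640x360 @ 2,000 fps: 461 megapixels/second
# 1280x1024 @ 500 fps: 655 megapixels/second
```

Interestingly, the sensor moves the most pixels at full resolution, which hints that per-frame readout overhead, not raw bandwidth, is what caps the extreme frame rates.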

In other words, the Edgertronic delivers on its promise: good-quality high-speed video on a working person’s budget. How did they do it? Matter isn’t evasive, exactly, but he is loath to discuss specifics. He emphasized that the Edgertronic did not involve the “bleeding edge” of engineering — the ne plus ultra of videographic technology. Rather, he said, he and Pineda used existing components in new ways.

“We’ve done it before,” he observes. “For example, we were once working on a graphics processing project, and we needed a chip that could do several million multiplies per second. Well, there wasn’t a chip like that out there made specifically for graphics processing. But Texas Instruments did have a $40 speech recognition chip that could do five million multiplies a second. Some people were using it for music synthesizers or mixing studios. Texas Instruments never envisioned it for workstations or graphics processing. But we did. And it worked.”

In his review, Chan reveals that the Edgertronic uses an off-the-shelf 18x14mm CMOS sensor. While not dirt cheap — it accounts for about half the price of the camera — it costs far less than any custom-designed sensor.

In any event, cheap sensors alone do not a functional $5,000 high-speed video camera make. As Matter intimates, the proprietary leavening in this particular technological loaf is engineering. But what about applications? Sure, you can record bullet flights in your home basement shooting range. But not many of us combine doomsday prep philosophy with an interest in high-speed photography.

“Our camera isn’t going to be used for slo-mo replays in the World Series or the Super Bowl,” says Matter. “That’s not our intent. Obviously, it has application in the arts and filmmaking, and will appeal to amateurs and hobbyists for their personal projects. But we think industry and research will account for most of our orders.”

One example: production lines. Say somewhere in the bowels of your line, a widget isn’t being stuck to a doo-hickey in the most felicitous way possible. Some high-speed footage of the assembly process could tell you what’s wrong. But standard high-speed cameras are rather bulky, and most are not amenable to sticking into tight places. Plus, once again, they’re extremely pricey. At several hundred thousand dollars a pop, it could well make more sense to eschew the camera and troubleshoot the line by tearing it apart. Enter the Edgertronic.

“Unlike other high-speed cameras, ours are compact, no larger than an SLR camera,” says Matter, “so you can put them almost anywhere. Also, price won’t be an issue for even small companies. The same with research facilities. Say a small lab is fine-tuning a new industrial process. A high-speed camera could really help with that, but you couldn’t justify a $500,000 outlay. We think the Edgertronic will change that dynamic.”

Matter and Pineda thus promote their camera as a basic research and investigative tool. Edgertronics already have been purchased by researchers working on fluid dynamics, particle systems, and animal locomotion; by high schools and colleges for various subjects; and by labs conducting product torture testing.

“We think of it as a microscope for time,” says Matter. “We just want to help people see how things really work. When you slow things down, you see they often don’t function anything like you supposed. Technically-minded people get that. Our first batch of 60 cameras sold out quickly — one guy is on his third camera. We’re doubling up for our next production run. We’ve identified a need, and now we’re filling it.”


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter — and to learn more about the Solid Conference coming to San Francisco in May, visit the Solid website.

February 21 2014

Architecting the connected world

In the first two posts in this series, I examined the connection between information architecture and user interface design, and then looked closely at the opportunities and constraints inherent in information architecture as we’ve learned to practice it on the web. Here, I outline strategies and tactics that may help us think through these constraints and that may in turn help us lay the groundwork for an information architecture practice tailored and responsive to our increasingly connected physical environments.

The challenge at hand

NYU Media, Culture, and Communication professor Alexander Galloway examines the cultural and political impact of the Internet by tracing its history back to its postwar origins. Built on a distributed information model, “the Internet can survive [nuclear] attacks not because it is stronger than the opposition, but precisely because it is weaker. The Internet has a different diagram than a nuclear attack does; it is in a different shape.” For Galloway, the global distributed network trumped the most powerful weapon built to date not by being more powerful, but by usurping its assumptions about where power lies.

This “differently shaped” foundation is what drives many of the design challenges I’ve raised concerning the Internet of Things, the Industrial Internet and our otherwise connected physical environments. It is also what creates such depth of possibility. In order to accommodate this shape, we will need to take a wider look at the emergent shape of the world, and the way we interact with it. This means reaching out into other disciplines and branches of knowledge (e.g. in the humanities and physical sciences) for inspiration and to seek out new ways to create and communicate meaning.

The discipline I’ve leaned on most heavily in these last few posts is linguistics. From these explorations, a few tactics for information architecture for connected environments emerge: leveraging multiple modalities, practicing soft assembly, and embracing ambiguity.

Leverage multiple modalities

In my previous post, I examined the nature of writing as it relates to information architecture for the web. As a mode of communication – and of meaning making – writing is incredibly powerful, but that power does come with limitations. Language’s linearity is one of these limitations; its symbolic modality is another.

In linguistics, a sign’s “modality” refers to the nature of the relationship between a signifier and a signified, between the name of a thing and the thing itself. In Ferdinand de Saussure’s classic example (from the Course in General Linguistics), the thing “tree” is associated with the word “tree” only by arbitrary convention.

This symbolic modality, though by far the most common modality in language, isn’t the only way we make sense of the world. The philosopher and semiotician Charles Sanders Peirce identified two additional modes that offer alternatives to the arbitrary sign of spoken language: the iconic and the indexical modes.

Indexical modes link signifier and signified by concept association. Smoke signifies fire, thunder signifies a storm, footprints signify a past presence. The signifier “points” to the signified (yes, the reference is related to our index finger). Iconic modes, on the other hand, link signifier and signified by resemblance or imitation of salient features. A circle with two dots and lines in the right places signifies the human face to humans – our perceptual apparatus is tuned to recognize this pattern as a face (a pre-programmed disposition which easily becomes a source of amusement when comparing the power outlets of various regions).

Before you write off this scrutiny of modes as esoteric navel-gazing – or juvenile visual punning – let’s take an example many of us come into contact with daily: the Apple MacBook trackpad. Prior to the Lion release of OS X, moving your finger “down” the trackpad (i.e., from the space bar toward the edge of the computer body) caused a page on screen to scroll down – which is to say, the elements you perceived on the screen moved up (so you could see what had been concealed “below” the bottom edge of the screen).

The root of this spatial metaphor is enlightening: if you think back a few years to the scroll wheel mouse (or perhaps you’re still using one?), you can see where the metaphor originates. When you scroll the wheel toward you, it is as if the little wheel is in contact with a sheet of paper sitting underneath the mouse. You move the wheel toward you, the wheel moves the paper up. It is an easy mechanical model that we can understand based on our embodied relationship to physical things.

This model was translated as-is to the first trackpads — until another kinetic model replaced it: the touchscreen. On a touchscreen, you’re no longer mediating movement with a little wheel; you’re touching it and moving it around directly. Once this model became sufficiently widespread, it became readily accessible to users of other devices – such as the trackpad – and, eventually, became the default trackpad setting for Apple devices.


Left: Danish electrical plugs, on Wikimedia Commons. Right: US electrical plugs, photo by Andy Fitzgerald.

Here we can see different modes at play. The trackpad isn’t strictly symbolic, nor is it iconic. Its relationship to the action it accomplishes is inferred by our embodied understanding of the physical world. This is signification in the indexical mode.

This thought exercise isn’t purely academic: by basing these kinds of interactions on mechanical models that we understand in a physical and embodied way (we know in our bodies what it is like to touch and move a thing with our fingers), we ground them in an experience that is more deeply rooted in our understanding of the physical world. In the case of iconic signification, for example in recognizing faces in power outlets (and even dispositions – compare, for instance, the plug shapes of North America and Denmark), our meaning making is even more deeply rooted in our innate perceptual apparatus.

Practice soft assembly

Attention to modalities is important because the more deeply a pattern or behavior is rooted in embodied experience, the more durable it is – and the more reliably it can be leveraged in design. Stewart Brand’s concept of “pace layers” helps to explain why this is the case. Brand argues in How Buildings Learn that the structures that make up our experienced environment don’t all change at the same rate. Nature is very slow to change; culture changes more quickly. Commerce changes more quickly than the layers below it, and technology changes faster than all of these. Fashion, relative to these other layers, changes most quickly of all.


Illustration: Pace layers, by Preston Grubbs, illustrator at Deloitte Digital.

When we discuss meaning making, our pace layer model doesn’t describe rate of change; it describes depth of embodiment. Remembering the “ctrl-z to undo” function is a learned symbolic (hence arbitrary) association. Touching an icon on a touchscreen (or, indexically, on a trackpad) and sliding it up is an associative embodied action. If we get these actions right, we’ve tapped into a level of meaning making that is rooted in one of the slowest of the physical pace layers: the fundamental way in which we perceive our natural world.

This has obvious advantages for usability. Intuitive interfaces are instantly usable because they capitalize on the operational knowledge we already have. When an interface is completely new, in order to be intuitive it must “borrow” knowledge from another sphere of experience (what Donald Norman in The Psychology of Everyday Things refers to as “knowledge in the world”). The touchscreen interface popularized by the iPhone is a ready example of this. More recently, Nest Protect’s “wave to hush” interaction provides an example that builds on learned physical interactions (waving smoke away from a smoke detector to try to shut it up) in an entirely new but instantly comprehensible way.

An additional and often overlooked advantage of tapping into deep layers of meaning making is that by leveraging more embodied associations, we’re able to design for systems that fit together loosely and in a natural way. By “natural” here, I mean associations that don’t need to be overwritten by an arbitrary, symbolic association in order to signify; associations that are rooted in our experience of the world and in our innate perceptual abilities. Our models become more stable and, ironically, more fluid at once: remapping one’s use of the trackpad is as simple as swapping out one easily accessible mental model (the wheel metaphor) for another (the touchscreen metaphor).

This loose coupling allows for structural alternatives to rigid (and often brittle) complex top-down organizational approaches. In Beyond the Brain, University of Lethbridge Psychology professor Louise Barrett uses this concept of “soft assembly” to explain how in both animals and robots “a whole variety of local control factors effectively exploit specific local (often temporary) conditions, along with the intrinsic dynamics of an animal’s body, to come up with effective behavior ‘on the fly.’” Barrett describes how soft assembly accounts for complex behavior in simple organisms (her examples include ants and pre-microprocessor robots), then extends those examples to show how complex instances of human and animal perception can likewise be explained by taking into account the fundamental constitutive elements of perception.

For those of us tasked with designing the architectures and interaction models of networked physical spaces, searching hard for the most fundamental level at which an association is understood (in whatever mode it is most basically communicated), and then articulating that association in a way that exploits the intrinsic dynamics of its environment, allows us to build physical and information structures that don’t have to be held together by force, convention, or rote learning, but which fit together by the nature of their core structures.

Embrace ambiguity

Alexander Galloway claims that the Internet is a different shape than war. I have been making the argument that the Internet of Things is a different shape than textuality. The systems that comprise our increasingly connected physical environments are, for the moment, often broader than we can understand fluidly. Embracing ambiguity – embracing the possibility of not understanding exactly how the pieces fit together, but trusting them to come together because they are rooted in deep layers of perception and understanding – opens up the possibility of designing systems that surpass our expectations of them. “Soft assembling” those systems based on cognitively and perceptually deep patterns and insight ensures that our design decisions aren’t just guesses or luck. In a word, we set the stage for emergence.

Embracing ambiguity is hard. Especially if you’re doing work for someone else – and even more so if that someone else is not a fan of ambiguity (and this includes just about everyone on the client side of the design relationship). By all means, be specific whenever possible. But when you’re designing for a range of contexts, users, and uses – and especially when those design solutions are in an emerging space – specificity can artificially limit a design.

In practical terms, embracing ambiguity means searching for solutions beyond those we can easily identify with language. It means digging for the modes of meaning making that are buried in our physical interactions with the built and the natural environment. This allows us the possibility of designing for systems which fit together with themselves (even though we can’t see all the parts) and with us (even though there’s much about our own behavior we still don’t understand), potentially reshaping the way we perceive and act in our world as a result.

Architecting connected physical environments requires that we think differently about how people think and about how they understand the world we live in. Textuality is a tool – and a powerful tool – but it represents only one way of making sense of information environments. We can add additional approaches to designing for these environments by thinking beyond text and beyond language. We accomplish this by leveraging our embodied experience (and embodied understanding of the world) in our design process: a combination of considering modalities, practicing soft assembly, and embracing ambiguity will help us move beyond the limits of language and think through ways to architect physical environments that make this new context enriching and empowering.

As we’re quickly discovering, the nature of connected physical environments imbricates the digital and the analog in ways that were unimaginable a decade ago. We’re at a revolutionary information crossroads, one where our symbolic and physical worlds are coming together in an unprecedented way. Our temptation thus far has been to drive ahead with technology and to try to fit all the pieces together with the tried and true methods of literacy and engineering. Accepting that the shape of this new world is not the same as what we have known up until now does not mean we have to give up attempts to shape it to our common good. It does, however, mean that we will have to rethink our approach to “control” in light of the deep structures that form the basis of embodied human experience.


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter.

Oobleck security

I’ve been thinking (and writing) a lot lately about the intersection of hardware and software, and how standing at that crossroads does not fit neatly into our mental models of how to approach the world. Previously, there was hardware and there was software, and the two didn’t really mix. When I tried to describe my thinking to a colleague at work, the best description I could come up with was that the world is becoming “oobleck,” the mixture of cornstarch and water that isn’t quite a solid but isn’t quite a liquid, named after the Dr. Seuss book Bartholomew and the Oobleck. (If you want to know how to make it, check out this video.)

One of the reasons I liked the metaphor of oobleck for the melding of hardware and software is that it can rapidly change on you when you’re not looking. It pours like a liquid, but it can “snap” like a solid. If you place the material under stress, as might happen if you set it on a speaker cone, it changes state and acts much more like a weird solid.

This “phase change” effect may also occur as we connect highly specialized embedded computers (really, “sensors”) to the Internet. As Bruce Schneier recently observed, patching embedded systems is hard. As if on cue, a security software company published a report that thousands of TVs and refrigerators may have been compromised to send spam. Embedded systems are easy to overlook from a security perspective because at the dawn of the Internet, security was relatively easy: install a firewall to sit between your network and the Internet, and carefully monitor all connections in and out of the network. Security was spliced into the protocol stack at the network layer. One of my earliest projects in understanding the network layer was working with early versions of Linux to configure a firewall with the then-new technology of address translation. As a result, my early career took place in and around network firewalls.

Most firewalls at the time worked at the network and transport layers of the protocol stack, and would operate on IP addresses and ports. A single firewall could shield an entire network of unpatched machines from attack, and in some ways, made it safe to use Windows on Internet-connected machines. Firewalls worked well in their heyday of the 1990s. Technology innovation took place inside the firewall, so the strict perimeter controls they imposed did not interfere with the use of technology.

By the mid-2000s, though, services like Skype and Google apps contended with firewalls as an obstacle to be traversed. (2001 even saw the publication of RFC 3093, a firewall “enhancement” protocol to allow routed IP packets through the firewall. Read it, but note the date before you dive into the details.) Staff inside the firewall needed to access services outside the firewall, and these services now included critical applications like Salesforce.com. Applications became firewall aware and started working around them. Had applications not evolved, it is likely the firewall would have throttled much of the innovation of the last decade.

To bring this back to embedded systems: what is the security model for a world with loads of sensors? (After all, remember that Bartholomew almost drowned in the Oobleck, and we’d like to avoid that fate.) As Kipp Bradford recently pointed out, when every HVAC compressor is connected to the Internet, that’s a lot of devices. By definition, real-time automation models assume continuous connectivity to gather data, and more importantly, send commands down to the infrastructure.

Many of these systems have proven to have poor security; for a recent example, consider this report from last fall about vulnerabilities in electrical and water systems. How do we add security to a sensor network, especially a network that is built around me as a person? Do I need to wear a firewall device to keep my sensors secure? Is the firewall an app that runs on my phone? (Oops, maybe I don’t want it running on my phone, given the architecture of most phone software…) Operating just at the network transport layer probably isn’t enough. Real-time data is typically transmitted using UDP, so many of the unwritten rules we have for TCP-based networking won’t apply. Besides, the firewall probably needs to operate at the API level to guard against any action I don’t want taken. What kind of security needs to be put around the data set? A strong firewall and cryptography that protect my data in flight are no good if it’s easy to tap into the data store the sensors all report to.
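
What might “operating at the API level” look like? Here is one way to picture it: a policy check keyed to who is asking and what they want done, rather than to IP addresses and ports. The device names, issuers, and policy structure are all invented for illustration.

```python
# Toy API-level policy: filter commands by (issuer, device, action),
# not by IP address and port.
POLICY = {
    # (issuer, device): set of allowed actions
    ("thermostat-app", "hvac-compressor-1"): {"read_status", "set_setpoint"},
    ("vendor-cloud", "hvac-compressor-1"): {"read_status"},
}

def authorize(issuer, device, action):
    return action in POLICY.get((issuer, device), set())

# The vendor's cloud may read telemetry but not actuate anything:
print(authorize("vendor-cloud", "hvac-compressor-1", "read_status"))   # True
print(authorize("vendor-cloud", "hvac-compressor-1", "set_setpoint"))  # False
```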

Finally, there’s the matter of user interface. Simply saying that users will have to figure it out won’t cut it. Default passwords — or even passwords as a concept — are insufficient. Does authentication have to become biometric? If so, what does that mean for registering devices the first time? As is probably apparent, I don’t have many answers yet.

Considering security with these types of deployments is like trying to deal with oobleck. It’s almost concrete, but just when you think it is solid enough to grasp, it turns into a liquid and runs through your fingers. With apologies to Shakespeare, this brave new world that has such machines in it needs a new security model. We need to defend against the classic network-based attacks of the past as well as new data-driven attacks or subversions of a distributed command-and-control system.


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter.

February 19 2014

The home automation paradox

Editor’s note: This post originally appeared on Scott Jenson’s blog, Exploring the World Beyond Mobile. This lightly edited version is republished here with permission.

The level of hype around the “Internet of Things” (IoT) is getting a bit out of control. It may be the technology that crashes into Gartner’s trough of disillusionment faster than any other. But that doesn’t mean we can’t figure things out. Quite the contrary — as the trade press collectively loses its mind over the IoT, I’m spurred on further to try and figure this out. In my mind, the biggest barrier to making the IoT work is us: we are being naive, and our overly simplistic understanding of how we will control the IoT is likely to fail and generate a huge consumer backlash.

But let’s back up just a bit. The Internet of Things is a vast, sprawling concept. Most people refer to just the consumer side of things: smart devices for your home and office. This is more precisely referred to as “home automation,” but to most folks, that sounds just a bit boring. Nevertheless, when some writer trots out that tired old chestnut: “My alarm clock turns on my coffee machine!”, that is home automation.

But of course, it’s much more than just coffee machines. Door locks are turning on music, moisture sensors are turning on yard sprinklers, and motion sensors are turning on lights. The entire house will flower into responsive activities, making our lives easier, more secure and even more fun.

The luddites will always crawl out and claim this is creepy or scary in the same way that answering machines were dehumanizing. Which, of course, is entirely missing the point. Just because something is more mechanical doesn’t mean it won’t have an enormous and yet human impact. My answering machine, as robotic as it may be, allows me to get critical messages from my son’s school. That is definitely not creepy.

However, I am deeply concerned that these home automation scenarios are too simplistic. As a UX designer, I know how quixotic and downright goofy humans can be. The simple rule-based “if this, then that” style scenarios trotted out are doomed to fail. Well, maybe fail is too strong a word. They won’t fail as in a “face plant into burning lava” fail. In fact, I’ll admit that they might even work 90% of the time. To many people that may seem fine, but just try using a voice recognition system with a 10% failure rate. It’s the small mistakes that will drive you crazy.

I’m reminded of one of the key learnings of the artificial intelligence (AI) community. It was called Moravec’s Paradox:

It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.

Moravec’s paradox created two types of AI problems: HardEasy and EasyHard.

HardEasy problems were assumed to be very hard to accomplish, such as playing chess. The assumption was that you’d have to replicate human cunning and experience in order to play chess well. It turns out this was completely wrong, as a simple brute force approach was able to do quite well. This was a hard problem that turned out to be (relatively) easy.

The EasyHard problem is exactly the opposite: a problem that everyone expects to be simple but turns out to be quite hard indeed. The classic example here is language translation. The engineers at the time expected the hardest problem was just finding a big enough dictionary. All you had to do was look up the words and plop them down in perfect order. Obviously, that problem is something we’re still working on today. An EasyHard problem is one that seems simple but never….quite….works…..the….way….you…..want.

I claim that home automation is an EasyHard problem. The engineer in all of us assumes it is going to be simple: walk in a room, turn on the lights. What’s the big deal? Now, I’ll admit, this rule does indeed work most of the time, but here is a series of exceptions where it breaks down (a sketch of the naive rule, and of a patched-up version, follows these examples):

Problem: I walk into the room and my wife is sleeping; turning on the lights wakes her up.
Solution: More sensors — detect someone on the bed.

Problem: I walk into the room and my dog is sleeping on the bed; my room lights don’t turn on.
Solution: Better sensors — detect human vs pets.

Problem: I walk into the room, where my wife is watching TV on the bed. She wants me to hand her a book, but as the room is dark, I can’t see it.
Solution: Read my mind.

Don’t misunderstand my intentions here. I’m no Luddite! I do strongly feel that we are eventually going to get to home automation. My point is that home automation is an EasyHard problem, and we don’t treat it with the respect it deserves. Just because we can automate our homes doesn’t mean we’ll automate them correctly. The real work in home automation isn’t the IoT connectivity; it’s the control system that will make it do the right thing at the right time.

Let’s take a look at my three scenarios above and discuss how they will impact our eventual solutions to home automation.

More Sensors

Almost every scenario today is built on a very fault-intolerant structure. A single sensor controls the lights. A single door knob alerts the house that I’m coming in. This has the obvious error condition that if that sensor fails, the entire action breaks down. But the second, more likely, case is that the system infers the wrong conclusion. A single motion sensor in my room assumes that I am the only thing that matters; my sleeping wife is a comfort casualty. I can guarantee you that as smart homes roll out, saying “sorry dear, that shouldn’t have happened” is going to wear very thin.

The solution, of course, is to have more sensors, and software that can reason about how many people are in a room. This isn’t exactly hard, but it will take a lot more work, as you need to build up a model of the house and populate it with proxies — creating, in effect, a simulation of your home. This will surely come, but it will just take a little time for it to become robust and tolerant of our oh-so-human capability to act in unexpected ways.
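
To make the contrast concrete, here is a minimal sketch in Python of the two approaches. Everything in it (the Occupant model and the sensor fusion it implies) is invented for illustration, not a real home automation API:

    from dataclasses import dataclass

    @dataclass
    class Occupant:
        name: str
        asleep: bool

    class Lights:
        def on(self, level):
            print("lights -> %d%%" % level)

    def naive_rule(lights):
        # "If motion, then lights": right 90% of the time, maddening the rest.
        lights.on(level=100)

    def occupancy_aware_rule(occupants, lights):
        # Consult the house model (fused from bed, motion, and door sensors)
        # before acting, so a sleeping spouse is no longer a comfort casualty.
        if any(p.asleep for p in occupants):
            lights.on(level=10)   # low baseboard lighting only
        else:
            lights.on(level=100)

    naive_rule(Lights())  # fires at full brightness regardless of who is in the room
    # I walk in while my wife is asleep:
    occupancy_aware_rule([Occupant("me", False), Occupant("wife", True)], Lights())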

Better Sensors

This, too, should be soon in coming. There are already sensors that can tell the difference between humans and pets; they just aren’t widely used yet. This will feed into the software simulation of my house, knowing where people, pets and things are throughout the space. This is starting to sound a bit like an AI system, modeling my life and making decisions based on what it thinks is needed at the time. Again, not exactly impossible, but tricky stuff that will, over time, get better and better.

Read My Mind

But at some point we reach a limit. When do you turn on the lights so I can find the book and when do I just muddle through because I don’t want to bother my wife? This is where the software has to have the “humility” to stop and just ask. I discussed this a bit in my UX grid of IoT post: background swarms of smart devices will do as much of the “easy stuff” as they can but will eventually need me to signal intent so they can cascade a complex set of actions that fulfill my goal.

Take the book example again. I walk into the room; the AI detects my wife on the bed. It could even detect that the TV is on and thus know she is not sleeping. But because it can’t be sure that full brightness is reasonable, it just turns on the low baseboard lighting so I can navigate. So far so good — the automatic system is being helpful but conservative. When I walk up to my wife and she asks for the book, I just have to say “lights,” and the system turns on the lights, which could be a complex set of commands turning on five different lights at different intensities.
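
A sketch of that cascade, with invented device names and levels; the point is that one small signal of intent fans out into a coordinated set of commands:

    class Dimmer:
        def __init__(self, name):
            self.name = name

        def set(self, level):
            print("%s -> %d%%" % (self.name, level))

    # One named scene: five lights at different intensities.
    READING_SCENE = {"ceiling": 60, "bedside_left": 80, "bedside_right": 0,
                     "baseboard": 10, "closet": 0}

    def on_intent(utterance, dimmers):
        # The human supplies the intent; the system supplies the choreography.
        if utterance == "lights":
            for name, level in READING_SCENE.items():
                dimmers[name].set(level)

    dimmers = {name: Dimmer(name) for name in READING_SCENE}
    on_intent("lights", dimmers)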

At some point, a button, an utterance, a gesture will be needed because human interaction is too subtle to be fully encapsulated by an AI. Humans can’t always do it; what makes us think computers can?

Summary

My point here is to emphatically support the idea of home automation. However, the UX designer in me is far too painfully aware that humans are messy, illogical beasts, and simplistic if/then rules are going to create a backlash against this technology. It isn’t until we take the coordinated control of these IoT devices seriously that we’ll start building more nuanced and error-tolerant systems. They will certainly be simplistic at first and still fail, but at least we’ll be on the right path. We must create systems that expect us to be human, not punish us for when we are.

February 18 2014

I, Cyborg

There is an existential unease lying at the root of the Internet of Things — a sense that we may emerge not less than human, certainly, but other than human.

Kelsey Breseman, engineer at Technical Machine.

Well, not to worry. As Kelsey Breseman, engineer at Technical Machine, points out, we don’t need to fret about becoming cyborgs. We’re already cyborgs: biological matrices augmented by wirelessly connected silicon arrays of various configurations. The problem is that we’re pretty clunky as cyborgs go. We rely on screens and mobile devices to extend our powers beyond the biological. That leads to everything from atrophying social skills as face-to-face interactions decline to fatal encounters with garbage trucks as we wander, texting and oblivious, into traffic.

So, if we’re going to be cyborgs, argues Breseman, let’s be competent, sophisticated cyborgs. For one thing, it’s now in our ability to upgrade beyond the screen. For another, being better cyborgs may make us — paradoxically — more human.

“I’m really concerned about how we integrate human beings into the growing web of technology,” says Breseman, who will speak at O’Reilly’s upcoming Solid conference in San Francisco in May. “It’s easy to get caught up in the ‘cool new thing’ mentality, but you can end up with a situation where the point for the technology is the technology, not the human being using it. It becomes closed rather than inclusive — an ‘app developers developing apps for app developers to develop apps’ kind of thing.”

Those concerns have led Breseman and her colleagues at Technical Machine to the development of the Tessel: an open-source Arduino-style microcontroller that runs JavaScript and allows hardware project prototyping. And not, Breseman emphasizes, the mere prototyping of ‘cool new things’ — rather, the prototyping of things that will connect people to the emerging Internet of Things in ways that have nothing to do with screens or smart phones.

“I’m not talking about smart watches or smart clothing,” explains Breseman. “In a way, they’re already passé. The product line hasn’t caught up with the technology. Think about epidermal circuits — you apply them to your skin in the same way you apply a temporary tattoo. They’ve been around for a couple of years. Something like that has so many potential applications — take the Quantified Self movement, for example. Smart micro devices attached right to the skin would make everything now in use for Quantified Self seem antiquated, trivial.”

Breseman looks to a visionary of the past to extrapolate the future: “In the late 1980s, Mark Weiser coined the term ‘ubiquitous computing’ to describe a society where computers were so common, so omnipresent, that people would ultimately stop interfacing with them,” Breseman says. “In other words, computers would be everywhere, embedded in the environment. You wouldn’t rely on a specific device for information. The data would be available to you on an ongoing basis, through a variety of non-intrusive — even invisible — sources.”

Weiser described such an era as “… the age of calm technology, when technology recedes into the background of our lives…” That trope — calm technology — is extremely appealing, says Breseman.

“We could stop interacting with our devices, stop staring at screens, and start looking at each other, start talking to each other again,” she says. “I’d find that tremendously exciting.”

Breseman is concerned that the Internet of Things is seen only as a new and shiny buzz phrase. “We should be looking at it as a way to address our needs as human beings,” she says, “to connect people to the Internet more elegantly, not just as a source for more toys. Yes, we are now dependent on information technology. It has expanded our lives, and we don’t want to give it up. But we’re not applying it very well. We could do it so much better.”

Part of the problem has been the bifurcation of engineering into software and hardware camps, she says. Software engineers type into screens, and hardware engineers design physical things, and there have been few — if any — places that the twain have met. The two disciplines are poised to merge in the Internet of Things — but it won’t be an easy melding, Breseman allows. Each field carves different neural pathways, inculcates different values.

“Because of that, it has been really hard to figure out things that let people engage with the Internet in a physical sense,” Breseman says. “When we were designing Tessel, we discovered how hugely difficult it is to make an interactive Internet device.”

Still, Tessel and devices like it ultimately will become the machine tools of the Internet of Everything: the forges and lathes where the new infrastructure is built. That’s what Breseman hopes, anyway.

“What we would like,” she muses, “is for people to figure out their needs first and then order Tessels rather than the other way around. By that I mean you should first determine why and how connecting to the Internet physically would augment your life, make it better. Then get a Tessel to help you with your prototypes. We’ll see more and better products that way, and it keeps the emphasis where it belongs — on human beings, not the devices.”


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter.

February 12 2014

Searching for the software stack for the physical world

When I flip through a book on networking, one of the first things I look for is the protocol stack diagram. An elegant representation of the protocol stack can help you make sense of where to put things, separate out important mental concepts, and help explain how a technology is organized.

I’m no stranger to the idea of trying to diagram out a protocol space; I had a successful effort back when the second edition of my book on 802.11 was published. I’ve been a party to several awesome conversations recently about how to organize the broad space that’s referred to as the “Internet of Things.”

Diagram: the IoT stack (Things, Network Transport, Data, APIs, Applications).

Let’s start at the bottom, with the hardware layer, which is labeled Things. These are devices that aren’t typically thought of as computers. Sure, they wind up using a computing ecosystem, but they are not really general-purpose computers. These devices are embedded devices that interact with the physical world, such as sensors and motors. Primarily, they will either be the eyes and ears of the overall system, or the channel for actions on the physical world. They will be designed around low power consumption, and therefore will use a low throughput communication channel. If they communicate with a network, it will typically be through a radio interface, but the tyranny of limited power consumption means that the network interface will usually be limited in some way.

Things provide their information, or are instructed to act on the world, by connecting through a Network Transport layer. Networking allows reports from devices to be received and acted on. In this model, the network transport consists of a set of technologies that move data around, and is a combination of the OSI data link, network, and transport layers. Mapping onto technologies that we use today, it might be TCP/IP over Wi-Fi for packet transport, with data carried over HTTP in a REST style.
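
As a concrete illustration of that mapping, here is what a single report might look like from the device’s (or its gateway’s) side. The hub URL and payload shape are made up; the layering is the point: TCP/IP underneath, HTTP on top, REST conventions for the data:

    import requests  # third-party HTTP library: pip install requests

    HUB = "http://hub.local/api/v1"  # hypothetical aggregation service

    reading = {
        "device": "bedroom-temp-1",
        "type": "temperature",
        "value": 21.4,
        "unit": "C",
    }

    # One report flowing up the stack from a Thing.
    resp = requests.post(HUB + "/readings", json=reading, timeout=5)
    resp.raise_for_status()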

The Data layer aggregates many devices into an overall collective picture. In the IoT world, users are interacting with the physical world and asking questions like “what is the temperature in this room?” That question isn’t answered by just one device (or if it is, the temperature of the room is likely to fluctuate wildly). Each component of the hardware layer at the bottom of the stack contributes a piece of the overall contextual picture. A light bulb can be on or off, but to determine the desired state, you might mix in overall power consumption of the building, how many people are present, the locations of those people, total demand on the electrical grid, and possibly even the preferences of the people in an area. (I once toured an FAA regional air traffic control center, and the groups of controllers that serve each sub-region of airspace could customize the lighting, ranging from normal lighting to quite dim lighting.)

The value of a device at the base of this environment depends on how much unique context it can add to the overall picture, enabling the operation of software that works with physical-world concepts like room temperature or whether I am asleep. Looking back on it, the foundation for the data layer was being laid years ago, as is obvious from reading section three of Tim O’Reilly’s “What is Web 2.0?” essay from 2005. In this world, you are either contributing to or working with context, or you are not doing very interesting work.

Sharing context widely requires APIs. If the real-world import of a piece of data is only apparent when it is combined with other sources of data, an application needs to be able to mash up several data sources to create its own unique picture of the world. APIs enable programmers to build context that represents what is important to users and to build cognitively significant aggregations. “Room temperature” may depend on getting data from temperature, humidity, and sunlight sensors, perhaps in several locations in a room.

In addition to reporting up, APIs need to enable control to flow down the stack. If “room temperature” is a complex state that depends on data from several sensors, changing it may require acting on several parts of a climate control system: the heater, air conditioner, fan, and maybe even whether the blinds in a room are open or closed.
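
A toy sketch of both directions at once, with invented sensor names and thresholds: several readings are aggregated up into one cognitively meaningful value, and one abstract request fans out down the stack into several actuator commands:

    from statistics import mean

    # Hypothetical snapshot gathered through the data layer.
    temps = {"window": 23.8, "door": 21.1, "desk": 22.0}  # degrees C
    sunlight = 0.7                                        # 0..1, from a light sensor
    target = 21.0                                         # the user's "room temperature"

    # Aggregate up: one meaningful number from several devices.
    room_temp = mean(temps.values())

    # Control down: one request becomes several coordinated commands.
    commands = []
    if room_temp > target:
        commands.append(("air_conditioner", "on"))
        commands.append(("fan", "high"))
        if sunlight > 0.5:
            commands.append(("blinds", "closed"))  # cut solar gain first
    print(round(room_temp, 1), commands)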

Designing APIs is hard; in addition to getting the data operations and manipulation right, you need to design for performance and security, and to enable applications to evolve over time. A good place to start is O’Reilly’s API strategy guide.

Finally, we reach the top of the stack when we get to Applications. Early applications allowed you to control the physical world like a remote control. Nice, but by no means the end of the story. One of the smartest technology analysts I know often challenges me by saying, “It’s great that you can report anomalous behavior. If you know something is wrong, do something about it!” The “killer app” in this world is automation: being able to work without explicit instructions from the user, or to continuously fine-tune and optimize the real world based on what has flowed up the stack.

Unlike my previous effort in depicting the world of 802.11, this stack is very much a work in progress. Please leave your thoughts in the comments.

February 08 2014

Four short links: 7 February 2014

  1. 12 Predictions About the Future of Programming (Infoworld) — not a bad set of predictions, except for the inane “squeezing” view of open source.
  2. Conceal (Github) — Facebook Android tool for apps to encrypt data and large files stored in public locations, for example SD cards.
  3. Dreamliner Software — all three of the jet’s navigation computers failed at the same time. “The cockpit software system went blank,” IBN Live, an Indian television station, reported. The Internet of Rebooting Things.
  4. Contiki — open source connective OS for IoT.
    February 07 2014

    Why Solid, why now

    A few years ago at OSCON, one of the tutorials demonstrated how to click a virtual light switch in Second Life and have a real desk lamp light up in the room. Looking back, it was rather trivial, but it was striking at the time to see software people taking an interest in the “real world.” And what better metaphor for the collision of virtual and real than a connection between Second Life and the Portland Convention Center?

    In December 2012, our Radar team was meeting in Sebastopol and we were talking about trends in robotics, Maker DIY, Internet of Things, wearables, smart grid, industrial Internet, advanced manufacturing, frictionless supply chain, etc. We were trying to figure out where to put our focus among all of these trends when suddenly it was obvious (at least to Mike Loukides, who pointed it out): they are all more alike than different, and we could focus on all of them by looking at the relationships among them. The Solid program was conceived that day.

    We called it Solid because most of the people we work with are software people and we wanted to draw attention to the non-virtuality of all this stuff. This isn’t the stuff of some abstract “cyber” domain; these were all trends that lived at the nexus of software and very real physical hardware. A big part of this story is that hardware is getting less hard, and more like software in its malleability, but there remains a sheer physicality to this stuff that makes it different. Negroponte told us to forget atoms and focus on bits, and we did for 20 years. But now, the pendulum is swinging back and atoms matter again — except now they are atoms layered with bits, and that makes this different.

    Software and hardware have been essentially separate mediums wielded by the separate tribes of hackers/ coders/ programmers/ software engineers and capital-E Engineers. Solid is about the beginning of something different: the collision of software and hardware into a new combined medium that will be wielded by people (or teams) who understand both, bits and atoms uniting in the same intellectual space. What should we call it? Fluidware? That won’t be it, but I’d like eventually to come up with a term that conveys that a combination of software and less-hard hardware is something distinct.

    Naturally, this impacts more than just the stuff that we produce from this new medium. It influences organizational models; the things engineers, designers, and hackers need to know to do their jobs; and the business models you can deliver with bit-enabled collections of atoms.

    With Solid, we envision an annual program of engagement that will span various media as well as multiple in-person events. We’re going to do Solid:Local events, which we started in Cambridge, MA, on February 6, and we’re going to anchor the program with the Solid Conference in San Francisco this May, which is going to be very different from the other conferences we do.

    First off, the Solid Conference will be at Ft. Mason, a venue we chose for its beautiful views, and because it can host the kinds of industrial-scale things we’d like to demonstrate there. We still have a lot to do to bring this program together, but our guiding principle is to give our audience a visceral feel for this world of combined hardware and software. We’re going for something that feels less like a conference and more like a miniature World’s Fair exposition with talks.

    I believe that we are on the cusp of something as dramatic as the Industrial Revolution. In that time, a series of industrial expositions drew millions of people to London, Paris, Philadelphia and elsewhere to experience firsthand the industrialization of their world.

    Here in my home office outside of Philadelphia, I’m about 15 miles from the location of the 1876 Exposition that we’re using for inspiration. If you entered Machinery Hall during that long summer, with its 1,400-horsepower Corliss engine powering belt-driven machinery throughout the vast hall, you couldn’t help but leave with an appreciation for how the world was changing. We won’t be building a Machinery Hall, certainly not at that scale, but we are building a program with as much show as tell. We want to viscerally demonstrate the principles that we see at work here.

    We hope you can join us in San Francisco or at one of the many Solid:Locals we plan to put on. Share your stories about interesting intersections of hardware and software with us, too.

    February 05 2014

    Four short links: 5 February 2014

    1. sigma.js — JavaScript graph-drawing library (node-edge graphs, not charts).
    2. DARPA Open Catalog — all the open source published by DARPA. Sweet!
    3. Quantified Vehicle Meetup — Boston meetup around intelligent automotive tech including on-board diagnostics, protocols, APIs, analytics, telematics, apps, software and devices.
    4. AT&T See Future In Industrial Internet — partnering with GE, M2M-related customers increased by more than 38% last year. (via Jim Stogdill)

    February 04 2014

    The Industrial Internet of Things

    Photo: Kipp Bradford. This industrial burner from Weishaupt churns through 40 million BTUs per hour of fuel.

    A few days ago, a company called Echelon caused a stir when it released a new product called IzoT. You may never have heard of Echelon; for most of us, they are merely a part of the invisible glue that connects modern life. But more than 100 million products — from street lights to gas pumps to HVAC systems — use Echelon technology for connectivity. So, for many electrical engineers, Echelon’s products are a big deal. Thus, when Echelon began touting IzoT as the future of the Industrial Internet of Things (IIoT), it was bound to get some attention.

    Admittedly, the Internet of Things (IoT) is all the buzz right now. Echelon, like everyone else, is trying to capture some of that mindshare for their products. In this case, the product is a proprietary system of chips, protocols, and interfaces for enabling the IoT on industrial devices. But what struck me and my colleagues was how outdated this approach seems, and how far it misses the point of the emerging IoT.

    Although there are many different ways to describe the IoT, it is essentially a network of devices where all the devices (see the sketch after this list):

    1. Have local intelligence
    2. Have a shared API so they can speak with each other in a useful way, even if they speak multiple protocols
    3. Push and pull status and command information from the networked world
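
    A minimal sketch of what that shared surface might look like, with the wire protocol abstracted away. The class, method names, and JSON vocabulary here are illustrative, not any particular standard:

        import json

        class SmartValve:
            """A 'thing' with local intelligence and a uniform status/command surface."""

            def __init__(self):
                self.position = 0  # percent open

            def status(self):
                # Push state out in a shared, machine-readable vocabulary.
                return json.dumps({"device": "valve-7", "position": self.position})

            def command(self, msg):
                # Accept commands in the same vocabulary, whatever the underlying
                # transport (Modbus, ZigBee, HTTP...) happens to be.
                req = json.loads(msg)
                if req.get("set") == "position":
                    self.position = max(0, min(100, int(req["value"])))

        v = SmartValve()
        v.command('{"set": "position", "value": 40}')
        print(v.status())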

    In the industrial context, rolling out a better networking chip is just a minor improvement on an unchanged 1980s practice that requires complex installations, including running miles of wire inside walls. This IzoT would have been a breakthrough product back when I got my first Sony Walkman.

    This isn’t a problem confined to Echelon. It’s a problem shared by many industries. I just spent several days wandering the floor of the International Air Conditioning, Heating, and Refrigeration (AHR) trade show, a show about as far from CES as you can get: truck-sized boilers and cooling towers that look unchanged from their 19th-century origins dotted the floor. For most of the developed world, HVAC is where the lion’s share of our energy goes. (That alone is reason enough to care about it.) It’s also a treasure trove of data (think Nest), and a great place for all the obvious, sensible IoT applications like monitoring, control, and network intelligence. But after looking around at IoT products at AHR, it was clear that they came from Bizarro World.[1]

    Photo: Kipp Bradford. Copeland alone has sold more than 100 million compressors. That’s a lot of “things” to get on the Internet.

    I spoke with an engineer showing off one of the HVAC industry-leading low-power wireless networking technologies for data centers. He told me his company’s new wireless sensor network system runs at 2.6 GHz, not 2.4 GHz (though the data sheet doesn’t confirm or deny that). Looking at the products that won industry innovation awards, I was especially depressed. Take, for example, this variable-speed compressor technology. When you put variable-speed motors into an air-conditioning compressor, you get a 25–35% efficiency boost. That’s a lot of energy saved! And it looks just like variable-speed technology that has been around for decades — just not in the HVAC world.

    Now here’s where things get interesting: variable-speed control demands a smart processor controlling the compressor motor, plus the intelligent sensors to tell the motor when to vary its speed. With all that intelligence, I should be able to connect it to my network. So, when will my air conditioner talk to my Nest so I can optimize my energy consumption? Never. Or at least not until industry standards and government regulations overlap just right and force them to talk to each other, just as recent regulatory conditions forced 40-year-old variable-motor control technology into 100-year-old compressor technology.

    Of course, like any inquisitive engineer, I had to ask the manufacturer of my high-efficiency boiler if it was possible to hook that up to my Nest to analyze performance. After he said, “Sure, try hooking up the thermostat wire,” and made fun of me for a few minutes, the engineer said, “Yeah, if you can build your own modbus-to-wifi bridge, you can access our modbus interface and get your boiler online. But do you really want some Russian hacker controlling your house heat?” It would be so easy for my Nest thermostat to have a useful conversation with my boiler beyond “on/off,” but it doesn’t.
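
    For what it’s worth, the bridge half of that dare is not exotic. Here is a sketch of the read side, assuming a pymodbus-style client (import paths and keyword names vary across versions) and an entirely made-up register map:

        # pip install pymodbus; the exact import path differs across versions.
        from pymodbus.client import ModbusTcpClient

        client = ModbusTcpClient("192.168.1.50")  # the DIY Modbus-to-Wi-Fi bridge
        client.connect()

        # Treating register 0 as supply temperature is an assumption; a real
        # boiler's register map comes from the manufacturer's documentation.
        result = client.read_holding_registers(0, 2)
        if not result.isError():
            print("boiler registers:", result.registers)
        client.close()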

    Don’t get me wrong. I really am sympathetic to engineering within the constraints of industrial (versus consumer) environments, having spent a period of my life designing toys and another period making medical and industrial products. I’ve had to use my engineering skills to make things as dissimilar as Elmo and dental implants. But constraints are no excuse for trying to patch up, or hide behind, outdated technology.

    The electrical engineers designing IoT devices for consumers have created exciting and transformative technologies like Z-Wave, ZigBee, BLE, Wi-Fi, and more, giving consumer devices robust and nearly transparent connectivity with increasingly easy installation. Engineers in the industrial world, meanwhile, seem to be stuck making small technological tweaks that might enhance the safety, reliability, robustness, and NSA-proof security of device networks. This represents an unfortunate deepening of the bifurcation of the Internet of Things. It also represents a huge opportunity for those who refuse to submit to the notion that the IoT is either consumer or industrial, but never both.

    For HVAC, innovation in 2014 means solving problems from 1983 with 1984 technology because the government told them so. The general attitude can be summed up as: “We have all the market share and high barriers to entry, so we don’t care about your problems.” Left alone, this industry (like so many others) will keep building walls that prevent us from having real control over our devices and our data. That directly contradicts a key goal of the IoT: connecting our smart devices together.

    And that’s where the opportunity lies. There is significant value in HVAC and similar industries for any company that can break down the cultural and technical barriers between the Internet of Things and the Industrial Internet of Things. Companies that recognize the new business models created by well-designed, smart, interconnected devices will be handsomely rewarded. When my Nest can talk to my boiler, air conditioner, and Hue lights, all while analyzing performance versus weather data, third parties could sell comfort contracts, efficiency contracts, or grid stabilization contracts.

    As a matter of fact, we are already seeing a $16 billion market for grid stabilization services opened up by smart, connected heating devices. It’s easy to envision a future where the electric company pays me during peak load times because my variable-speed air conditioner slows down, my Hue lights dim imperceptibly, and my clothes dryer pauses, all reducing my grid load — or my heater charges up a bank of thermal storage bricks in advance of a cold front before I return home from work. Perhaps I can finally quantify the energy savings of the efficiency improvements that I make.

    My Sony Walkman already went the way of the dodo, but there’s still a chance to blow open closed industries like HVAC and bridge the IoT and the IIoT before my Beats go out of style.


    [1] I’m not sorry for the Super Friends reference.

    January 31 2014

    Drone on

    Jeff Bezos’ recent demonstration of a drone aircraft simulating delivery of an Amazon parcel was more stunt than technological breakthrough. We aren’t there yet. Yes, such things may well come to pass, but there are obstacles aplenty to overcome — not so much engineering snags, but cultural and regulatory issues.

    The first widely publicized application of modern drone aircraft — dropping Hellfire missiles on suspected terrorists — greatly skewed perceptions of the technology. On the one hand, the sophistication of such unmanned systems generated admiration from technophiles (and also average citizens who saw them as valuable adjuncts in the war against terrorism). On the other, the significant civilian casualties that were collateral to some strikes have engendered outrage. Further, the fear that drones could be used for domestic spying has ratcheted up our paranoia, particularly in the wake of Edward Snowden’s revelations of National Security Agency overreach.

    It’s as though drones have been painted with a giant red “D,” à la The Scarlet Letter. Manufacturers of the aircraft, not surprisingly, are desperate to rehabilitate the technology’s image, and properly so. Drones have too many exciting — even essential — civilian applications to restrict their use to war.

    That’s why drone proponents don’t want us to call them “drones.” They’d prefer we use the periphrastic term, “Unmanned Aerial Vehicle,” or at least the acronym — UAV. Or depending on whom you talk to, Unmanned Vehicle System (UVS).

    Will either catch on? Probably not. Think back: do we call it Star Wars or the Strategic Defense Initiative? Obamacare, or the Affordable Care Act? Popular culture has a way of reducing bombastic rubrics to pithy, evocative phrases. Chances are pretty good “drones” it is, and “drones” it’ll stay.

    And that’s not so bad, says Brendan Schulman, an attorney with the New York office of Kramer Levin Naftalis & Frankel.

    “People need to use terminology they understand,” says Schulman, who serves as the firm’s e-discovery counsel and specializes in commercial unmanned aviation law. “And they understand ‘drones.’ I know that causes some discomfort to manufacturers and people who use drones for commercial purposes, but I have no trouble with the term. I think we can live with it.”

    Michael Toscano, the president and CEO of the Association of Unmanned Vehicle Systems International, feels differently: not surprisingly, he favors the acronym “UVS” over “drone.”

    “The term ‘drones’ evokes a hostile, military device,” says Toscano. “That isn’t what the broad applications of this technology are about. Commercial unmanned aircraft have a wide range of uses, from monitoring pipelines and power lines, to wildlife censuses, to agriculture. In Japan, they’ve been used for years for pesticide applications on crops. They’re far superior to manned aircraft for that — the Japanese government even partnered with Yamaha to develop a craft specifically designed for the job.”

    Nomenclature aside, drones are stuck in a time warp here in the United States. The Federal Aviation Administration (FAA) prohibits their general use pending development of final regulations; such rules have been in the works for years but have yet to be released. Still, the FAA does allow some variances on the ban: government agencies and universities, after thrashing through a maze of red tape, can obtain waivers for specific tasks. And for the most part, field researchers who have hands-on-joystick experience with the aircraft are wildly enthusiastic about them.

    “They give us superb data,” says Michael Hutt, the Unmanned Aircraft Systems Manager for the U.S. Geological Survey (USGS). USGS is currently using drones — specifically the fixed-wing RQ-11A Raven and the hovering RQ-16A T-Hawk — for everything from surveying abandoned mine sites to monitoring coal seam fires, estimating waterfowl populations, and evaluating pygmy rabbit habitat.

    “Landsat [satellites] have been the workhorse for most of our projects,” continues Hutt, “and they’re pretty good as far as they go, but they have definite limitations. You’re able to take images only when a satellite passes overhead, obviously, and you’re limited to what you can see by the weather. Also, your resolution is only 30 meters. With the Raven or Hawk, we can fly anytime we want. And because we fly low and slow, our data is collected in resolutions of centimeters, not meters. It’s just a level of detail we’ve never had before.”

    And highly detailed data, of course, translates as highly accurate and reliable data. Hutt offers an example: a project to map fossil beds at White Sands National Monument.

    “The problem is that these beds aren’t static,” says Hutt. “They tend to erode, or they can get covered up with blowing sand. It has been difficult to tell just what we have, and where the fossils are precisely. But with our T-Hawks, we’re able to get down to 2.5-centimeter resolution. We know exactly where the fossils are — it doesn’t matter if a sand dune forms on top of them.”

    Even with current FAA restrictions, Hutt notes, universities are crowding into the drone zone.

    “Not long ago, maybe five or six universities were using them, or had applied for permits,” Hutt says. “Now, there’s close to 200. The gathering of high-quality data is becoming easy and cheap. That’s because of aircraft advances, of course, and also because the payload packages — the sensors and cameras — are dropping dramatically in price. A few years ago, you needed a $500,000 mapping camera to obtain the information we’re now getting with an off-the-shelf $1,000 camera.”

    Unmanned aircraft may be easier to fly than F-22 fighter jets, but they impose their own challenges, observes Jeff Sloan, a top drone jock for USGS.

    “Dealing with wind is the major problem,” says Sloan. “The aircraft we’re using are really quite small, and they get pushed around easily. Also, though they can get into tighter spots than any manned aircraft, steep terrain can get you pretty tense. You’re flying very low, and there’s not much room for error.”

    If and when the FAA gives the green light for broad commercial drone applications, a large cohort of potential pilots with the requisite skill sets will be ready and waiting.

    “We’ve found that the best pilots are those who’ve had a lot of video game experience,” laughs Sloan, “so it’s good to know my kids have a future.”

    But back to the FAA: a primary reason for the rules delay is public worry over safety and privacy. Toscano says safety concerns are mostly just that — foreboding more than real risk.

    “It’s going to take a while before you see general risk tolerance, but it’ll come,” Toscano said. “The Internet started with real apprehension, and now, of course, there’s general acceptance. There are risks with the Internet, but they’re widely understood now, and people realize there’s a trade-off for the benefits.”

    System vulnerability is tops among safety concerns, acknowledges Toscano. Some folks worry that a drone’s wireless links could be severed or jammed, and that the craft would start noodling around independently, with potentially dire results.

    “That probably could be addressed by programming the drone with a pre-determined landing spot,” says Toscano. “It could default to a safe zone if something went wrong.”

    And what about privacy? Drones, says Toscano, don’t pose a greater threat to privacy than any other extant technology with potential eavesdropping applications.

    “The law is the critical factor here,” he insists. “The Fourth Amendment supports the expectation of privacy. If that’s violated by any means, it’s a crime, regardless of the technology responsible.”

    More to the point, says Toscano, we have to give unmanned aircraft a little time and space to develop: we’re still at Drones 1.0.

    “With any new technology, it’s the same story: crawl, walk, run,” Toscano says. “What Bezos did was show what UVS technology would look like at a full run — but it’s important to remember, we’re really still crawling.”

    To get drones up on their hind legs and sprinting — or soaring — they’re going to need a helping hand from the FAA. And things aren’t going swimmingly there, says attorney Schulman.

    “They recently issued a road map on the probable form of the new rules,” Schulman says, “and frankly, it’s discouraging. It suggests the regulations are going to be pretty burdensome, particularly for the smaller UAVs, which are the kind best suited for civilian uses. If that’s how things work out, it will have a major chilling effect on the industry.”

    January 27 2014

    The new bot on the block

    Fukushima changed robotics. More precisely, it changed the way the Japanese view robotics. And given the historic preeminence of the Japanese in robotic technology, that shift is resonating through the entire sector.

    Before the catastrophic earthquake and tsunami of 2011, the Japanese were focused on “companion” robots, says Rodney Brooks, a former Panasonic Professor of Robotics at MIT, co-founder and former CTO of iRobot, and the founder, chairman, and CTO of Rethink Robotics. The goal, says Brooks, was making robots that were analogues of human beings — constructs that could engage with people on a meaningful, emotional level. Cuteness was emphasized: a cybernetic, if much smarter, equivalent of Hello Kitty seemed the paradigm.

    But the multiple core meltdown at the Fukushima Daiichi nuclear complex following the 2011 tsunami changed that focus abruptly.

    “Fukushima was a wake-up call for them,” says Brooks. “They needed robots that could do real work in highly radioactive environments, and instead they had robots that were focused on singing and dancing. I was with iRobot then, and they asked us for some help. They realized they needed to make a shift, and now they’re focusing on more pragmatic designs.”

    Pragmatism was always the guiding principle for Brooks and his companies, and is currently manifest in Baxter, Rethink’s flagship product. Baxter is a breakthrough production robot for a number of reasons. Equipped with two articulated arms, it can perform a multitude of tasks. It requires no application code to start up, and no expensive software to function. No specialists are required to program it; workers with minimal technical background can “teach” the robot right on the production line through a graphical user interface and arm manipulation. Also, Baxter requires no cage — human laborers can work safely alongside it on the assembly line.

    Moreover, it is cheap: about $25,000 per unit. It is thus the robotic equivalent of the Model T, and like the Model T, Baxter and its subsequent iterations will impose sweeping changes in the way people live and work.

    “We’re at the point with production robots where we were with mobile robots in the late 1980s and early 1990s,” says Brooks. “The advances are accelerating dramatically.”

    What’s the biggest selling point for this new breed of robot? Brooks sums it up in a single word: dignity.

    “The era of cheap labor for factory line work is coming to a close, and that’s a good thing,” he says. “It’s grueling, and it can be dangerous. It strips people of their sense of worth. China is moving beyond the human factory line — as people there become more prosperous and educated, they aspire to more meaningful work. Robots like Baxter will take up the slack out of necessity.”

    And not just for the assemblage of widgets and gizmos. Baxter-like robots will become essential in the health sector, opines Brooks — particularly in elder care. As the Baby Boom piglet continues its course through the demographic python, the need for attendants is outstripping supply. No wonder: the work is low-paid and demanding. Robots can fill this breach, says Brooks, doing everything from preparing and delivering meals to shuttling laundry, changing bedpans and mopping floors.

    “Again, the basic issue is dignity,” Brooks said. “Robots can free people from the more menial and onerous aspects of elder care, and they can deliver an extremely high level of service, providing better quality of life for seniors.”

    Ultimately, robots could be more app than hardware: the sexy operating system on Joaquin Phoenix’s mobile device in the recent film “Her” may not be far off the mark. Basically, you’ll carry a “robot app” on your smartphone. The phone can be docked to a compatible mechanism — say, a lawn mower, or car, or humanoid mannequin — resulting in an autonomous device ready to trim your greensward, chauffeur you to the opera, or mix your Mojitos.

    YDreams Robotics, a company co-founded by Brooks protégé Artur Arsenio, is actively pursuing this line of research.

    “It’s just a very efficient way of marketing robots to mass consumers,” says Arsenio. “Smartphones basically have everything you need, including cameras and sensors, to turn mere things into robots.”

    YDreams has its first product coming out in April: a lamp. It’s a very fine, if utterly unaware, desk lamp on its own, says Arsenio, but when you connect it to a smartphone loaded with the requisite app, it can do everything from intelligently adjusting lighting to gauging your emotional state.

    “It uses its sensors to interface socially,” Arsenio says. “It can determine how you feel by your facial expressions and voice. In a video conference, it can tell you how other participants are feeling. Or if it senses you’re sad, it may Facebook your girlfriend that you need cheering up.”

    Yikes. That may be a bit more interaction than you want from a desk lamp, but get used to it. Robots could intrude in ways that may seem a little off-putting at first — but that’s a marker of any new technology. Moreover, says Paul Saffo, a consulting professor at Stanford’s School of Engineering and a technology forecaster of repute, the highest use of robots won’t be doing old things better. It will be doing new things, things that haven’t been done before, things that weren’t possible before the development of key technology.

    “Whenever we have new tech, we invariably try to use it to do old things in a new way — like paving cow paths,” says Saffo. “But the sooner we get over that — the sooner we look beyond the cow paths — the better off we’ll be. Right now, a lot of the thinking is, ‘Let’s have robots drive our cars, and look like people, and be physical objects.’”

    But the most important robots working today don’t have physical embodiments, says Saffo — think of them as ether-bots, if you will. Your credit application? It’s a disembodied robot that gets first crack at that. And the same goes for your resume when you apply for a job.

    In short, robots already are embedded in our lives in ways we don’t think of as “robotic.” This trend will only accelerate. At a certain point, things may start feeling a little — well, Singularity-ish. Not to worry — it’s highly unlikely Skynet will rain nuclear missiles down on us anytime soon. But the melding of robotic technology with dumb things nevertheless presents some profound challenges — mainly because robots and humans react on disparate time scales.

    “The real questions now are authority and accountability,” says Saffo. “In other words, we have to figure out how to balance the autonomy systems need to function with the control we need to ensure safety.”

    Saffo cites modern passenger planes like the Airbus 330 as an example.

    “Essentially they’re flying robots,” he says. “And they fly beautifully, conserving fuel to the optimal degree and so forth. But the design limits are so tight — if they go too fast, they can fall apart; if they go too slow, they stall. And when something goes wrong, the pilot has perhaps 50 kilometers to respond. At typical speeds, that doesn’t add up to much reaction time.”

    Saffo noted the crash of Air France Flight 447 in the mid-Atlantic in 2009 involved an Airbus 330. Investigations revealed the likely cause was turbulence complicated by the icing up of the plane’s speed sensors. This caused the autopilot to disengage, and the plane began to roll. The pilots had insufficient time to compensate, and the aircraft slammed into the water at 107 knots.

    “The pilot figured out what was wrong — but it was 20 seconds too late,” says Saffo. “To me, it shows we need to devote real effort to defining boundary parameters on autonomous systems. We have to communicate with our robots better. Ideally, we want a human being constantly monitoring the system, so he or she can intervene when necessary. And we need to establish parameters that make intervention even possible.”

    Rod Brooks will be speaking at the upcoming Solid Conference in May. If you are interested in robotics and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter.

    January 24 2014

    The lingering seduction of the page

    In an earlier post in this series, I examined the articulatory relationship between information architecture and user interface design, and argued that the tools that have emerged for constructing information architectures on the web will only get us so far when it comes to expressing information systems across diverse digital touchpoints. Here, I want to look more closely at these traditional web IA tools in order to tease out two things: (1) ways we might rely on these tools moving forward, and (2) ways we’ll need to expand our approach to IA as we design for the Internet of Things.

    First stop: the library

    The seminal text for Information Architecture as it is practiced in the design of online information environments is Peter Morville’s and Louis Rosenfeld’s Information Architecture for the World Wide Web, affectionately known as “The Polar Bear Book.”

    First published in 1998, The Polar Bear Book gave a name and a clear, effective methodology to a set of practices many designers and developers working on the web had already begun to encounter. Morville and Rosenfeld are both trained as professional librarians and were able to draw on this time-tested field in order to sort through many of the new information challenges coming out of the rapidly expanding web.

    If we look at IA as two sides of the same coin, The Polar Bear Book focuses on the largely top-down “Internet Librarian” side of information design. The other side of the coin approaches the problems posed by data from the bottom up. In Everything is Miscellaneous: The Power of the New Digital Disorder, David Weinberger argues that the fundamental problem of the “second order” (think “card catalogue”) organization typical of library sciences-informed approaches is that they fail to recognize the key differentiator of digital information: that it can exist in multiple locations at once, without any single location being the “home” position. Weinberger argues that in the “third order” of digital information practices, “understanding is metaknowledge.” For Weinberger, “we understand something when we see how the pieces fit together.”

    Successful approaches to organizing electronic data generally make liberal use of both top-down and bottom-up design tactics. Primary navigation (driven by top-down thinking) gives us a birds-eye view of the major categories on a website, allowing us to quickly focus on content related to politics, business, entertainment, technology, etc. The “You May Also Like” and “Related Stories” links come from work in the metadata-driven bottom-up space.

    On the web, this textually mediated blend of top-down and bottom-up is usually pretty successful. This is no surprise: the web is, after all, primarily a textual medium. At its core, HTML is a language for marking up text-based documents. It makes them interpretable by machines (browsers) so they can be served up for human consumption. We’ve accommodated images and sounds in this information ecology by marking them up with tags (either by professional indexers or “folksonomically,” by whoever cares to pitch in).

    There’s an important point here that often goes without saying: the IA we’ve inherited from the web is textual — it is based on the perception of the world mediated through the technology of writing. Herein lies the limitation of the IA we know from the web as we begin to design for the Internet of Things.

    Reading brains

    We don’t often think of writing as “technology,” but inasmuch as technology constitutes the explicit modification of techniques and practices in order to solve a problem, writing definitely fits the bill. Language centers can be pinpointed in the brain — these are areas programmed into our genes that allow us to generate spoken language — but in order to read and write, our brains must create new connections not accounted for in our genetic makeup.

    In Proust and the Squid, cognitive neuroscientist Maryanne Wolf describes the physical, neurological difference between a reading and writing brain and a pre-literate linguistic brain. Wolf writes that, with the invention of reading “we rearranged the very organization of our brain.” Whereas we learn to speak by being immersed in language, learning to read and write is a painstaking process of instruction, practice, and feedback. Though the two acts are linked by a common practice of language, writing involves a different cognitive process than speaking. It is one that relies on the technology of the written word. This technology is not transmitted through our genes; it is transmitted through our culture.

    It is important to understand that writing is not simply a translation of speech. This distinction matters because it has profound consequences. Wolf writes that “the new circuits and pathways that the brain fashions in order to read become the foundation for being able to think in different, innovative ways.” As the ability to read becomes widespread, this new capacity for thinking differently, too, becomes widespread.

    Though writing constitutes a major leap past speech in terms of cognitive process, it shares one very important common trait with spoken language: linearity. Writing, like speech, follows a syntagmatic structure in which meaning is constituted by the flow of elements in order — and in which alternate orders often convey alternate meanings.

    When it comes to the design of information environments, this linearity is generally a foregone conclusion, a feature of the cognitive landscape which “goes without saying” and is therefore never called into question. Indeed, when we’re dealing primarily with text or text-based media, there is no need to call it into question.

    In the case of embodied experience in physical space, however, we natively bring to bear a perceptual apparatus which goes well beyond the linear confines of written and spoken language. When we evaluate an event in our physical environment — a room, a person, a meaningful glance — we do so with a system of perception orders of magnitude more sophisticated than linear narrative. JJ Gibson describes this as the perceptual awareness resulting from a “flowing array of stimulation.” When we layer on top of that the non-linear nature of dynamic systems, it quickly becomes apparent that despite the immense gains in cognition brought about by the technology of writing, these advances still only partially equip us to adequately navigate immersive, physical connected environments.

    The trouble with systems (and why they’re worth it)

    Photo: Andy Fitzgerald, of content from Thinking in Systems: A Primer, by Donella Meadows.

    I have written elsewhere in more detail about challenges posed to linguistic thinkers by systems. To put all of that in a nutshell, complex systems baffle us because we have a limited capacity to track system-influencing inputs and outputs and system-changing flows. As systems thinking pioneer Donella Meadows characterizes them in her book Thinking in Systems: A Primer, self-organizing, nonlinear, feedback systems are “inherently unpredictable” and “understandable only in the most general way.”

    According to Meadows, we learn to navigate systems by constructing models that approximate a simplified representation of the system’s operation and allow us to navigate it with more or less success. As more and more of our world — our information, our social networks, our devices, and our interactions with all of these — becomes connected, our systems become increasingly difficult (and undesirable) to compartmentalize. They also become less intrinsically reliant on linear textual mediation: our “smart” devices don’t need to translate their messages to each other into English (or French or Japanese) in order to interact.

    This is both the great challenge and the great potential of the Internet of Things. We’re beginning to interact with our built information environments not only in a classically signified, textual way, but also in a physical-being-operating-in-the-world kind of way. The text remains — and the need to interact with that textual facet with the tools we’ve honed on the web (i.e. traditional IA) remains. But as the information environments we’re tasked with representing become less textual and more embodied, the tools we use to represent them must likewise evolve beyond our current text-based solutions.

    Fumbling toward system literacy

    In order to rise to meet this new context, we’re going to need as many semiotic paths as we can find — or create. And in order to do that, we will have to pay close attention to the cognitive support structures that normally “go without saying” in our conceptual models.

    This will be hard work. The payoff, however, is potentially revolutionary. The threshold at which we find ourselves is not merely another incremental step in technological advancement. The immersion in dynamic systems that the connected environment foreshadows holds the potential to re-shape the way we think — the way our brains are “wired” — much as reading and writing did. Though mediated by human-made, culturally transmitted technology (e.g. moveable type, or, in this case, Internet protocols), these changes hold the power to affect our core cognitive process, our very capacity to think.

    What this kind of “system literacy” might look like is as puzzling to me now as reading and writing must have appeared to pre-literate societies. The potential of being able to grasp how our world is connected in its entirety — people, social systems, economies, trade, climate, mobility, marginalization — is both mesmerizing and terrifying. Mesmerizing because it seems almost magical; terrifying because it hints at how unsophisticated and parochial our current perspective must look from such a vantage point.

    As information architects and interface designers, all of this means that we’re going to have to be nimble and creative in the way we approach design for these environments. We’re going to have to cast out beyond the tools and techniques we’re most comfortable with to find workable solutions to new problems of complexity. We aren’t the only ones working on this, but our role is an important one: engineers and users alike look to us to frame the rhetoric and usability of immersive digital spaces. We’re at a major transition in the way we conceive of putting together information environments. Much like Morville and Rosenfeld in 1998, we’re “to some degree all still making it up as we go along.” I don’t pretend to know what a fully developed information architecture for the Internet of Things might look like, but in the spirit of exploration, I’d like to offer a few pointers that might help nudge us in the right direction — a topic I’ll tackle in my next post.

    January 23 2014

    Podcast: Solid, tech for humans, and maybe a field trip?

    We were snowed in, but the phones still worked, so Jon Bruner, Mike Loukides, and I got together on the phone to have a chat. We start off talking about the results of the Solid call for proposals, but as is often the case, the conversation meandered from there.

    Here are some of the links we mention in this episode:

    You can subscribe to the O’Reilly Radar podcast through iTunes or SoundCloud, or directly through our podcast’s RSS feed.

    January 15 2014

    Four short links: 15 January 2014

    1. Hackers Gain ‘Full Control’ of Critical SCADA Systems (IT News) — The vulnerabilities were discovered by Russian researchers who over the last year probed popular and high-end industrial control systems (ICS) and supervisory control and data acquisition (SCADA) systems used to control everything from home solar panel installations to critical national infrastructure. More on the Botnet of Things.
    2. mcl — Markov Cluster Algorithm, a fast and scalable unsupervised cluster algorithm for graphs (also known as networks) based on simulation of (stochastic) flow in graphs. (A toy sketch of the algorithm follows this list.)
    3. Facebook to Launch Flipboard-like Reader (Recode) — what I’d actually like to see is Facebook join the open web by producing and consuming RSS/Atom/anything feeds, but that’s a long shot. I fear it’ll either limit you to whatever circle-jerk-of-prosperity paywall-penetrating content-for-advertising-eyeballs trades the Facebook execs have made, or else it’ll be a leech on the scrotum of the open web by consuming RSS without producing it. I’m all out of respect for empire-builders who think you’re a fool if you value the open web. AOL might have died, but its vision of content kings running the network is alive and well in the hands of Facebook and Google. I’ll gladly post about the actual product launch if it is neither partnership eyeball-abuse nor parasitism.
    4. Map Projections Illustrated with a Face (Flowing Data) — really neat, wish I’d had these when I was getting my head around map projections.
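    To make item 2 above concrete: MCL alternates “expansion” (matrix powers, which spread flow along the graph’s paths) with “inflation” (elementwise powers, which strengthen already-strong flows) on a column-stochastic matrix until the flow stabilizes into clusters. Here is a minimal, unoptimized Python sketch of that idea using numpy; the function name, defaults, and convergence test are my own assumptions, not the mcl package’s API:

        import numpy as np

        def mcl(adjacency, expansion=2, inflation=2.0, max_iter=100, tol=1e-6):
            """Toy Markov Cluster algorithm: alternate expansion and inflation
            on a column-stochastic matrix until the flow stabilizes."""
            n = adjacency.shape[0]
            M = adjacency.astype(float) + np.eye(n)       # add self-loops
            M /= M.sum(axis=0)                            # column-normalize
            for _ in range(max_iter):
                prev = M
                M = np.linalg.matrix_power(M, expansion)  # expansion: flow spreads along paths
                M = M ** inflation                        # inflation: strong flows get stronger
                M /= M.sum(axis=0)                        # renormalize columns
                if np.abs(M - prev).max() < tol:
                    break
            # Rows with surviving mass are "attractors"; each one's nonzero
            # columns form a cluster (the set comprehension dedupes them).
            return {tuple(np.nonzero(row > tol)[0]) for row in M if row.sum() > tol}

        # Two triangles joined by a single bridge edge separate into two clusters.
        A = np.array([[0, 1, 1, 0, 0, 0],
                      [1, 0, 1, 0, 0, 0],
                      [1, 1, 0, 1, 0, 0],
                      [0, 0, 1, 0, 1, 1],
                      [0, 0, 0, 1, 0, 1],
                      [0, 0, 0, 1, 1, 0]])
        print(mcl(A))   # -> {(0, 1, 2), (3, 4, 5)}

    The real mcl implementation adds pruning and sparse-matrix tricks to make this scale; the sketch only shows the core iteration.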

    December 30 2013

    Four short links: 30 December 2013

    1. tooldiag — a collection of methods for statistical pattern recognition. Implemented in C.
    2. Hacking MicroSD Cards (Bunnie Huang) — In my explorations of the electronics markets in China, I’ve seen shop keepers burning firmware on cards that “expand” the capacity of the card — in other words, they load a firmware that reports the capacity of a card is much larger than the actual available storage. The fact that this is possible at the point of sale means that most likely, the update mechanism is not secured. MicroSD cards come with embedded microcontrollers whose firmware can be exploited.
    3. 30c3 — recordings from the 30th Chaos Communication Congress.
    4. IOT Companies, Products, Devices, and Software by Sector (Mike Nicholls) — astonishing amount of work in the space, especially given this list is inevitably incomplete.

    December 26 2013

    Four short links: 26 December 2013

    1. Nest Protect Teardown (Sparkfun) — initial teardown of another piece of domestic industrial Internet.
    2. Logs — The distributed log can be seen as the data structure which models the problem of consensus. Not kidding when he calls it “real-time data’s unifying abstraction”. (A toy log sketch follows this list.)
    3. Mining the Web to Predict Future Events (PDF) — Mining 22 years of news stories to predict future events. (via Ben Lorica)
    4. Nanocubes — a fast data structure for in-memory data cubes developed at the Information Visualization department at AT&T Labs – Research. Nanocubes can be used to explore datasets with billions of elements at interactive rates in a web browser, and in some cases it uses sufficiently little memory that you can run a nanocube in a modern-day laptop. (via Ben Lorica) (A second toy after this list illustrates the pre-aggregation idea.)
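    Regarding item 2 above: the “unifying abstraction” claim is easier to see in code. A log is an append-only sequence addressed by offset, and replicas that replay the same entries in the same order converge on the same state, which is exactly what consensus is meant to guarantee. A toy Python sketch (class and method names are illustrative, not any real system’s API):

        class Log:
            """Append-only sequence of records, addressed by offset."""
            def __init__(self):
                self.entries = []

            def append(self, record):
                self.entries.append(record)
                return len(self.entries) - 1   # the record's offset

            def read(self, offset):
                return self.entries[offset:]   # everything from offset onward

        class Replica:
            """State machine that replays the log; same order => same state."""
            def __init__(self):
                self.state = {}
                self.offset = 0

            def catch_up(self, log):
                for key, value in log.read(self.offset):
                    self.state[key] = value
                    self.offset += 1

        log = Log()
        log.append(("x", 1))
        log.append(("y", 2))
        a, b = Replica(), Replica()
        a.catch_up(log)
        b.catch_up(log)
        assert a.state == b.state == {"x": 1, "y": 2}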
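    And for item 4: Nanocubes itself is a carefully engineered C++ structure, but the core trick, pre-aggregating counts at every spatial resolution so that interactive queries never rescan the raw points, can be shown with a much cruder toy (all names here are hypothetical, not the Nanocubes API):

        from collections import Counter

        def build_count_cube(points, max_level=4):
            """Toy multi-resolution cube: count points per grid cell at every
            zoom level, so a query at any level reads a single bucket."""
            cube = {level: Counter() for level in range(max_level + 1)}
            for x, y in points:                    # coordinates normalized to [0, 1)
                for level in range(max_level + 1):
                    n = 2 ** level                 # grid is n x n at this level
                    cube[level][(int(x * n), int(y * n))] += 1
            return cube

        points = [(0.1, 0.2), (0.12, 0.22), (0.8, 0.9)]
        cube = build_count_cube(points)
        print(cube[1][(0, 0)])   # -> 2: both nearby points share one coarse cell

    The real system shares substructure between cells to keep billions of points in memory; the toy only shows why queries stay fast.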