February 12 2014

Searching for the software stack for the physical world

When I flip through a book on networking, one of the first things I look for is the protocol stack diagram. An elegant representation of the protocol stack can help you make sense of where to put things, separate out important concepts, and explain how a technology is organized.

I’m no stranger to the idea of trying to diagram out a protocol space; I had a successful effort back when the second edition of my book on 802.11 was published. I’ve been a party to several awesome conversations recently about how to organize the broad space that’s referred to as the “Internet of Things.”

Diagram: the IoT stack.

Let’s start at the bottom, with the hardware layer, which is labeled Things. These are devices that aren’t typically thought of as computers. Sure, they wind up using a computing ecosystem, but they are not really general-purpose computers. These are embedded devices that interact with the physical world, such as sensors and motors. Primarily, they will either be the eyes and ears of the overall system, or the channel for actions on the physical world. They will be designed around low power consumption, and therefore will use a low-throughput communication channel. If they communicate with a network, it will typically be through a radio interface, but the tyranny of limited power means that the network interface will usually be constrained in some way.

Things provide their information or are instructed to act on the world by connecting through a Network Transport layer. Networking allows reports from devices to be received and acted on. In this model, the network transport consists of a set of technologies that move data around, and is a combination of the OSI data link, networking, and transport layers. Mapping into technologies that we use, it would be TCP/IP on Wi-Fi for packet transport, with data carried over a protocol like REST.
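
To make that layer concrete, here is a minimal sketch of a single sensor report as a REST call. The endpoint URL and payload fields are invented for illustration, and a heavily power-constrained device might well use something lighter than HTTP, such as CoAP or MQTT.

```python
import json
import urllib.request

# Hypothetical hub endpoint; a real deployment defines its own URL scheme.
ENDPOINT = "http://hub.local/api/v1/readings"

def report_reading(sensor_id: str, value: float, unit: str) -> None:
    """POST one sensor reading to the hub: data over REST, REST over TCP/IP."""
    payload = json.dumps({"sensor": sensor_id, "value": value, "unit": unit})
    req = urllib.request.Request(
        ENDPOINT,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire-and-forget; real firmware would retry

report_reading("living-room-temp-1", 21.4, "celsius")
```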

The Data layer aggregates many devices into an overall collective picture. In the IoT world, users are interacting with the physical world and asking questions like “what is the temperature in this room?” That question isn’t answered by just one device (or if it is, the temperature of the room is likely to fluctuate wildly). Each component of the hardware layer at the bottom of the stack contributes a piece of the overall contextual picture. A light bulb can be on or off, but to determine the desired state, you might mix in overall power consumption of the building, how many people are present, the locations of those people, total demand on the electrical grid, and possibly even the preferences of the people in an area. (I once toured an FAA regional air traffic control center, and the groups of controllers that serve each sub-region of airspace could customize the lighting, ranging from normal lighting to quite dim lighting.)

The value of the base level of devices in this environment depends on how much unique context they can add to the overall picture, enabling software that operates on physical-world concepts like room temperature or whether I am asleep. Looking back on it, the foundation for the data layer was being laid years ago, as is obvious from reading section three of Tim O’Reilly’s “What is Web 2.0?” essay from 2005. In this world, you are either contributing to or working with context, or you are not doing very interesting work.

Sharing context widely requires APIs. If the real-world import of a piece of data is only apparent when it is combined with other sources of data, an application needs to be able to mash up several data sources to create its own unique picture of the world. APIs enable programmers to build context that represents what is important to users, a cognitively significant aggregation. “Room temperature” may depend on getting data from temperature, humidity, and sunlight sensors, perhaps in several locations in a room.
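
As a sketch of what that aggregation might look like (the sensor names, values, and blending rule are all invented for illustration), “room temperature” becomes a computed value rather than a single reading:

```python
from statistics import mean

# Hypothetical readings mashed up from several sensor APIs.
wall_temps_c = {"north-wall": 20.8, "south-wall": 22.1, "ceiling": 23.0}
relative_humidity = 0.41  # from a separate humidity sensor's API

def room_temperature(temps_c: dict[str, float]) -> float:
    """Collapse several point measurements into one contextual value."""
    return mean(temps_c.values())

def feels_like(temp_c: float, humidity: float) -> float:
    # Crude illustrative adjustment; a real model would use a heat-index formula.
    return temp_c + 2.0 * max(0.0, humidity - 0.40)

t = room_temperature(wall_temps_c)
print(f"room temperature: {t:.1f} C, feels like {feels_like(t, relative_humidity):.1f} C")
```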

In addition to reporting up, APIs need to enable control to flow down the stack. If “room temperature” is a complex state that depends on data from several sensors, it may require acting on several aspects of a climate control system to change: the heater, air conditioner, fan, and maybe even whether the blinds in a room are open or closed.
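
Going the other way, a sketch of control flowing down the stack might fan one high-level request out to several actuators (the device names and commands here are hypothetical):

```python
class Actuator:
    """Stand-in for a device-control API client; purely illustrative."""

    def __init__(self, name: str):
        self.name = name

    def command(self, **settings) -> None:
        print(f"{self.name} <- {settings}")  # a real client would issue an API call

heater, ac, fan, blinds = (Actuator(n) for n in ("heater", "ac", "fan", "blinds"))

def set_room_temperature(current_c: float, target_c: float) -> None:
    """Translate one high-level request into several low-level commands."""
    if current_c < target_c:
        heater.command(on=True)
        ac.command(on=False)
        blinds.command(open=True)  # let sunlight do some of the work
    else:
        heater.command(on=False)
        ac.command(on=True)
        fan.command(speed="medium")
        blinds.command(open=False)

set_room_temperature(current_c=24.2, target_c=21.0)
```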

Designing APIs is hard; in addition to getting the data operations and manipulation right, you need to design for performance and security, and to enable applications to evolve over time. A good place to start is O’Reilly’s API strategy guide.

Finally, we reach the top of the stack when we get to Applications. Early applications allowed you to control the physical world like a remote control. Nice, but by no means the end of the story. One of the smartest technology analysts I know often challenges me by saying, “It’s great that you can report anomalous behavior. If you know something is wrong, do something about it!” The “killer app” in this world is automation: being able to work without explicit instructions from the user, or to continuously fine-tune and optimize the real world based on what has flowed up the stack.

Unlike my previous effort in depicting the world of 802.11, this stack is very much a work in progress. Please leave your thoughts in the comments.

February 08 2014

Four short links: 7 February 2014

  1. 12 Predictions About the Future of Programming (Infoworld) — not a bad set of predictions, except for the inane “squeezing” view of open source.
  2. Conceal (GitHub) — Facebook Android tool for apps to encrypt data and large files stored in public locations, for example SD cards.
  3. Dreamliner Software — all three of the jet’s navigation computers failed at the same time. “The cockpit software system went blank,” IBN Live, an Indian television station, reported. The Internet of Rebooting Things.
  4. Contiki — open source connective OS for IoT.

February 07 2014

    Why Solid, why now

    A few years ago at OSCON, one of the tutorials demonstrated how to click a virtual light switch in Second Life and have a real desk lamp light up in the room. Looking back, it was rather trivial, but it was striking at the time to see software people taking an interest in the “real world.” And what better metaphor for the collision of virtual and real than a connection between Second Life and the Portland Convention Center?

    In December 2012, our Radar team was meeting in Sebastopol and we were talking about trends in robotics, Maker DIY, Internet of Things, wearables, smart grid, industrial Internet, advanced manufacturing, frictionless supply chain, etc. We were trying to figure out where to put our focus among all of these trends when suddenly it was obvious (at least to Mike Loukides, who pointed it out): they are all more alike than different, and we could focus on all of them by looking at the relationships among them. The Solid program was conceived that day.

    We called it Solid because most of the people we work with are software people and we wanted to draw attention to the non-virtuality of all this stuff. This isn’t the stuff of some abstract “cyber” domain; these were all trends that lived at the nexus of software and very real physical hardware. A big part of this story is that hardware is getting less hard, and more like software in its malleability, but there remains a sheer physicality to this stuff that makes it different. Negroponte told us to forget atoms and focus on bits, and we did for 20 years. But now, the pendulum is swinging back and atoms matter again — except now they are atoms layered with bits, and that makes this different.

    Software and hardware have been essentially separate mediums, wielded by the separate tribes of hackers/coders/programmers/software engineers and capital-E Engineers. Solid is about the beginning of something different: the collision of software and hardware into a new combined medium, wielded by people (or teams) who understand both, bits and atoms uniting in the same intellectual space. What should we call it? Fluidware? That won’t be it, but I’d like eventually to come up with a term that conveys that a combination of software and less-hard hardware is something distinct.

    Naturally, this impacts more than just the stuff that we produce from this new medium. It influences organizational models; the things engineers, designers, and hackers need to know to do their jobs; and the business models you can deliver with bit-enabled collections of atoms.

    With Solid, we envision an annual program of engagement that will span various media as well as multiple in-person events. We’re going to do Solid:Local events, which we started in Cambridge, MA, on February 6, and we’re going to anchor the program with the Solid Conference in San Francisco this May, a conference that is going to be very different from the others we do.

    First off, the Solid Conference will be at Ft. Mason, a venue we chose for its beautiful views, and because it can host the kinds of industrial-scale things we’d like to demonstrate there. We still have a lot to do to bring this program together, but our guiding principle is to give our audience a visceral feel for this world of combined hardware and software. We’re going for something that feels less like a conference, and more like a miniature World’s Fair Exposition with talks.

    I believe that we are on the cusp of something as dramatic as the Industrial Revolution. In that time, a series of industrial expositions drew millions of people to London, Paris, Philadelphia and elsewhere to experience firsthand the industrialization of their world.

    Here in my home office outside of Philadelphia, I’m about 15 miles from the location of the 1876 Exposition that we’re using for inspiration. If you entered Machinery Hall during that long summer, with its 1,400-horsepower Corliss engine powering belt-driven machinery throughout the vast hall, you couldn’t help but leave with an appreciation for how the world was changing. We won’t be building a Machinery Hall, certainly not at that scale, but we are building a program with as much show as tell. We want to viscerally demonstrate the principles that we see at work here.

    We hope you can join us in San Francisco or at one of the many Solid:Locals we plan to put on. Share your stories about interesting intersections of hardware and software with us, too.

    February 05 2014

    Four short links: 5 February 2014

    1. sigma.js — Javascript graph-drawing library (node-edge graphs, not charts).
    2. DARPA Open Catalog — all the open source published by DARPA. Sweet!
    3. Quantified Vehicle Meetup — Boston meetup around intelligent automotive tech including on-board diagnostics, protocols, APIs, analytics, telematics, apps, software and devices.
    4. AT&T See Future In Industrial Internet — partnering with GE, M2M-related customers increased by more than 38% last year. (via Jim Stogdill)

    February 04 2014

    The Industrial Internet of Things


    Photo: Kipp Bradford. This industrial burner from Weishaupt churns through 40 million BTUs per hour of fuel.

    A few days ago, a company called Echelon caused a stir when it released a new product called IzoT. You may never have heard of Echelon; for most of us, they are merely a part of the invisible glue that connects modern life. But more than 100 million products — from street lights to gas pumps to HVAC systems — use Echelon technology for connectivity. So, for many electrical engineers, Echelon’s products are a big deal. Thus, when Echelon began touting IzoT as the future of the Industrial Internet of Things (IIoT), it was bound to get some attention.

    Admittedly, the Internet of Things (IoT) is all the buzz right now. Echelon, like everyone else, is trying to capture some of that mindshare for its products. In this case, the product is a proprietary system of chips, protocols, and interfaces for enabling the IoT on industrial devices. But what struck me and my colleagues was how outdated this approach seems, and how far it misses the point of the emerging IoT.

    Although there are many different ways to describe the IoT, it is essentially a network of devices where all the devices:

    1. Have local intelligence
    2. Have a shared API so they can speak with each other in a useful way, even if they speak multiple protocols
    3. Push and pull status and command information from the networked world
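
    Read as code, those three properties might look something like the following sketch (the message envelope and method names are invented; no vendor’s actual API is implied):

```python
import json

class SmartDevice:
    """Toy model of the three properties above."""

    def __init__(self, device_id: str):
        self.device_id = device_id
        self.state = {"on": False}

    # 1. Local intelligence: the device decides some things on its own.
    def tick(self, ambient_light_lux: float) -> None:
        self.state["on"] = ambient_light_lux < 10.0  # e.g. turn on at dusk

    # 2. Shared API: a common message envelope, whatever protocol carries it.
    def describe(self) -> str:
        return json.dumps({"id": self.device_id, "state": self.state})

    # 3. Push status up to the network, pull commands down from it.
    def handle_command(self, message: str) -> None:
        self.state.update(json.loads(message))

lamp = SmartDevice("lamp-42")
lamp.tick(ambient_light_lux=3.0)
print(lamp.describe())                # status pushed upward
lamp.handle_command('{"on": false}')  # command pulled downward
```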

    In the industrial context, rolling out a better networking chip is just a minor improvement on an unchanged 1980s practice that requires complex installations, including running miles of wire inside walls. This IzoT would have been a breakthrough product back when I got my first Sony Walkman.

    This isn’t a problem confined to Echelon. It’s a problem shared by many industries. I just spent several days wandering the floor of the International Air Conditioning, Heating, and Refrigeration (AHR) trade show, a show about as far from CES as you can get: truck-sized boilers and cooling towers that look unchanged from their 19th-century origins dotted the floor. For most of the developed world, HVAC is where the lion’s share of our energy goes. (That alone is reason enough to care about it.) It’s also a treasure trove of data (think Nest), and a great place for all the obvious, sensible IoT applications like monitoring, control, and network intelligence. But after looking around at IoT products at AHR, it was clear that they came from Bizarro World.[1]


    Photo: Kipp Bradford. Copeland alone has sold more than 100 million compressors. That’s a lot of “things” to get on the Internet.

    I spoke with an engineer showing off one of the HVAC industry’s leading low-power wireless networking technologies for data centers. He told me his company’s new wireless sensor network system runs at 2.6 GHz, not 2.4 GHz (though the data sheet doesn’t confirm or deny that). Looking at the products that won industry innovation awards, I was especially depressed. Take, for example, this variable-speed compressor technology. When you put variable-speed motors into an air-conditioning compressor, you get a 25–35% efficiency boost. That’s a lot of energy saved! And it looks just like variable-speed technology that has been around for decades — just not in the HVAC world.

    Now here’s where things get interesting: variable-speed control demands a smart processor controlling the compressor motor, plus the intelligent sensors to tell the motor when to vary its speed. With all that intelligence, I should be able to connect it to my network. So, when will my air conditioner talk to my Nest so I can optimize my energy consumption? Never. Or at least not until industry standards and government regulations overlap just right and force them to talk to each other, just as recent regulatory conditions forced 40-year-old variable-motor control technology into 100-year-old compressor technology.

    Of course, like any inquisitive engineer, I had to ask the manufacturer of my high-efficiency boiler if it was possible to hook that up to my Nest to analyze performance. After he said, “Sure, try hooking up the thermostat wire,” and made fun of me for a few minutes, the engineer said, “Yeah, if you can build your own modbus-to-wifi bridge, you can access our modbus interface and get your boiler online. But do you really want some Russian hacker controlling your house heat?” It would be so easy for my Nest thermostat to have a useful conversation with my boiler beyond “on/off,” but it doesn’t.
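
    For the curious, the bridge the engineer described might look roughly like this sketch, using the pymodbus library (version 3.x assumed); the bridge address, register number, and scaling are all invented, since a real boiler’s register map lives in its service manual.

```python
from pymodbus.client import ModbusTcpClient

# All values hypothetical: a Modbus-TCP bridge in front of the boiler,
# and a made-up holding register for supply water temperature.
BRIDGE_HOST = "192.168.1.50"
BOILER_TEMP_REGISTER = 0x0010

client = ModbusTcpClient(BRIDGE_HOST, port=502)
client.connect()

result = client.read_holding_registers(address=BOILER_TEMP_REGISTER, count=1, slave=1)
if not result.isError():
    # Assumed scaling: the register stores tenths of a degree Celsius.
    print(f"supply water temperature: {result.registers[0] / 10:.1f} C")

client.close()
```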

    Don’t get me wrong. I really am sympathetic to engineering within the constraints of industrial (versus consumer) environments, having spent a period of my life designing toys and another period making medical and industrial products. I’ve had to use my engineering skills to make things as dissimilar as Elmo and dental implants. But constraints are no excuse for trying to patch up, or hide behind, outdated technology.

    The electrical engineers designing IoT devices for consumers have created exciting and transformative technologies like Z-Wave, ZigBee, BLE, Wi-Fi, and more, giving consumer devices robust and nearly transparent connectivity with increasingly easy installation. Engineers in the industrial world seem to be stuck making small technological tweaks that might enhance safety, reliability, robustness, and NSA-proof security of device networks. This represents an unfortunate deepening of the bifurcation of the Internet of Things. It also represents a huge opportunity for those who refuse to submit to the notion that the IoT is either consumer or industrial, but never both.

    For HVAC, innovation in 2014 means solving problems from 1983 with 1984 technology because the government told them so. The general attitude can be summed up as: “We have all the market share and high barriers to entry, so we don’t care about your problems.” Left alone, this industry (like so many others) will keep building walls that prevent us from having real control over our devices and our data. That directly contradicts a key goal of the IoT: connecting our smart devices together.

    And that’s where the opportunity lies. There is significant value in HVAC and similar industries for any company that can break down the cultural and technical barriers between the Internet of Things and the Industrial Internet of Things. Companies that recognize the new business models created by well-designed, smart, interconnected devices will be handsomely rewarded. When my Nest can talk to my boiler, air conditioner, and Hue lights, all while analyzing performance versus weather data, third parties could sell comfort contracts, efficiency contracts, or grid stabilization contracts.

    As a matter of fact, we are already seeing a $16 billion market for grid stabilization services opened up by smart, connected heating devices. It’s easy to envision a future where the electric company pays me during peak load times because my variable-speed air conditioner slows down, my Hue lights dim imperceptibly, and my clothes dryer pauses, all reducing my grid load — or my heater charges up a bank of thermal storage bricks in advance of a cold front before I return home from work. Perhaps I can finally quantify the energy savings of the efficiency improvements that I make.
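
    A sketch of what answering such a peak-load event might involve on the home’s side (the event schema and the per-device wattage figures are invented for illustration):

```python
# Hypothetical demand-response event broadcast by the utility.
peak_event = {"type": "peak_load", "shed_request_watts": 600}

# Invented figures: what each device offers to shed without hurting comfort.
SHED_OFFERS = {
    "variable-speed air conditioner": 350,  # slow the compressor
    "hue lights": 50,                       # dim imperceptibly
    "clothes dryer": 200,                   # pause the cycle
}

def respond_to_peak(event: dict) -> int:
    """Shed load device by device until the utility's request is met."""
    remaining = event["shed_request_watts"]
    total = 0
    for device, watts in SHED_OFFERS.items():
        if remaining <= 0:
            break
        shed = min(watts, remaining)
        print(f"{device}: shedding {shed} W")
        total += shed
        remaining -= shed
    return total

print(f"total shed: {respond_to_peak(peak_event)} W")
```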

    My Sony Walkman already went the way of the dodo, but there’s still a chance to blow open closed industries like HVAC and bridge the IoT and the IIoT before my Beats go out of style.


    [1] I’m not sorry for the Super Friends reference.

    January 31 2014

    Drone on

    Jeff Bezos’ recent demonstration of a drone aircraft simulating delivery of an Amazon parcel was more stunt than technological breakthrough. We aren’t there yet. Yes, such things may well come to pass, but there are obstacles aplenty to overcome — not so much engineering snags, but cultural and regulatory issues.

    The first widely publicized application of modern drone aircraft — dropping Hellfire missiles on suspected terrorists — greatly skewed perceptions of the technology. On the one hand, the sophistication of such unmanned systems generated admiration from technophiles (and also average citizens who saw them as valuable adjuncts in the war against terrorism). On the other, the significant civilian casualties that were collateral to some strikes have engendered outrage. Further, the fear that drones could be used for domestic spying has ratcheted up our paranoia, particularly in the wake of Edward Snowden’s revelations of National Security Agency overreach.

    It’s as though drones have been painted with a giant red “D,” à la The Scarlet Letter. Manufacturers of the aircraft, not surprisingly, are desperate to rehabilitate the technology’s image, and properly so. Drones have too many exciting — even essential — civilian applications to restrict their use to war.

    That’s why drone proponents don’t want us to call them “drones.” They’d prefer we use the periphrastic term, “Unmanned Aerial Vehicle,” or at least the acronym — UAV. Or depending on whom you talk to, Unmanned Vehicle System (UVS).

    Will either catch on? Probably not. Think back: do we call it Star Wars or the Strategic Defense Initiative? Obamacare, or the Affordable Care Act? Popular culture has a way of reducing bombastic rubrics to pithy, evocative phrases. Chances are pretty good “drones” it is, and “drones” it’ll stay.

    And that’s not so bad, says Brendan Schulman, an attorney with the New York office of Kramer Levin Naftalis & Frankel.

    “People need to use terminology they understand,” says Schulman, who serves as the firm’s e-discovery counsel and specializes in commercial unmanned aviation law. “And they understand ‘drones.’ I know that causes some discomfort to manufacturers and people who use drones for commercial purposes, but I have no trouble with the term. I think we can live with it.”

    Michael Toscano, the president and CEO of the Association of Unmanned Vehicle Systems International, feels differently: not surprisingly, he favors the acronym “UVS” over “drone.”

    “The term ‘drones’ evokes a hostile, military device,” says Toscano. “That isn’t what the broad applications of this technology are about. Commercial unmanned aircraft have a wide range of uses, from monitoring pipelines and power lines, to wildlife censuses, to agriculture. In Japan, they’ve been used for years for pesticide applications on crops. They’re far superior to manned aircraft for that — the Japanese government even partnered with Yamaha to develop a craft specifically designed for the job.”

    Nomenclature aside, drones are stuck in a time warp here in the United States. The Federal Aviation Administration (FAA) prohibits their general use pending development of final regulations; such rules have been in the works for years but have yet to be released. Still, the FAA does allow some variances on the ban: government agencies and universities, after thrashing through a maze of red tape, can obtain waivers for specific tasks. And for the most part, field researchers who have hands-on-joystick experience with the aircraft are wildly enthusiastic about them.

    “They give us superb data,” says Michael Hutt, the Unmanned Aircraft Systems Manager for the U.S. Geological Survey (USGS). USGS is currently using drones — specifically the fixed-wing RQ-11A Raven and the hovering RQ-16A T-Hawk — for everything from surveying abandoned mine sites to monitoring coal seam fires, estimating waterfowl populations, and evaluating pygmy rabbit habitat.

    “Landsat [satellites] have been the workhorse for most of our projects,” continues Hutt, “and they’re pretty good as far as they go, but they have definite limitations. You’re able to take images only when a satellite passes overhead, obviously, and you’re limited in what you can see by the weather. Also, your resolution is only 30 meters. With the Raven or Hawk, we can fly anytime we want. And because we fly low and slow, our data is collected in resolutions of centimeters, not meters. It’s just a level of detail we’ve never had before.”

    And highly detailed data, of course, translates as highly accurate and reliable data. Hutt offers an example: a project to map fossil beds at White Sands National Monument.

    “The problem is that these beds aren’t static,” says Hutt. “They tend to erode, or they can get covered up with blowing sand. It has been difficult to tell just what we have, and where the fossils are precisely. But with our T-Hawks, we’re able to get down to 2.5-centimeter resolution. We know exactly where the fossils are — it doesn’t matter if a sand dune forms on top of them.”

    Even with current FAA restrictions, Hutt notes, universities are crowding into the drone zone.

    “Not long ago, maybe five or six universities were using them, or had applied for permits,” Hutt says. “Now, there’s close to 200. The gathering of high-quality data is becoming easy and cheap. That’s because of aircraft advances, of course, and also because the payload packages — the sensors and cameras — are dropping dramatically in price. A few years ago, you needed a $500,000 mapping camera to obtain the information we’re now getting with an off-the-shelf $1,000 camera.”

    Unmanned aircraft may be easier to fly than F-22 fighter jets, but they impose their own challenges, observes Jeff Sloan, a top drone jock for USGS.

    “Dealing with wind is the major problem,” says Sloan. “The aircraft we’re using are really quite small, and they get pushed around easily. Also, though they can get into tighter spots than any manned aircraft, steep terrain can get you pretty tense. You’re flying very low, and there’s not much room for error.”

    If and when the FAA gives the green light for broad commercial drone applications, a large cohort of potential pilots with the requisite skill sets will be ready and waiting.

    “We’ve found that the best pilots are those who’ve had a lot of video game experience,” laughs Sloan, “so it’s good to know my kids have a future.”

    But back to the FAA: a primary reason for the rules delay is public worry over safety and privacy. Toscano says safety concerns are mostly just that — foreboding more than real risk.

    “It’s going to take a while before you see general risk tolerance, but it’ll come,” Toscano said. “The Internet started with real apprehension, and now, of course, there’s general acceptance. There are risks with the Internet, but they’re widely understood now, and people realize there’s a trade-off for the benefits.”

    System vulnerability is tops among safety concerns, acknowledges Toscano. Some folks worry that a drone’s wireless links could be severed or jammed, and that the craft would start noodling around independently, with potentially dire results.

    “That probably could be addressed by programming the drone with a pre-determined landing spot,” says Toscano. “It could default to a safe zone if something went wrong.”

    And what about privacy? Drones, says Toscano, don’t pose a greater threat to privacy than any other extant technology with potential eavesdropping applications.

    “The law is the critical factor here,” he insists. “The Fourth Amendment supports the expectation of privacy. If that’s violated by any means, it’s a crime, regardless of the technology responsible.”

    More to the point, says Toscano, we have to give unmanned aircraft a little time and space to develop: we’re still at Drones 1.0.

    “With any new technology, it’s the same story: crawl, walk, run,” Toscano says. “What Bezos did was show what UVS technology would look like at a full run — but it’s important to remember, we’re really still crawling.”

    To get drones up on their hind legs and sprinting — or soaring — they’re going to need a helping hand from the FAA. And things aren’t going swimmingly there, says attorney Schulman.

    “They recently issued a road map on the probable form of the new rules,” Schulman says, “and frankly, it’s discouraging. It suggests the regulations are going to be pretty burdensome, particularly for the smaller UAVs, which are the kind best suited for civilian uses. If that’s how things work out, it will have a major chilling effect on the industry.”

    January 27 2014

    The new bot on the block

    Fukushima changed robotics. More precisely, it changed the way the Japanese view robotics. And given the historic preeminence of the Japanese in robotic technology, that shift is resonating through the entire sector.

    Before the catastrophic earthquake and tsunami of 2011, the Japanese were focused on “companion” robots, says Rodney Brooks, a former Panasonic Professor of Robotics at MIT, the founder and former technical officer of iRobot, and the founder, chairman, and CTO of Rethink Robotics. The goal, says Brooks, was making robots that were analogues of human beings — constructs that could engage with people on a meaningful, emotional level. Cuteness was emphasized: a cybernetic, if much smarter, equivalent of Hello Kitty seemed the paradigm.

    But the multiple core meltdown at the Fukushima Daiichi nuclear complex following the 2011 tsunami changed that focus abruptly.

    “Fukushima was a wake-up call for them,” says Brooks. “They needed robots that could do real work in highly radioactive environments, and instead they had robots that were focused on singing and dancing. I was with iRobot then, and they asked us for some help. They realized they needed to make a shift, and now they’re focusing on more pragmatic designs.”

    Pragmatism was always the guiding principle for Brooks and his companies, and is currently manifest in Baxter, Rethink’s flagship product. Baxter is a breakthrough production robot for a number of reasons. Equipped with two articulated arms, it can perform a multitude of tasks. It requires no application code to start up, and no expensive software to function. No specialists are required to program it; workers with minimal technical background can “teach” the robot right on the production line through a graphical user interface and arm manipulation. Also, Baxter requires no cage — human laborers can work safely alongside it on the assembly line.

    Moreover, it is cheap: about $25,000 per unit. It is thus the robotic equivalent of the Model T, and like the Model T, Baxter and its subsequent iterations will impose sweeping changes in the way people live and work.

    “We’re at the point with production robots where we were with mobile robots in the late 1980s and early 1990s,” says Brooks. “The advances are accelerating dramatically.”

    What’s the biggest selling point for this new breed of robot? Brooks sums it up in a single word: dignity.

    “The era of cheap labor for factory line work is coming to a close, and that’s a good thing,” he says. “It’s grueling, and it can be dangerous. It strips people of their sense of worth. China is moving beyond the human factory line — as people there become more prosperous and educated, they aspire to more meaningful work. Robots like Baxter will take up the slack out of necessity.”

    And not just for the assemblage of widgets and gizmos. Baxter-like robots will become essential in the health sector, opines Brooks — particularly in elder care. As the Baby Boom piglet continues its course through the demographic python, the need for attendants is outstripping supply. No wonder: the work is low-paid and demanding. Robots can fill this breach, says Brooks, doing everything from preparing and delivering meals to shuttling laundry, changing bedpans and mopping floors.

    “Again, the basic issue is dignity,” Brooks said. “Robots can free people from the more menial and onerous aspects of elder care, and they can deliver an extremely high level of service, providing better quality of life for seniors.”

    Ultimately, robots could be more app than hardware: the sexy operating system on Joaquin Phoenix’s mobile device in the recent film “Her” may not be far off the mark. Basically, you’ll carry a “robot app” on your smartphone. The phone can be docked to a compatible mechanism — say, a lawn mower, or car, or humanoid mannequin — resulting in an autonomous device ready to trim your greensward, chauffeur you to the opera, or mix your Mojitos.

    YDreams Robotics, a company co-founded by Brooks protégé Artur Arsenio, is actively pursuing this line of research.

    “It’s just a very efficient way of marketing robots to mass consumers,” says Arsenio. “Smartphones basically have everything you need, including cameras and sensors, to turn mere things into robots.”

    YDreams has its first product coming out in April: a lamp. It’s a very fine if utterly unaware desk lamp on its own, says Artur, but when you connect it to a smartphone loaded with the requisite app, it can do everything from intelligently adjusting lighting to gauging your emotional state.

    “It uses its sensors to interface socially,” Artur says. “It can determine how you feel by your facial expressions and voice. In a video conference, it can tell you how other participants are feeling. Or if it senses you’re sad, it may Facebook your girlfriend that you need cheering up.”

    Yikes. That may be a bit more interaction than you want from a desk lamp, but get used to it. Robots could intrude in ways that may seem a little off-putting at first — but that’s a marker of any new technology. Moreover, says Paul Saffo, a consulting professor at Stanford’s School of Engineering and a technology forecaster of repute, the highest use of robots won’t be doing old things better. It will be doing new things, things that haven’t been done before, things that weren’t possible before the development of key technology.

    “Whenever we have new tech, we invariably try to use it to do old things in a new way — like paving cow paths,” says Saffo. “But the sooner we get over that — the sooner we look beyond the cow paths — the better off we’ll be. Right now, a lot of the thinking is, ‘Let’s have robots drive our cars, and look like people, and be physical objects.’”

    But the most important robots working today don’t have physical embodiments, says Saffo — think of them as ether-bots, if you will. Your credit application? It’s a disembodied robot that gets first crack at that. And the same goes for your resume when you apply for a job.

    In short, robots already are embedded in our lives in ways we don’t think of as “robotic.” This trend will only accelerate. At a certain point, things may start feeling a little — well, Singularity-ish. Not to worry — it’s highly unlikely Skynet will rain nuclear missiles down on us anytime soon. But the melding of robotic technology with dumb things nevertheless presents some profound challenges — mainly because robots and humans react on disparate time scales.

    “The real questions now are authority and accountability,” says Saffo. “In other words, we have to figure out how to balance the autonomy systems need to function with the control we need to ensure safety.”

    Saffo cites modern passenger planes like the Airbus A330 as an example.

    “Essentially they’re flying robots,” he says. “And they fly beautifully, conserving fuel to the optimal degree and so forth. But the design limits are so tight — if they go too fast, they can fall apart; if they go too slow, they stall. And when something goes wrong, the pilot has perhaps 50 kilometers to respond. At typical speeds, that doesn’t add up to much reaction time.”

    Saffo noted that the crash of Air France Flight 447 in the mid-Atlantic in 2009 involved an Airbus A330. Investigations revealed the likely cause was turbulence complicated by the icing up of the plane’s speed sensors. This caused the autopilot to disengage, and the plane began to roll. The pilots had insufficient time to compensate, and the aircraft slammed into the water at 107 knots.

    “The pilot figured out what was wrong — but it was 20 seconds too late,” says Saffo. “To me, it shows we need to devote real effort to defining boundary parameters on autonomous systems. We have to communicate with our robots better. Ideally, we want a human being constantly monitoring the system, so he or she can intervene when necessary. And we need to establish parameters that make intervention even possible.”

    Rod Brooks will be speaking at the upcoming Solid Conference in May. If you are interested in robotics and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter.

    January 24 2014

    The lingering seduction of the page

    In an earlier post in this series, I examined the articulatory relationship between information architecture and user interface design, and argued that the tools that have emerged for constructing information architectures on the web will only get us so far when it comes to expressing information systems across diverse digital touchpoints. Here, I want to look more closely at these traditional web IA tools in order to tease out two things: (1) ways we might rely on these tools moving forward, and (2) ways we’ll need to expand our approach to IA as we design for the Internet of Things.

    First stop: the library

    The seminal text for Information Architecture as it is practiced in the design of online information environments is Peter Morville’s and Louis Rosenfeld’s Information Architecture for the World Wide Web, affectionately known as “The Polar Bear Book.”

    First published in 1998, The Polar Bear Book gave a name and a clear, effective methodology to a set of practices many designers and developers working on the web had already begun to encounter. Morville and Rosenfeld are both trained as professional librarians and were able to draw on this time-tested field in order to sort through many of the new information challenges coming out of the rapidly expanding web.

    If we look at IA as two faces of the same coin, The Polar Bear Book focuses on the largely top-down “Internet Librarian” side of information design. The other side of the coin approaches the problems posed by data from the bottom up. In Everything is Miscellaneous: The Power of the New Digital Disorder, David Weinberger argues that the fundamental problem of the “second order” (think “card catalogue”) organization typical of library sciences-informed approaches is that they fail to recognize the key differentiator of digital information: that it can exist in multiple locations at once, without any single location being the “home” position. Weinberger argues that in the “third order” of digital information practices, “understanding is metaknowledge.” For Weinberger, “we understand something when we see how the pieces fit together.”

    Successful approaches to organizing electronic data generally make liberal use of both top-down and bottom-up design tactics. Primary navigation (driven by top-down thinking) gives us a bird’s-eye view of the major categories on a website, allowing us to quickly focus on content related to politics, business, entertainment, technology, etc. The “You May Also Like” and “Related Stories” links come from work in the metadata-driven bottom-up space.
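
    As a toy illustration of the bottom-up side (the titles and tags are invented), “Related Stories” links can fall out of nothing more than tag overlap in the metadata:

```python
# Invented articles and tags; the structure emerges from the metadata itself.
articles = {
    "Grid stabilization markets": {"energy", "iot", "business"},
    "Nest Protect teardown": {"iot", "hardware"},
    "HVAC as a data treasure trove": {"energy", "iot", "hvac"},
}

def related_stories(title: str, k: int = 2) -> list[str]:
    """Rank the other articles by how many tags they share with this one."""
    tags = articles[title]
    scored = sorted(
        ((len(tags & other_tags), other)
         for other, other_tags in articles.items() if other != title),
        reverse=True,
    )
    return [other for score, other in scored[:k] if score > 0]

print(related_stories("HVAC as a data treasure trove"))
```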

    On the web, this textually mediated blend of top-down and bottom-up is usually pretty successful. This is no surprise: the web is, after all, primarily a textual medium. At its core, HTML is a language for marking up text-based documents. It makes them interpretable by machines (browsers) so they can be served up for human consumption. We’ve accommodated images and sounds in this information ecology by marking them up with tags (either by professional indexers or “folksonomically,” by whoever cares to pitch in).

    There’s an important point here that often goes without saying: the IA we’ve inherited from the web is textual — it is based on the perception of the world mediated through the technology of writing. Herein lies the limitation of the IA we know from the web as we begin to design for the Internet of Things.

    Reading brains

    We don’t often think of writing as “technology,” but inasmuch as technology constitutes the explicit modification of techniques and practices in order to solve a problem, writing definitely fits the bill. Language centers can be pinpointed in the brain — these are areas programmed into our genes that allow us to generate spoken language — but in order to read and write, our brains must create new connections not accounted for in our genetic makeup.

    In Proust and the Squid, cognitive neuroscientist Maryanne Wolf describes the physical, neurological difference between a reading and writing brain and a pre-literate linguistic brain. Wolf writes that, with the invention of reading, “we rearranged the very organization of our brain.” Whereas we learn to speak by being immersed in language, learning to read and write is a painstaking process of instruction, practice, and feedback. Though the two acts are linked by a common practice of language, writing involves a different cognitive process than speaking, one that relies on the technology of the written word. This technology is not transmitted through our genes; it is transmitted through our culture.

    It is important to understand that writing is not simply a translation of speech. This distinction matters because it has profound consequences. Wolf writes that “the new circuits and pathways that the brain fashions in order to read become the foundation for being able to think in different, innovative ways.” As the ability to read becomes widespread, this new capacity for thinking differently, too, becomes widespread.

    Though writing constitutes a major leap past speech in terms of cognitive process, it shares one very important common trait with spoken language: linearity. Writing, like speech, follows a syntagmatic structure in which meaning is constituted by the flow of elements in order — and in which alternate orders often convey alternate meanings.

    When it comes to the design of information environments, this linearity is generally a foregone conclusion, a feature of the cognitive landscape which “goes without saying” and is therefore never called into question. Indeed, when we’re dealing primarily with text or text-based media, there is no need to call it into question.

    In the case of embodied experience in physical space, however, we natively bring to bear a perceptual apparatus which goes well beyond the linear confines of written and spoken language. When we evaluate an event in our physical environment — a room, a person, a meaningful glance — we do so with a system of perception orders of magnitude more sophisticated than linear narrative. JJ Gibson describes this as the perceptual awareness resulting from a “flowing array of stimulation.” When we layer on top of that the non-linear nature of dynamic systems, it quickly becomes apparent that despite the immense gains in cognition brought about by the technology of writing, these advances still only partially equip us to adequately navigate immersive, physical connected environments.

    The trouble with systems (and why they’re worth it)


    Photo: Andy Fitzgerald, of content from Thinking in Systems: A Primer, by Donella Meadows.

    I have written elsewhere in more detail about challenges posed to linguistic thinkers by systems. To put all of that in a nutshell, complex systems baffle us because we have a limited capacity to track system-influencing inputs and outputs and system-changing flows. As systems thinking pioneer Donella Meadows characterizes them in her book Thinking in Systems: A Primer, self-organizing, nonlinear, feedback systems are “inherently unpredictable” and “understandable only in the most general way.”

    According to Meadows, we learn to navigate systems by constructing models that approximate a simplified representation of the system’s operation and allow us to navigate it with more or less success. As more and more of our world — our information, our social networks, our devices, and our interactions with all of these — becomes connected, our systems become increasingly difficult (and undesirable) to compartmentalize. They also become less intrinsically reliant on linear textual mediation: our “smart” devices don’t need to translate their messages to each other into English (or French or Japanese) in order to interact.

    This is both the great challenge and the great potential of the Internet of Things. We’re beginning to interact with our built information environments not only in a classically signified, textual way, but also in a physical-being-operating-in-the-world kind of way. The text remains — and the need to interact with that textual facet with the tools we’ve honed on the web (i.e. traditional IA) remains. But as the information environments we’re tasked with representing become less textual and more embodied, the tools we use to represent them must likewise evolve beyond our current text-based solutions.

    Fumbling toward system literacy

    In order to rise to meet this new context, we’re going to need as many semiotic paths as we can find — or create. And in order to do that, we will have to pay close attention to the cognitive support structures that normally “go without saying” in our conceptual models.

    This will be hard work. The payoff, however, is potentially revolutionary. The threshold at which we find ourselves is not merely another incremental step in technological advancement. The immersion in dynamic systems that the connected environment foreshadows holds the potential to re-shape the way we think — the way our brains are “wired” — much as reading and writing did. Though mediated by human-made, culturally transmitted technology (e.g. moveable type, or, in this case, Internet protocols), these changes hold the power to affect our core cognitive process, our very capacity to think.

    What this kind of “system literacy” might look like is as puzzling to me now as reading and writing must have appeared to pre-literate societies. The potential of being able to grasp how our world is connected in its entirety — people, social systems, economies, trade, climate, mobility, marginalization — is both mesmerizing and terrifying. Mesmerizing because it seems almost magical; terrifying because it hints at how unsophisticated and parochial our current perspective must look from such a vantage point.

    As information architects and interface designers, all of this means that we’re going to have to be nimble and creative in the way we approach design for these environments. We’re going to have to cast out beyond the tools and techniques we’re most comfortable with to find workable solutions to new problems of complexity. We aren’t the only ones working on this, but our role is an important one: engineers and users alike look to us to frame the rhetoric and usability of immersive digital spaces. We’re at a major transition in the way we conceive of putting together information environments. Much like Morville and Rosenfeld in 1998, we’re “to some degree all still making it up as we go along.” I don’t pretend to know what a fully developed information architecture for the Internet of Things might look like, but in the spirit of exploration, I’d like to offer a few pointers that might help nudge us in the right direction — a topic I’ll tackle in my next post.

    January 23 2014

    Podcast: Solid, tech for humans, and maybe a field trip?

    We were snowed in, but the phones still worked, so Jon Bruner, Mike Loukides, and I got together on the phone to have a chat. We started off talking about the results of the Solid call for proposals, but as is often the case, the conversation meandered from there.

    Here are some of the links we mention in this episode:

    You can subscribe to O’Reilly Radar podcast through iTunes or SoundCloud, or directly through our podcast’s RSS feed.

    January 15 2014

    Four short links: 15 January 2014

    1. Hackers Gain ‘Full Control’ of Critical SCADA Systems (IT News) — The vulnerabilities were discovered by Russian researchers who over the last year probed popular and high-end ICS and supervisory control and data acquisition (SCADA) systems used to control everything from home solar panel installations to critical national infrastructure. More on the Botnet of Things.
    2. mcl — Markov Cluster Algorithm, a fast and scalable unsupervised cluster algorithm for graphs (also known as networks) based on simulation of (stochastic) flow in graphs.
    3. Facebook to Launch Flipboard-like Reader (Recode) — what I’d actually like to see is Facebook join the open web by producing and consuming RSS/Atom/anything feeds, but that’s a long shot. I fear it’ll either limit you to whatever circle-jerk-of-prosperity paywall-penetrating content-for-advertising-eyeballs trades the Facebook execs have made, or else it’ll be a leech on the scrotum of the open web by consuming RSS without producing it. I’m all out of respect for empire-builders who think you’re a fool if you value the open web. AOL might have died, but its vision of content kings running the network is alive and well in the hands of Facebook and Google. I’ll gladly post about the actual product launch if it is neither partnership eyeball-abuse nor parasitism.
    4. Map Projections Illustrated with a Face (Flowing Data) — really neat, wish I’d had these when I was getting my head around map projections.

    December 30 2013

    Four short links: 30 December 2013

    1. tooldiag — a collection of methods for statistical pattern recognition. Implemented in C.
    2. Hacking MicroSD Cards (Bunnie Huang) — In my explorations of the electronics markets in China, I’ve seen shop keepers burning firmware on cards that “expand” the capacity of the card — in other words, they load a firmware that reports the capacity of a card is much larger than the actual available storage. The fact that this is possible at the point of sale means that most likely, the update mechanism is not secured. MicroSD cards come with embedded microcontrollers whose firmware can be exploited.
    3. 30c3 — recordings from the 30th Chaos Communication Congress.
    4. IOT Companies, Products, Devices, and Software by Sector (Mike Nicholls) — astonishing amount of work in the space, especially given this list is inevitably incomplete.

    December 26 2013

    Four short links: 26 December 2013

    1. Nest Protect Teardown (Sparkfun) — initial teardown of another piece of domestic industrial Internet.
    2. Logs — The distributed log can be seen as the data structure which models the problem of consensus. Not kidding when he calls it “real-time data’s unifying abstraction”.
    3. Mining the Web to Predict Future Events (PDF) — Mining 22 years of news stories to predict future events. (via Ben Lorica)
    4. Nanocubes — a fast datastructure for in-memory data cubes developed at the Information Visualization department at AT&T Labs – Research. Nanocubes can be used to explore datasets with billions of elements at interactive rates in a web browser, and in some cases it uses sufficiently little memory that you can run a nanocube in a modern-day laptop. (via Ben Lorica)