
February 10 2014

More 1876 than 1995


Photo: Wikimedia Commons. Corliss engine, showing valvegear (New Catechism of the Steam Engine, 1904).

Philadelphia’s Centennial Exposition of 1876 was America’s first World’s Fair, and was ostensibly held to mark the nation’s 100th birthday. But it heralded the future as much as it celebrated the past, showcasing the country’s strongest suit: technology.

The centerpiece of the Expo was a gigantic Corliss engine, the apotheosis of 40 years of steam technology. Thirty percent more efficient than standard steam engines of the day, it powered virtually every industrial exhibit at the exposition via a maze of belts, pulleys, and shafts. Visitors were stunned that the gigantic apparatus was supervised by a single attendant, who spent much of his time reading newspapers.

“This exposition was attended by 10 million people at a time when travel was slow and difficult, and it changed the world,” observes Jim Stogdill, general manager of Radar at O’Reilly Media, and general manager of O’Reilly’s upcoming Internet-of-Things-related conference, Solid.

“Think of a farm boy from Kansas looking at that Corliss engine, seeing what it could do, thinking of what was possible,” Stogdill continues. “When he left the exposition, he was a different person. He understood what the technology he saw meant to his own work and life.”

The 1876 exposition didn’t mark the beginning of the Industrial Revolution, says Stogdill. Rather, it signaled its fruition, its point of critical mass. It was the nexus where everything — advanced steam technology, mass production, railroads, telegraphy — merged.

“It foreshadowed the near future, when the Industrial Revolution led to the rapid transformation of society, culturally as well as economically. More than 10,000 patents followed the exposition, and it accelerated the global adoption of the ‘American System of Manufacture.’ The world was never the same after that.”

In terms of the Internet of Things, we have reached that same point of critical mass. In fact, the present moment is more similar to 1876 than to more recent digital disruptions, Stogdill argues. “It’s not just the sheer physicality of this stuff,” he says. “It is also the breadth and speed of the change bearing down on us.”

While the Internet changed everything, says Stogdill, “its changes came in waves, with scientists and alpha geeks affected first, followed by the early adopters who clamored to try it. It wasn’t until the Internet was ubiquitous that every Kansas farm boy went online. That 1876 Kansas farm boy may not have foreseen every innovation the Industrial Revolution would bring, but he knew — whether he liked it or not — that his world was changing.”

As the Internet subsumes physical objects, the rate of change is accelerating, observes Stogdill. “Today, stable wireless platforms, standardized software interface components and cheap, widely available sensors have made the connection of virtually every device — from coffee pots to cars — not only possible; they have made it certain.”

“Internet of Things” is now widely used to describe this latest permutation of digital technology; indeed, “overused” may be a more apt description. It teeters on the knife-edge of cliché. “The term is clunky,” Stogdill acknowledges, “but the buzz for the underlying concepts is deserved.”

Stogdill is quick to point out that this “Internet of Everything” goes far beyond the development of new consumer products. Open source sensors and microcontroller libraries already are allowing the easy integration of software interfaces with everything from weather stations to locomotives. Large, complicated systems — water delivery infrastructure, power plants, sewage treatment plants, office buildings — will be made intelligent by these software and sensor packages, allowing real-time control and exquisitely efficient operation. Manufacturing has been made frictionless, development costs are plunging, and new manufacturing-as-a-service frameworks will create new business models and drive factory production costs down and production up.

“When the digital age began accelerating,” Stogdill explains, “Nicholas Negroponte observed that the world was moving from atoms to bits — that is, the high-value economic sectors were transforming from industrial production to aggregating information.”

“I see the Internet of Everything as the next step,” he says. “We won’t be moving back to atoms, but we’ll be combining atoms and bits, merging software and hardware. Data will literally grow physical appendages, and inform industrial production and public services in extremely powerful and efficient ways. Power plants will adjust production according to real-time demand; traffic will smooth out as driverless cars become commonplace. We’ll be able to track air and water pollution to an unprecedented degree. Buildings will not only monitor their environmental conditions for maximum comfort and energy efficiency, they’ll be able to adjust energy consumption so it corresponds to electricity availability from sustainable sources.”

Stogdill believes these converging phenomena have put us on the cusp of a transformation as dramatic as the Industrial Revolution.

“Everyone will be affected by this collision of hardware and software, by the merging of the virtual and real,” he says. “It’s really a watershed moment in technology and culture. We’re at one of those tipping points of history again, where everything shifts to a different reality. That’s what the 1876 exposition was all about. It’s one thing to read about the exposition’s Corliss engine, but it would’ve been a wholly different experience to stand in that exhibit hall and see, feel, and hear its 1,400 horsepower at work, driving thousands of machines. It is that sensory experience that we intend to capture with Solid. When people look back in 150 years, we think they could well say, ‘This is when they got it. This is when they understood.’”


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter.

February 07 2014

Why Solid, why now

A few years ago at OSCON, one of the tutorials demonstrated how to click a virtual light switch in Second Life and have a real desk lamp light up in the room. Looking back, it was rather trivial, but it was striking at the time to see software people taking an interest in the “real world.” And what better metaphor for the collision of virtual and real than a connection between Second Life and the Portland Convention Center?

In December 2012, our Radar team was meeting in Sebastopol and we were talking about trends in robotics, Maker DIY, Internet of Things, wearables, smart grid, industrial Internet, advanced manufacturing, frictionless supply chain, etc. We were trying to figure out where to put our focus among all of these trends when suddenly it was obvious (at least to Mike Loukides, who pointed it out): they are all more alike than different, and we could focus on all of them by looking at the relationships among them. The Solid program was conceived that day.

We called it Solid because most of the people we work with are software people and we wanted to draw attention to the non-virtuality of all this stuff. This isn’t the stuff of some abstract “cyber” domain; these were all trends that lived at the nexus of software and very real physical hardware. A big part of this story is that hardware is getting less hard, and more like software in its malleability, but there remains a sheer physicality to this stuff that makes it different. Negroponte told us to forget atoms and focus on bits, and we did for 20 years. But now, the pendulum is swinging back and atoms matter again — except now they are atoms layered with bits, and that makes this different.

Software and hardware have been essentially separate mediums, wielded by the separate tribes of hackers/coders/programmers/software engineers and capital-E Engineers. Solid is about the beginning of something different: the collision of software and hardware into a new combined medium, wielded by people (or teams) who understand both, with bits and atoms uniting in the same intellectual space. What should we call it? Fluidware? That won’t be it, but I’d eventually like to come up with a term that conveys that a combination of software and less-hard hardware is something distinct.

Naturally, this impacts more than just the stuff that we produce from this new medium. It influences organizational models; the things engineers, designers, and hackers need to know to do their jobs; and the business models you can deliver with bit-enabled collections of atoms.

With Solid, we envision an annual program of engagement that will span various media as well as multiple in-person events. We’re going to do Solid:Local events, which we started in Cambridge, MA, on February 6, and we’re going to anchor the program with the Solid Conference in San Francisco this May, which is going to be very different from the other conferences we do.

First off, the Solid Conference will be at Ft. Mason, a venue we chose for its beautiful views and because it can host the kinds of industrial-scale things we’d like to demonstrate there. We still have a lot to do to bring this program together, but our guiding principle is to give our audience a visceral feel for this world of combined hardware and software. We’re going for something that feels less like a conference and more like a miniature World’s Fair Exposition with talks.

I believe that we are on the cusp of something as dramatic as the Industrial Revolution. In that time, a series of industrial expositions drew millions of people to London, Paris, Philadelphia and elsewhere to experience firsthand the industrialization of their world.

Here in my home office outside of Philadelphia, I’m about 15 miles from the location of the 1876 Exposition that we’re using for inspiration. If you entered Machinery Hall during that long summer, with its 1,400 horsepower Corliss engine powering belt-driven machinery throughout the vast hall, you couldn’t help but leave with an appreciation for how the world was changing. We won’t be building a Machinery Hall, certainly not at that scale, but we are building a program with as much show as tell. We want to viscerally demonstrate the principles that we see at work here.

We hope you can join us in San Francisco or at one of the many Solid:Locals we plan to put on. Share your stories about interesting intersections of hardware and software with us, too.

February 05 2014

Four short links: 5 February 2014

  1. sigma.js — Javascript graph-drawing library (node-edge graphs, not charts).
  2. DARPA Open Catalog — all the open source published by DARPA. Sweet!
  3. Quantified Vehicle Meetup — Boston meetup around intelligent automotive tech including on-board diagnostics, protocols, APIs, analytics, telematics, apps, software and devices.
  4. AT&T See Future In Industrial Internet — partnering with GE; the company’s M2M-related customers increased by more than 38% last year. (via Jim Stogdill)

February 04 2014

The Industrial Internet of Things


Photo: Kipp Bradford. This industrial burner from Weishaupt churns through 40 million BTUs per hour of fuel.

A few days ago, a company called Echelon caused a stir when it released a new product called IzoT. You may never have heard of Echelon; for most of us, they are merely a part of the invisible glue that connects modern life. But more than 100 million products — from street lights to gas pumps to HVAC systems — use Echelon technology for connectivity. So, for many electrical engineers, Echelon’s products are a big deal. Thus, when Echelon began touting IzoT as the future of the Industrial Internet of Things (IIoT), it was bound to get some attention.

Admittedly, the Internet of Things (IoT) is all the buzz right now. Echelon, like everyone else, is trying to capture some of that mindshare for their products. In this case, the product is a proprietary system of chips, protocols, and interfaces for enabling the IoT on industrial devices. But what struck me and my colleagues was how really outdated this approach seems, and how far it misses the point of the emerging IoT.

Although there are many different ways to describe the IoT, it is essentially a network of devices where all the devices:

  1. Have local intelligence
  2. Have a shared API so they can speak with each other in a useful way, even if they speak multiple protocols
  3. Push and pull status and command information from the networked world
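
A minimal sketch of what those three properties look like in code (the class names, fields, and numbers below are illustrative assumptions, not any vendor’s API): the shared base class plays the role of the common API, tick() stands in for local intelligence, and the JSON status dictionary is what would be pushed to, or pulled from, the network.

```python
# Illustrative sketch only: three properties of an IoT device.
# 1) local intelligence, 2) a shared API, 3) push/pull of status and commands.
import json


class Device:
    """Shared API: every device answers status() and accepts command()."""

    def status(self) -> dict:
        raise NotImplementedError

    def command(self, name: str, **kwargs) -> None:
        raise NotImplementedError


class Thermostat(Device):
    def __init__(self) -> None:
        self.setpoint_f = 68.0
        self.room_f = 64.0
        self.heating = False

    def status(self) -> dict:
        return {"kind": "thermostat", "room_f": self.room_f,
                "setpoint_f": self.setpoint_f, "heating": self.heating}

    def command(self, name: str, **kwargs) -> None:
        if name == "set_temperature":
            self.setpoint_f = float(kwargs["value"])

    def tick(self) -> None:
        # Local intelligence: the device decides on its own, network or not.
        self.heating = self.room_f < self.setpoint_f - 0.5


if __name__ == "__main__":
    t = Thermostat()
    t.command("set_temperature", value=70)  # a command pulled from the network
    t.tick()
    print(json.dumps(t.status()))           # status pushed back out as JSON
```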

In the industrial context, rolling out a better networking chip is just a minor improvement on an unchanged 1980s practice that requires complex installations, including running miles of wire inside walls. This IzoT would have been a breakthrough product back when I got my first Sony Walkman.

This isn’t a problem confined to Echelon. It’s a problem shared by many industries. I just spent several days wandering the floor of the International Air Conditioning, Heating, and Refrigeration (AHR) trade show, a show about as far from CES as you can get: truck-sized boilers and cooling towers that look unchanged from their 19th-century origins dotted the floor. For most of the developed world, HVAC is where the lion’s share of our energy goes. (That alone is reason enough to care about it.) It’s also a treasure trove of data (think Nest), and a great place for all the obvious, sensible IoT applications like monitoring, control, and network intelligence. But after looking around at IoT products at AHR, it was clear that they came from Bizarro World.[1]


Photo: Kipp Bradford. Copeland alone has sold more than 100 million compressors. That’s a lot of “things” to get on the Internet.

I spoke with an engineer showing off one of the HVAC industry-leading low-power wireless networking technologies for data centers. He told me his company’s new wireless sensor network system runs at 2.6 GHz, not 2.4 GHz (though the data sheet doesn’t confirm or deny that). Looking at the products that won industry innovation awards, I was especially depressed. Take, for example, this variable-speed compressor technology. When you put variable-speed motors into an air-conditioning compressor, you get a 25–35% efficiency boost. That’s a lot of energy saved! And it looks just like variable-speed technology that has been around for decades — just not in the HVAC world.

Now here’s where things get interesting: variable-speed control demands a smart processor controlling the compressor motor, plus the intelligent sensors to tell the motor when to vary its speed. With all that intelligence, I should be able to connect it to my network. So, when will my air conditioner talk to my Nest so I can optimize my energy consumption? Never. Or at least not until industry standards and government regulations overlap just right and force them to talk to each other, just as recent regulatory conditions forced 40-year-old variable-motor control technology into 100-year-old compressor technology.

Of course, like any inquisitive engineer, I had to ask the manufacturer of my high-efficiency boiler if it was possible to hook that up to my Nest to analyze performance. After he said, “Sure, try hooking up the thermostat wire,” and made fun of me for a few minutes, the engineer said, “Yeah, if you can build your own modbus-to-wifi bridge, you can access our modbus interface and get your boiler online. But do you really want some Russian hacker controlling your house heat?” It would be so easy for my Nest thermostat to have a useful conversation with my boiler beyond “on/off,” but it doesn’t.
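
For the curious, here is roughly what the software side of that DIY bridge might look like. It is a sketch under stated assumptions: the pymodbus library (the import path shown is for pymodbus 3.x), an invented register map, and a bridge at a made-up LAN address. A real boiler’s documentation would dictate all three.

```python
# Hypothetical sketch of the "modbus-to-wifi bridge" idea: read a couple of
# boiler registers over Modbus TCP. The register addresses and scaling are
# invented for illustration; check the boiler's own register map.
from pymodbus.client import ModbusTcpClient

BOILER_IP = "192.168.1.50"        # assumption: the bridge's LAN address
SUPPLY_TEMP_REGISTER = 0x0010     # assumption: invented register address
BURNER_STATUS_REGISTER = 0x0011   # assumption: invented register address


def read_boiler() -> dict:
    client = ModbusTcpClient(BOILER_IP, port=502)
    if not client.connect():
        raise ConnectionError("boiler bridge unreachable")
    try:
        result = client.read_holding_registers(SUPPLY_TEMP_REGISTER, count=2)
        if result.isError():
            raise IOError(result)
        return {
            "supply_temp_c": result.registers[0] / 10.0,  # assumed scaling
            "burner_on": bool(result.registers[1]),
        }
    finally:
        client.close()


if __name__ == "__main__":
    print(read_boiler())
```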

Don’t get me wrong. I really am sympathetic to engineering within the constraints of industrial (versus consumer) environments, having spent a period of my life designing toys and another period making medical and industrial products. I’ve had to use my engineering skills to make things as dissimilar as Elmo and dental implants. But constraints are no excuse for trying to patch up, or hide behind, outdated technology.

The electrical engineers designing IoT devices for consumers have created exciting and transformative technologies like Z-Wave, ZigBee, BLE, Wi-Fi, and more, giving consumer devices robust and nearly transparent connectivity with increasingly easy installation. Engineers in the industrial world seem to be stuck making small technological tweaks that might enhance safety, reliability, robustness, and NSA-proof security of device networks. This represents an unfortunate increase in the bifurcation of the Internet of Things. It also represents a huge opportunity for those who refuse to submit to the notion that the IoT is either consumer or industrial, but never both.

For HVAC, innovation in 2014 means solving problems from 1983 with 1984 technology because the government told them so. The general attitude can be summed up as: “We have all the market share and high barriers to entry, so we don’t care about your problems.” Left alone, this industry (like so many others) will keep building walls that prevent us from having real control over our devices and our data. That directly contradicts a key goal of the IoT: connecting our smart devices together.

And that’s where the opportunity lies. There is significant value in HVAC and similar industries for any company that can break down the cultural and technical barriers between the Internet of Things and the Industrial Internet of Things. Companies that recognize the new business models created by well-designed, smart, interconnected devices will be handsomely rewarded. When my Nest can talk to my boiler, air conditioner, and Hue lights, all while analyzing performance versus weather data, third parties could sell comfort contracts, efficiency contracts, or grid stabilization contracts.

As a matter of fact, we are already seeing a $16 billion market for grid stabilization services opened up by smart, connected heating devices. It’s easy to envision a future where the electric company pays me during peak load times because my variable-speed air conditioner slows down, my Hue lights dim imperceptibly, and my clothes dryer pauses, all reducing my grid load — or my heater charges up a bank of thermal storage bricks in advance of a cold front before I return home from work. Perhaps I can finally quantify the energy savings of the efficiency improvements that I make.

My Sony Walkman already went the way of the dodo, but there’s still a chance to blow open closed industries like HVAC and bridge the IoT and the IIoT before my Beats go out of style.


[1] I’m not sorry for the Super Friends reference.

January 31 2014

Drone on

Jeff Bezos’ recent demonstration of a drone aircraft simulating delivery of an Amazon parcel was more stunt than technological breakthrough. We aren’t there yet. Yes, such things may well come to pass, but there are obstacles aplenty to overcome — not so much engineering snags, but cultural and regulatory issues.

The first widely publicized application of modern drone aircraft — dropping Hellfire missiles on suspected terrorists — greatly skewed perceptions of the technology. On the one hand, the sophistication of such unmanned systems generated admiration from technophiles (and also average citizens who saw them as valuable adjuncts in the war against terrorism). On the other, the significant civilian casualties that were collateral to some strikes have engendered outrage. Further, the fear that drones could be used for domestic spying has ratcheted up our paranoia, particularly in the wake of Edward Snowden’s revelations of National Security Agency overreach.

It’s as though drones have been painted with a giant red “D,” à la The Scarlet Letter. Manufacturers of the aircraft, not surprisingly, are desperate to rehabilitate the technology’s image, and properly so. Drones have too many exciting — even essential — civilian applications to restrict their use to war.

That’s why drone proponents don’t want us to call them “drones.” They’d prefer we use the periphrastic term, “Unmanned Aerial Vehicle,” or at least the acronym — UAV. Or depending on whom you talk to, Unmanned Vehicle System (UVS).

Will either catch on? Probably not. Think back: do we call it Star Wars or the Strategic Defense Initiative? Obamacare, or the Affordable Care Act? Popular culture has a way of reducing bombastic rubrics to pithy, evocative phrases. Chances are pretty good “drones” it is, and “drones” it’ll stay.

And that’s not so bad, says Brendan Schulman, an attorney with the New York office of Kramer Levin Naftalis & Frankel.

“People need to use terminology they understand,” says Schulman, who serves as the firm’s e-discovery counsel and specializes in commercial unmanned aviation law. “And they understand ‘drones.’ I know that causes some discomfort to manufacturers and people who use drones for commercial purposes, but I have no trouble with the term. I think we can live with it.”

Michael Toscano, the president and CEO of the Association for Unmanned Vehicle Systems International, feels differently: not surprisingly, he favors the acronym “UVS” over “drone.”

“The term ‘drones’ evokes a hostile, military device,” says Toscano. “That isn’t what the broad applications of this technology are about. Commercial unmanned aircraft have a wide range of uses, from monitoring pipelines and power lines, to wildlife censuses, to agriculture. In Japan, they’ve been used for years for pesticide applications on crops. They’re far superior to manned aircraft for that — the Japanese government even partnered with Yamaha to develop a craft specifically designed for the job.”

Nomenclature aside, drones are stuck in a time warp here in the United States. The Federal Aviation Administration (FAA) prohibits their general use pending development of final regulations; such rules have been in the works for years but have yet to be released. Still, the FAA does allow some variances on the ban: government agencies and universities, after thrashing through a maze of red tape, can obtain waivers for specific tasks. And for the most part, field researchers who have hands-on-joystick experience with the aircraft are wildly enthusiastic about them.

“They give us superb data,” says Michael Hutt, the Unmanned Aircraft Systems Manager for the U.S. Geological Survey (USGS). USGS is currently using drones — specifically the fixed-wing RQ-11A Raven and the hovering RQ-16A T-Hawk — for everything from surveying abandoned mine sites to monitoring coal seam fires, estimating waterfowl populations, and evaluating pygmy rabbit habitat.

“Landsat [satellites] have been the workhorse for most of our projects,” continues Hutt, “and they’re pretty good as far as they go, but they have definite limitations. You’re able to take images only when a satellite passes overhead, obviously, and you’re limited to what you can see by the weather. Also, your resolution is only 30 meters. With the Raven or Hawk, we can fly anytime we want. And because we fly low and slow, our data is collected in resolutions of centimeters, not meters. It’s just a level of detail we’ve never had before.”

And highly detailed data, of course, translates as highly accurate and reliable data. Hutt offers an example: a project to map fossil beds at White Sands National Monument.

“The problem is that these beds aren’t static,” says Hutt. “They tend to erode, or they can get covered up with blowing sand. It has been difficult to tell just what we have, and where the fossils are precisely. But with our T-Hawks, we’re able to get down to 2.5-centimeter resolution. We know exactly where the fossils are — it doesn’t matter if a sand dune forms on top of them.”

Even with current FAA restrictions, Hutt notes, universities are crowding into the drone zone.

“Not long ago, maybe five or six universities were using them, or had applied for permits,” Hutt says. “Now, there’s close to 200. The gathering of high-quality data is becoming easy and cheap. That’s because of aircraft advances, of course, and also because the payload packages — the sensors and cameras — are dropping dramatically in price. A few years ago, you needed a $500,000 mapping camera to obtain the information we’re now getting with an off-the-shelf $1,000 camera.”

Unmanned aircraft may be easier to fly than F-22 fighter jets, but they pose their own challenges, observes Jeff Sloan, a top drone jock for USGS.

“Dealing with wind is the major problem,” says Sloan. “The aircraft we’re using are really quite small, and they get pushed around easily. Also, though they can get into tighter spots than any manned aircraft, steep terrain can get you pretty tense. You’re flying very low, and there’s not much room for error.”

If and when the FAA gives the green light for broad commercial drone applications, a large cohort of potential pilots with the requisite skill sets will be ready and waiting.

“We’ve found that the best pilots are those who’ve had a lot of video game experience,” laughs Sloan, “so it’s good to know my kids have a future.”

But back to the FAA: a primary reason for the rules delay is public worry over safety and privacy. Toscano says safety concerns are mostly just that — foreboding more than real risk.

“It’s going to take a while before you see general risk tolerance, but it’ll come,” Toscano said. “The Internet started with real apprehension, and now, of course, there’s general acceptance. There are risks with the Internet, but they’re widely understood now, and people realize there’s a trade-off for the benefits.”

System vulnerability is tops among safety concerns, acknowledges Toscano. Some folks worry that a drone’s wireless links could be severed or jammed, and that the craft would start noodling around independently, with potentially dire results.

“That probably could be addressed by programming the drone with a pre-determined landing spot,” says Toscano. “It could default to a safe zone if something went wrong.”

And what about privacy? Drones, says Toscano, don’t pose a greater threat to privacy than any other extant technology with potential eavesdropping applications.

“The law is the critical factor here,” he insists. “The Fourth Amendment supports the expectation of privacy. If that’s violated by any means, it’s a crime, regardless of the technology responsible.”

More to the point, says Toscano, we have to give unmanned aircraft a little time and space to develop: we’re still at Drones 1.0.

“With any new technology, it’s the same story: crawl, walk, run,” Toscano says. “What Bezos did was show what UVS technology would look like at a full run — but it’s important to remember, we’re really still crawling.”

To get drones up on their hind legs and sprinting — or soaring — they’re going to need a helping hand from the FAA. And things aren’t going swimmingly there, says attorney Schulman.

“They recently issued a road map on the probable form of the new rules,” Schulman says, “and frankly, it’s discouraging. It suggests the regulations are going to be pretty burdensome, particularly for the smaller UAVs, which are the kind best suited for civilian uses. If that’s how things work out, it will have a major chilling effect on the industry.”

January 27 2014

The new bot on the block

Fukushima changed robotics. More precisely, it changed the way the Japanese view robotics. And given the historic preeminence of the Japanese in robotic technology, that shift is resonating through the entire sector.

Before the catastrophic earthquake and tsunami of 2011, the Japanese were focused on “companion” robots, says Rodney Brooks, a former Panasonic Professor of Robotics at MIT, the founder and former chief technical officer of iRobot, and the founder, chairman, and CTO of Rethink Robotics. The goal, says Brooks, was making robots that were analogues of human beings — constructs that could engage with people on a meaningful, emotional level. Cuteness was emphasized: a cybernetic, if much smarter, equivalent of Hello Kitty seemed the paradigm.

But the multiple core meltdown at the Fukushima Daiichi nuclear complex following the 2011 tsunami changed that focus abruptly.

“Fukushima was a wake-up call for them,” says Brooks. “They needed robots that could do real work in highly radioactive environments, and instead they had robots that were focused on singing and dancing. I was with iRobot then, and they asked us for some help. They realized they needed to make a shift, and now they’re focusing on more pragmatic designs.”

Pragmatism was always the guiding principle for Brooks and his companies, and is currently manifest in Baxter, Rethink’s flagship product. Baxter is a breakthrough production robot for a number of reasons. Equipped with two articulated arms, it can perform a multitude of tasks. It requires no application code to start up, and no expensive software to function. No specialists are required to program it; workers with minimal technical background can “teach” the robot right on the production line through a graphical user interface and arm manipulation. Also, Baxter requires no cage — human laborers can work safely alongside it on the assembly line.

Moreover, it is cheap: about $25,000 per unit. It is thus the robotic equivalent of the Model T, and like the Model T, Baxter and its subsequent iterations will impose sweeping changes in the way people live and work.

“We’re at the point with production robots where we were with mobile robots in the late 1980s and early 1990s,” says Brooks. “The advances are accelerating dramatically.”

What’s the biggest selling point for this new breed of robot? Brooks sums it up in a single word: dignity.

“The era of cheap labor for factory line work is coming to a close, and that’s a good thing,” he says. “It’s grueling, and it can be dangerous. It strips people of their sense of worth. China is moving beyond the human factory line — as people there become more prosperous and educated, they aspire to more meaningful work. Robots like Baxter will take up the slack out of necessity.”

And not just for the assemblage of widgets and gizmos. Baxter-like robots will become essential in the health sector, opines Brooks — particularly in elder care. As the Baby Boom piglet continues its course through the demographic python, the need for attendants is outstripping supply. No wonder: the work is low-paid and demanding. Robots can fill this breach, says Brooks, doing everything from preparing and delivering meals to shuttling laundry, changing bedpans and mopping floors.

“Again, the basic issue is dignity,” Brooks said. “Robots can free people from the more menial and onerous aspects of elder care, and they can deliver an extremely high level of service, providing better quality of life for seniors.”

Ultimately, robots could be more app than hardware: the sexy operating system on Joaquin Phoenix’s mobile device in the recent film “Her” may not be far off the mark. Basically, you’ll carry a “robot app” on your smartphone. The phone can be docked to a compatible mechanism — say, a lawn mower, or car, or humanoid mannequin — resulting in an autonomous device ready to trim your greensward, chauffeur you to the opera, or mix your Mojitos.

YDreams Robotics, a company co-founded by Brooks protégé Artur Arsenio, is actively pursuing this line of research.

“It’s just a very efficient way of marketing robots to mass consumers,” says Arsenio. “Smartphones basically have everything you need, including cameras and sensors, to turn mere things into robots.”

YDreams has its first product coming out in April: a lamp. It’s a very fine, if utterly unaware, desk lamp on its own, says Artur, but when you connect it to a smartphone loaded with the requisite app, it can do everything from intelligently adjusting lighting to gauging your emotional state.

“It uses its sensors to interface socially,” Artur says. “It can determine how you feel by your facial expressions and voice. In a video conference, it can tell you how other participants are feeling. Or if it senses you’re sad, it may Facebook your girlfriend that you need cheering up.”

Yikes. That may be a bit more interaction than you want from a desk lamp, but get used to it. Robots could intrude in ways that may seem a little off-putting at first — but that’s a marker of any new technology. Moreover, says Paul Saffo, a consulting professor at Stanford’s School of Engineering and a technology forecaster of repute, the highest use of robots won’t be doing old things better. It will be doing new things, things that haven’t been done before, things that weren’t possible before the development of key technology.

“Whenever we have new tech, we invariably try to use it to do old things in a new way — like paving cow paths,” says Saffo. “But the sooner we get over that — the sooner we look beyond the cow paths — the better off we’ll be. Right now, a lot of the thinking is, ‘Let’s have robots drive our cars, and look like people, and be physical objects.’”

But the most important robots working today don’t have physical embodiments, says Saffo — think of them as ether-bots, if you will. Your credit application? It’s a disembodied robot that gets first crack at that. And the same goes for your resume when you apply for a job.

In short, robots already are embedded in our lives in ways we don’t think of as “robotic.” This trend will only accelerate. At a certain point, things may start feeling a little — well Singularity-ish. Not to worry — it’s highly unlikely Skynet will rain nuclear missiles down on us anytime soon. But the melding of robotic technology with dumb things nevertheless presents some profound challenges — mainly because robots and humans react on disparate time scales.

“The real questions now are authority and accountability,” says Saffo. “In other words, we have to figure out how to balance the autonomy systems need to function with the control we need to ensure safety.”

Saffo cites modern passenger planes like the Airbus 330 as an example.

“Essentially they’re flying robots,” he says. “And they fly beautifully, conserving fuel to the optimal degree and so forth. But the design limits are so tight — if they go too fast, they can fall apart; if they go too slow, they stall. And when something goes wrong, the pilot has perhaps 50 kilometers to respond. At typical speeds, that doesn’t add up to much reaction time.”

Saffo noted the crash of Air France Flight 447 in the mid-Atlantic in 2009 involved an Airbus 330. Investigations revealed the likely cause was turbulence complicated by the icing up of the plane’s speed sensors. This caused the autopilot to disengage, and the plane began to roll. The pilots had insufficient time to compensate, and the aircraft slammed into the water at 107 knots.

“The pilot figured out what was wrong — but it was 20 seconds too late,” says Saffo. “To me, it shows we need to devote real effort to defining boundary parameters on autonomous systems. We have to communicate with our robots better. Ideally, we want a human being constantly monitoring the system, so he or she can intervene when necessary. And we need to establish parameters that make intervention even possible.”

Rod Brooks will be speaking at the upcoming Solid Conference in May. If you are interested in robotics and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter.

November 04 2013

Software, hardware, everywhere

Real and virtual are crashing together. On one side is hardware that acts like software: IP-addressable, controllable with JavaScript APIs, able to be stitched into loosely-coupled systems—the mashups of a new era. On the other is software that’s newly capable of dealing with the complex subtleties of the physical world—ingesting huge amounts of data, learning from it, and making decisions in real time.

The result is an entirely new medium that’s just beginning to emerge. We can see it in Ars Electronica Futurelab’s Spaxels, which use drones to render a three-dimensional pixel field; in Baxter, which layers emotive software onto an industrial robot so that anyone can operate it safely and efficiently; in OpenXC, which gives even hobbyist-level programmers access to the software in their cars; in SmartThings, which ties Web services to light switches.

The new medium is something broader than terms like “Internet of Things,” “Industrial Internet,” or “connected devices” suggest. It’s an entirely new discipline that’s being built by software developers, roboticists, manufacturers, hardware engineers, artists, and designers.

Ten years ago, building something as simple as a networked thermometer required some understanding of electrical engineering. Now it’s a Saturday-afternoon project for a beginner. It’s a shift we’ve already seen in programming, where procedural languages have become more powerful and communities have arisen to offer free help with programming problems. As the blending of hardware and software continues, the physical world will become democratized: the ranks of people from lots of different backgrounds who can address physical challenges will swell.
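
To give a sense of how low that bar has become, here is a sketch of that Saturday-afternoon networked thermometer in Python, using nothing but the standard library. The sensor read is simulated, since the real one depends on whatever hardware is attached; everything else here is an illustrative assumption rather than a prescription.

```python
# A "networked thermometer" sketch using only the Python standard library.
# read_temperature_c() is a stand-in: on real hardware it would query a
# sensor over 1-Wire, I2C, or similar; here it returns a plausible fake value.
import json
import random
from http.server import BaseHTTPRequestHandler, HTTPServer


def read_temperature_c() -> float:
    return 21.0 + random.uniform(-0.5, 0.5)  # placeholder for a real sensor


class ThermometerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Any device on the network can GET the current reading as JSON.
        body = json.dumps({"temperature_c": round(read_temperature_c(), 2)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ThermometerHandler).serve_forever()
```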

The outcome of all of this combining and broadening, I hope, will be a world that’s safer, cleaner, more efficient, and more accessible. It may also be a world that’s more intrusive, less private, and more vulnerable to ill-intentioned interference. That’s why it’s crucial that we develop a strong community from the new discipline.

Solid, which Joi Ito and I will present on May 21 and 22 next year, will bring members of the new discipline together to discuss this new medium at the blurred line between real and virtual. We’ll talk about design beyond the computer screen; software that understands and controls the physical world; new hardware tools that will become the building blocks of the connected world; frameworks for prototyping and manufacturing that make it possible for anyone to create physical devices; and anything else that touches both the concrete and abstract worlds.

Solid’s call for proposals is open to the public, as is the call for applications to the Solid Fellowships—a new program that comes with a stipend, a free pass to Solid, and help with travel expenses for students and independent innovators.

The business implications of the new discipline are just beginning to play out. Software companies are eyeing hardware as a way to extend their offerings into the physical world—think, for instance, of Google’s acquisition of Motorola and its work on a driverless car—and companies that build physical machines see software as a crucial component of their products. The physical world as a service, a business model that’s something like software as a service, promises to upend the way we buy and use machines, with huge implications for accessibility and efficiency. These types of service frameworks, along with new prototyping tools and open-source models, are making hardware design and manufacturing vastly easier.

A few interrelated concepts that I’ve been thinking about as we’ve sketched out the idea for Solid:

  • APIs for the physical world. Abstraction, modularity, and loosely-coupled services—the characteristics that make the Web accessible and robust—are coming to the physical world. Open-source libraries for sensors and microcontrollers are bringing easy-to-use and easy-to-integrate software interfaces to everything from weather stations to cars. Networked machines are defining a new physical graph, much like the Web’s information graph. These models are starting to completely reorder our physical environment. It’s becoming easier to trade off functionalities between hardware and software; expect the proportion of intelligence residing in software to increase over time.
  • Manufacturing made frictionless. Amazon’s EC2 made it possible to start writing and selling software with practically no capital investment. New manufacturing-as-a-service frameworks bring the same approach to building things, making factory work fast and capital-light. Development costs are plunging, and it’s becoming easier to serve niches with specialized hardware that’s designed for a single purpose. The pace of innovation in hardware is increasing as the field becomes easier for entrepreneurs to work in and financing becomes available through new platforms like Kickstarter. Companies are emerging now that will become the Amazon Web Services of manufacturing.
  • Software intelligence in the physical world. Machine learning and data-driven optimization have revolutionized the way that companies work with the Web, but the kind of sophisticated knowledge that Amazon and Netflix have accumulated has been elusive in the offline world. Hardware lets software reach beyond the computer screen to bring those kinds of intelligence to the concrete world, gathering data through networked sensors and exerting real-time control in order to optimize complicated systems. Many of the machines around us could become more efficient simply through intelligent control: a furnace can save oil when software, knowing that homeowners are away, turns down the thermostat; a car can save gas when Google Maps, polling its users’ smartphones, discovers a traffic jam and suggests an alternative route—the promise of software intelligence that works above the level of a single machine. The Internet stack now reaches all the way down to the phone in your pocket, the watch on your wrist, and the thermostat on your wall.
  • Every company is a software company. Software is becoming an essential component of big machines for both the builders and the users of those machines. Any company that owns big capital machines needs to get as much out of them as possible by optimizing their operation with software, and any company that builds machines must improve and extend them with layers of software in order to be competitive. As a result, a software startup with promising technology might just as easily be bought by a big industrial company as by a Silicon Valley software firm. This has important organizational, cultural, and competency impact.
  • Complex systems democratized. The physical world is becoming accessible to innovators at every level of expertise. Just as it’s possible to build a Web page with only a few hours’ learning, it’s becoming easier for anyone to build things, whether electronic or not. The result: realms like the urban environment that used to be under centralized control by governments and big companies are now open to innovation from anyone. New economic models and communities will emerge in the physical world just as they’ve emerged online in the last twenty years.
  • The physical world as a service. Anything from an Uber car to a railroad locomotive can be sold as a service, provided that it’s adequately instrumented and dispatched by intelligent software. Good data from the physical world brings about efficient markets, makes cheating difficult, and improves quality of service. And it will revolutionize business models in every industry as service guarantees replace straightforward equipment sales. Instead of just selling electricity, a utility could sell heating and cooling—promising to keep a homeowner’s house at 70 degrees year round. That sales model could improve efficiency and quality of life, bringing about incentive for the utility to invest in more efficient equipment and letting it take advantage of economies of scale.
  • Design after the screen. Our interaction with software no longer needs to be mediated through a keyboard and screen. In the connected world, computers gather data through multiple inputs outside of human awareness and intuit our preferences. The software interface is now a dispersed collection of conventional computers, mobile phones, and embedded sensors, and it acts back onto the world through networked microcontrollers. Computing happens everywhere, and it’s aware of physical-world context.
  • Software replaces physical complexity. A home security system is no longer a closed network of motion sensors and door alarms; it’s software connected to generic sensors that decides when something is amiss. In 2009, Alon Halevy, Peter Norvig, and Fernando Pereira wrote that having lots and lots of data can be more valuable than having the most elegant model. In the connected world, having lots and lots of sensors attached to some clever software will start to win out over single-purpose systems.
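
As a small illustration of that last point, here is a hedged sketch of “clever software plus generic sensors”: the decision about whether something is amiss lives in an ordinary rule over sensor events, not in special-purpose hardware. Every sensor name, event shape, and threshold below is invented for the example.

```python
# Sketch: generic sensor events plus a software rule that decides when
# something is amiss. The event format and modes are illustrative only.
from datetime import datetime, time


def something_amiss(event: dict, mode: str) -> bool:
    """Decide whether a generic sensor event warrants an alert."""
    if mode == "away":
        # Any motion or door event in an empty house is suspicious.
        return event["type"] in ("motion", "door_open")
    if mode == "home":
        # At home, only care about doors opening in the middle of the night.
        is_night = event["time"] >= time(23, 0) or event["time"] <= time(5, 0)
        return event["type"] == "door_open" and is_night
    return False


if __name__ == "__main__":
    evt = {"type": "door_open", "sensor": "back_door",
           "time": datetime.now().time()}
    print("alert!" if something_amiss(evt, mode="away") else "all quiet")
```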

These are some rough thoughts about an area that we’ll all spend the next few years trying to understand. This is an open discussion, and we welcome thoughts on it from anyone.

August 20 2013

If This/Then That (IFTTT) and the Belkin WeMo

Like most good technologists, I am lazy.  In practice, this sometimes means that I will work quite hard with a computer to automate a task that, for all intents and purposes, just isn’t that hard.  In fits and starts for the past 10 years, I have been automating my house in various ways.  It makes my life easier when I am at home, though it does mean that friends who watch my house when I’m gone need to be briefed on how to use it.  If you are expecting to come into my house and use light switches and the TV as you do every place else, well, that’s why you need a personalized orientation to the house.

In this post, I’ll talk briefly about one of the most basic automation tasks I’ve carried out, which is about how the lights in my house are controlled.

The humble light switch was invented in the late 19th century, with the “modern” toggle switch following in the early 20th century.  The toggle switch has not changed in about 100 years because it does exactly what is needed and is well understood.  The only disadvantage to the toggle switch is that you have to touch it to operate it, and that means getting off the couch.

My current home is an older home that was previously a storefront.  All the plumbing is on one wall of the house, which I sometimes jokingly call “the wall of modernity.”  The telephone demarc is on the opposite side, though I’ve made some significant modifications to the telephone wiring from the days when I was using Asterisk more extensively.  As a large open space, there is not much built-in lighting, so most of the lights are freestanding floor lamps.

The initial “itch” that I was scratching with my first home automation project was a consistent forgetfulness to turn off lights when I went to bed — I would settle into bed up in the loft and then realize that I needed to get up and turn off a light.  At that point, remote controlled lights seemed pretty darn attractive.

My first lighting automation was done with Insteon, a superset of the venerable 1970s-era X10 protocol.  With Insteon controls, I could use an RF remote and toggle lights on and off from a few locations within the house.

With the hardware in place, I turned to finding software.  An RF remote is a good start, but I wanted to eliminate even that.  What I found is that the home automation community doesn’t really have off-the-shelf software that “just works” and lets you start doing things out of the box.  I used MisterHouse for a while, but at the time I was using it, the Insteon support was pretty new.  I worked with a friend writing some automation code — well, to be honest, being a tester for him, but we were solving problems unique to our homes, not writing something that was ultimately going to be the answer for our parents.

Earlier this year, I was introduced to IFTTT, which stands for “If This, Then That.”  It is the closest thing I have seen to a generic software stack that can readily be used for home automation purposes.  IFTTT can be used for much more than just home automation, but I’m going to stick to that restricted use for the purpose of this post.

There is much to like about IFTTT, so I’ll focus on just a few attributes:

  • Programming without knowing programming.  I’ve taken enough computer science courses to know how to write a program, but I’m not good enough to be a professional programmer.  IFTTT lets you write rules — essentially, programs that control objects in the physical world — without the high bar of learning specialized syntax or all the system administration work that goes along with running a program.
  • Extending control beyond my home.  Traditional home automation has been based on sensors inside the home.  In order to respond to something, there needs to be a sensor to gather that data.  To turn the lights on after dark, you need a sensor that tells your house it’s dark.  To do something when it’s raining, you need a sensor that gets wet, and so forth.  As you’ll see, IFTTT lets you choose a location and use information from Internet services to assemble the context.
  • One of the cleanest and easiest designs I’ve seen.  I know everybody brags about design, but IFTTT is so simple that a colleague of mine in sales uses IFTTT extensively.  (Insert your own joke about the technical competence of sales representatives here.)

To act in the physical realm of my house, though, IFTTT requires devices that can take action.  The Belkin WeMo is a family of products, including a remote-controlled electrical outlet, light switch, and motion sensor.  The WeMo uses Wi-Fi to connect to my home network, so it can be controlled by any device on my network, or any service on the Internet.
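
As an aside, “controlled by any device on my network” is quite literal. The sketch below shows one way a script might flip a WeMo outlet directly over the LAN, based on the community-documented UPnP/SOAP interface rather than any official Belkin API; the IP address is made up, and the port and service path vary by firmware, so treat the constants as assumptions to verify against your own device.

```python
# Sketch: toggle a Belkin WeMo outlet over the LAN via its UPnP/SOAP control
# endpoint (community-documented, not an official API). The IP is invented
# and the port (typically 49152-49154) depends on firmware.
import urllib.request

WEMO_URL = "http://192.168.1.60:49153/upnp/control/basicevent1"

SOAP_BODY = """<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:SetBinaryState xmlns:u="urn:Belkin:service:basicevent:1">
      <BinaryState>{state}</BinaryState>
    </u:SetBinaryState>
  </s:Body>
</s:Envelope>"""


def set_wemo(on: bool) -> None:
    req = urllib.request.Request(
        WEMO_URL,
        data=SOAP_BODY.format(state=1 if on else 0).encode("utf-8"),
        headers={
            "Content-Type": 'text/xml; charset="utf-8"',
            "SOAPACTION": '"urn:Belkin:service:basicevent:1#SetBinaryState"',
        },
        method="POST",
    )
    urllib.request.urlopen(req, timeout=5).read()


if __name__ == "__main__":
    set_wemo(True)  # turn on the outlet (and the lamp plugged into it)
```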

So, let’s start out with a simple idea: I have a light, and I’d like to turn the light on when it’s dark.  In traditional home automation, I can easily do that by picking a time to turn the light on and using the same time every day.  If I pick a time that is appropriate for the winter, though, the light will come on too early in June.  If I pick a time that is right for June, I’ll have to get up and turn the lights on in December when it gets dark.  Or, using traditional home automation, I could find a source of sunset times, pull them in manually, and hope the data feed I’m using never changes.

That is, until IFTTT.  In this example, I’ll show you how to set up a WeMo to have a light that turns on at sunset.  IFTTT mini-programs are called “Recipes” and consist of a trigger and an action.  Building a recipe is a straightforward guided process that begins by picking the trigger:

Screenshot: IFTTT mini-programs.

IFTTT’s interaction with the world is organized into “channels,” and more channels are being added on a routine basis.  The first step in creating a recipe is to choose the trigger:

Screenshot: choose the trigger.

Triggers are organized alphabetically.  I cut the screen shot off after F because I wanted to get down to W for Weather and WeMo.  The first time you select the weather channel, you’ll be prompted to set a location; I chose my home in San Francisco:

Screenshot: set a location.

The weather channel has a number of components that you can choose to use as a trigger.  Some are expected, such as the weather forecast.  However, the weather channel also allows you to take actions when conditions change.  The weather data used by IFTTT is also rich enough to have pollen count and UV index, which are not always readily available from many forecasts.  For the purpose of our example, we’ll be using the “sunset” option, though it’s easy to imagine creating triggers to control lights when the sky becomes cloudy or turning on air filtration when the pollen count rises:

Screenshot: the sunset option.

Some triggers require additional configuration.  Sunset is straightforward because it’s known for the location that you chose:

Screenshot: additional configuration.

Here at step three, the first part of the rule is done.  We have set up a rule that will fire every day at sunset.  IFTTT helpfully fills in the “this” part of our rule in a way that makes it obvious what we’re doing:

Screenshot: the first part of the rule is done.

The second part of writing a rule is to lay out the action (the “that”) to take when the trigger fires.  Once again, we choose a channel to take the action, choosing from the large number of channels that are available:

Screenshot: lay out the action.

We’re trying to control lights plugged into a Belkin WeMo, so we’ll scroll down to “W” and pick the WeMo.  It has the actions that you might expect from what is essentially a power switch: turn off the outlet, turn on the outlet, blink the outlet, or toggle the outlet.  In our case, we want to turn on the outlet to turn on the light:

Screenshot: turn on the outlet to turn the light on.

From the menu of options, choose “turn on.”  IFTTT supports having multiple WeMo switches, each of which can be named.  A drop-down allows you to choose the switch being controlled by the rule so that many different devices can be controlled.  For example, a coffeepot might be turned on when an alarm goes off, different lights might be controlled by different rules, and so forth:

Screenshot: a drop-down allows you to choose the switch being controlled.

The last step in creating a rule is to finalize the action.  IFTTT helpfully displays the rule in full in a nice simple form.  Is there any doubt about what the rule we’ve just created will do?

Screenshot: finalize the action.

In my personal setup, I use a rule that turns off the light at 10 p.m. as a reminder to go to bed.  There is not yet an easy way to say “turn off the light two hours after it turned on” because IFTTT doesn’t hold much state (yet).
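
If I really wanted that behavior, a small script running on a machine at home could hold the state that IFTTT doesn’t. Here is a minimal sketch; switch_lamp() is a placeholder for whatever actually flips the outlet (a vendor app, a webhook, or a LAN call like the WeMo sketch earlier), not a real API.

```python
# Sketch: keep the state IFTTT doesn't (when the lamp went on) and turn the
# lamp off two hours later. switch_lamp() is a placeholder, not a real API.
import time
from datetime import datetime, timedelta

ON_DURATION = timedelta(hours=2)


def switch_lamp(on: bool) -> None:
    print(f"{datetime.now():%H:%M} lamp {'on' if on else 'off'}")  # stand-in


if __name__ == "__main__":
    switch_lamp(True)
    turned_on_at = datetime.now()  # the bit of state IFTTT won't remember
    while datetime.now() - turned_on_at < ON_DURATION:
        time.sleep(60)
    switch_lamp(False)
```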

One aspect of IFTTT that I recently appreciated was how easy it is to change the rule.  When going out of town, I decided to have the lights on all night so that pet sitters wouldn’t have to figure out all the ways in which lights could be turned on.  I simply deactivated the “turn off at 10 p.m.” rule, which looked like this:

Screenshot: deactivated the “turn off at 10 pm” rule.

I then replaced it with a rule that turned off the lights at sunrise.  (The lights in question are a low-power strand of LED lights.)

Screenshot: replaced it with a rule that turned off the lights at sunrise.

Could I have accomplished the same tasks in other home automation systems?  Absolutely, but it would have taken me much longer to get to my end goal, and I would have had to do significantly more testing to believe that my automation would behave as expected.  The combination of IFTTT and the WeMo makes setup much easier and more accessible.

August 06 2013

The end of integrated systems

I always travel with a pair of binoculars and, to the puzzlement of my fellow airline passengers, spend part of every flight gazing through them at whatever happens to be below us: Midwestern towns, Pennsylvania strip mines, rural railroads that stretch across the Nevada desert. Over the last 175 years or so, industrialized America has been molded into a collection of human patterns–not just organic New England villages in haphazard layouts, but also the Colorado farm settlements surveyed in strict grids. A close look at the rectangular shapes below reveals minor variations, though. Every jog that a street takes is a testament to some compromise between mankind and our environment that results in a deviation from mathematical perfection. The world, and our interaction with it, can’t conform strictly to top-down ideals.

We’re about to enter a new era in which computers will interact aggressively with our physical environment. Part of the challenge in building the programmable world will be finding ways to make it interact gracefully with the world as it exists today. In what you might call the virtual Internet, human information has been squeezed into the formulations of computer scientists: rigid relational databases, glyphs drawn from UTF-8, information addressed through the Domain Name System. To participate in it, we converse in the language and structures of computers. The physical Internet will need to understand much more flexibly the vagaries of human behavior and the conventions of the built environment.


A few weeks ago, I found myself explaining the premise of the programmable world to a stranger. He caught on when the conversation turned to driverless cars and intelligent infrastructure. “Aha!” he exclaimed, “They have these smart traffic lights in Los Angeles now; they could talk to the cars!”

The idea that centralized traffic computers will someday communicate wirelessly with vehicles, measuring traffic conditions and directing cars, was widely discussed a few years ago, but now it struck me as outdated. Computers can easily read traffic signals with machine vision the same way that humans do–by interpreting red, yellow, and green lights in the context of an intersection’s layout. On the other side, traffic signals often use cameras to analyze volumes and adjust to them. With these kinds of flexible-sensing technologies perfected, there will be no need for rigid, integrated systems to link cars to traffic lights. The example will repeat itself in other cases where software gathers data from sensors and controls physical devices.
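
To make the machine-vision point concrete, here is a toy sketch of the color-thresholding idea in Python with OpenCV. It assumes the image has already been cropped to the signal head, and the HSV ranges are rough illustrative guesses, not values from any production driver-assistance system.

    import cv2


    def classify_signal(signal_bgr):
        """Toy classifier: given an image cropped to a traffic-signal head,
        report which lamp (red, yellow, or green) has the most lit pixels."""
        hsv = cv2.cvtColor(signal_bgr, cv2.COLOR_BGR2HSV)

        # Rough HSV ranges for brightly lit lamps; a real system would calibrate
        # these and also use the intersection's layout as context.
        masks = {
            "red": cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))
            | cv2.inRange(hsv, (170, 120, 120), (180, 255, 255)),
            "yellow": cv2.inRange(hsv, (20, 120, 120), (35, 255, 255)),
            "green": cv2.inRange(hsv, (45, 80, 120), (90, 255, 255)),
        }
        return max(masks, key=lambda color: cv2.countNonZero(masks[color]))


    # Usage: state = classify_signal(cv2.imread("signal_crop.jpg"))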

The world is becoming programmable, with physical devices drawing intelligence from layers of software that can connect to widespread sensor networks and optimize entire systems. Cars drive themselves, thermostats silently adjust to our apparent preferences, wind turbines gird themselves for stormy gusts based on data collected from other turbines. These services are increasingly enabled not by rigid software infrastructure that connects giant systems end-to-end, but by ambient data-gathering, Web-like connections, and machine-learning techniques that help computers interact with the same user interfaces that humans rely on.

We’re headed for a re-hash of the API/Web scraping debate, and even more than on the Web, scraping is poised for an advantage since physical-world conventions have been under refinement for millennia. It’s remarkably easy to differentiate a men’s room door from a women’s room door, or to tell whether a door is locked by looking at the deadbolt, and software that can understand these distinctions is already available.

There will, of course, still be a very big place for integrated systems in the physical Internet. Environments built from scratch; machines that operate in formalized, well-contained environments; and systems whose first priority is reliability and clarity will all make good use of traditional APIs. Trains will always communicate with dispatchers through formal integrated systems, just as Twitter’s API is likely the best foundation for a Twitter app regardless of how sophisticated your Twitter-scraping software might be.

But building a dispatching system for a railroad is nothing like developing a car that could communicate, machine-to-machine, with any traffic light it might encounter. Governments would need to invest billions in retrofitting their signaling systems, cars would need to accommodate the multitude of standards that would emerge from the process, and in order for adoption to be practical, traffic-signal retrofits would need to be universal. It’s better to gently build systems layer by layer that can accept human convention as it already exists, and the technology to do that is emerging now.

A couple of other observations about the superiority of loosely-coupled services in the physical world:

  • Google Maps, Waze, and Inrix all harvest traffic data ambiently from connected devices that constantly report location and speed. The alternative, described before networked navigation devices were common, was to gather this data through connected infrastructure. State departments of transportation, spurred by the SAFETEA-LU transportation bill of 2005 (surely the most tortured acronym in Washington), invested heavily in “smart transportation” infrastructure like roadway sensors and digital overhead signs, with the aim of, for instance, directing traffic around congestion. Compared to any of the Web services, these systems are slow to react, have poor coverage, are inflexible, and are incredibly expensive. The Web services make collateral use of infrastructure–gathering data quietly from devices that were principally installed for other purposes.
  • A few weeks ago I spent a storm-drenched six hours in one of O’Hare Airport’s lower-ring terminals. As we waited for the storm to clear and flights to resume, we watched the data screen over the gate–the Virgil in our inferno–which periodically showed a weather map and updated our predicted departure time. It was clear to anyone with a smartphone, though, that the information it displayed, at the end of a long chain of data transmission and reformatting that went from weather services and the airline’s management system through the airport’s display system, was out of date. The flight updates arrived on the airline’s Web site 10 to 15 minutes before they appeared on the gate screen; the weather map, which showed our storm crossing the Mississippi River even as it appeared over the airfield, was a consistent hour behind. By following updates on the Internet, customers were able to subvert an otherwise-rigid and hierarchical information flow, finding data where it was freshest and most accurate among loosely-coupled services online. Software that can automatically seek out the highest-quality information, flexibly switching between sources, will in turn provide the highest-quality intelligence.

The API represents the world on the computers’ terms: human output squeezed into the constraints of programming conventions. Computers are on their way to participating in the world in human terms, accepting winding roads and handwritten signs. In other words, they’ll soon meet us halfway.

July 23 2013

Interactive map: bike movements in New York City and Washington, D.C.

From midnight to 7:30 A.M., New York is uncharacteristically quiet, its Citibikes–the city’s new shared bicycles–largely stationary and clustered in residential neighborhoods. Then things begin to move: commuters check out the bikes en masse in residential areas across Manhattan and, over the next two hours, relocate them to Midtown, the Flatiron district, SoHo, and Wall Street. There they remain concentrated, mostly used for local trips, until they start to move back outward around 5 P.M.

Washington, D.C.’s bike-share program exhibits a similar pattern, though, as you’d expect, the movement starts a little earlier in the morning. On my animated map, both cities look like they’re breathing–inhaling and then exhaling once over the course of 12 hours or so.

The map below shows availability at bike stations in New York City and the Washington, D.C. area across the course of the day. Solid blue dots represent completely-full bike stations; white dots indicate empty bike stations. Click on any station to see a graph of average availability over time. I’ve written a few thoughts on what this means about the program below the graphic.

We can see some interesting patterns in the bike share data here. First of all, use of bikes for commuting is evidently highest in the residential areas immediately adjacent to dense commercial areas. That makes sense; a bike commute from the East Village to Union Square is extremely easy, and that’s also the sort of trip that tends to be surprisingly difficult by subway. The more remote bike stations in Brooklyn and Arlington exhibit fairly flat availability profiles over the course of the day, suggesting that to the degree they’re used at all, it’s mostly for local trips.

A bit about the map: I built it by scraping, every 10 minutes, the data feeds that underlie the New York and Washington real-time availability maps and storing the results in a database.  (Here is New York’s feed; here is Washington’s.) I then averaged availability by station in 10-minute increments over seven weekdays of collected data. The map uses JavaScript (mostly jQuery) to manipulate an SVG image–changing the opacity of bike-share station dots to represent availability and rendering a graph each time a station is clicked. I used Python and MySQL for the back-end work of collecting the data, aggregating it, and publishing it to a JSON file that the front-end code downloads and parses.
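
For the curious, the collection step looks roughly like the sketch below. The feed URL and field names are placeholders (each city publishes its own format), and I’ve swapped in SQLite for the MySQL back end just to keep the example self-contained.

    import json
    import sqlite3
    import time
    import urllib.request

    # Placeholder URL and field names; substitute your city's real-time feed.
    FEED_URL = "https://example.com/bike-share/station-status.json"


    def poll_once(db):
        """Fetch the feed once and record each station's availability with a timestamp."""
        with urllib.request.urlopen(FEED_URL) as resp:
            stations = json.load(resp)["stations"]  # structure depends on the feed
        now = int(time.time())
        db.executemany(
            "INSERT INTO availability (ts, station_id, bikes, docks) VALUES (?, ?, ?, ?)",
            [(now, s["id"], s["bikes_available"], s["docks_available"]) for s in stations],
        )
        db.commit()


    def main():
        db = sqlite3.connect("bikes.db")
        db.execute(
            "CREATE TABLE IF NOT EXISTS availability "
            "(ts INTEGER, station_id TEXT, bikes INTEGER, docks INTEGER)"
        )
        while True:  # one sample every 10 minutes, as in the map above
            poll_once(db)
            time.sleep(600)


    if __name__ == "__main__":
        main()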

This map, by the way, is an extremely simple example of what’s possible when the physical world is instrumented and programmable. I’ve written about sensor-laden machinery in my research report on the industrial internet, and we plan to continue our work on the programmable world in the coming months.

June 14 2013

Radar podcast: the Internet of Things, PRISM, and defense technology that goes civilian

On this week’s podcast, Jim Stogdill, Roger Magoulas and I talk about things that have been on our minds lately: the NSA’s surveillance programs, what defense contractors will do with their technology as defense budgets dry up, and a Californian who isn’t doing what you think he’s doing with hydroponics.

The odd ad in The Economist that caught Jon’s attention, from Dassault Systemes. Does this suggest that contractors, contemplating years of American and European austerity, are looking for ways to market defense technologies to the civilian world?

Because we’re friendly Web stewards, we provide links to the more obscure things that we talk about in our podcasts. Here they are.

If you enjoyed this podcast, be sure to subscribe on iTunes, on SoundCloud, or directly through our podcast RSS feed.

May 23 2013

Burning the silos

If I’ve seen any theme come up repeatedly over the past year, it’s getting product cycle times down. It’s not the sexiest or most interesting theme, but it’s everywhere: if it’s not on the front burner, it’s always simmering in the background.

Cutting product cycles to the bare minimum is one of the main themes of the Velocity Conference and the DevOps movement, where integration between developers and operations, along with practices like continuous deployment, allows web-native companies like Yahoo! to release upgrades to their web products many times a day. It’s no secret that many traditional enterprises are looking at this model, trying to determine what they can use or implement. Indeed, this is central to their long-term survival; companies as different from Facebook as GE and Ford are learning that they will need to become as agile and nimble as their web-native counterparts.

Integrating development and operations isn’t the only way to shorten product cycles. In his talk at Google I/O, Braden Kowitz talked about shortening the design cycle: rather than build big, complete products that take a lot of time and money, start with something very simple, test it, then iterate quickly. This approach lets you generate and test lots of ideas and throw away the ones that aren’t working. Rather than designing an Edsel only to have it fail when the product is released, the shorter cycles that come from integrating product design with product development let you build iteratively, getting immediate feedback on what works and what doesn’t. To work like this, you need to break down the silos that separate engineers and designers; you need to integrate designers into the product team as early as possible, rather than at the last minute.

Data scientist DJ Patil makes a similar point in Building Data Science Teams. Data isn’t new, nor is data analysis. Our ability to collect data by the petabyte certainly is new, but it isn’t revolutionary in itself. What really characterizes data science is the way data is incorporated into the product development cycle. Data analysts are frustrated and ineffective when they live in silos and all they do is design complex queries and generate reports. They need to be integrated into decision making and product design. A data group that’s isolated from the organization’s business isn’t going to have the understanding it needs to create insights from data. Its role will be limited to confirming what some manager or other thinks, and that’s neither fulfilling nor effective.

If you’ve ever optimized code, you’ve experienced this phenomenon in a different way: neatly designed, modular code may be well-architected and easy to work on, but when you need to tune for performance, the biggest gains usually come from breaking down those carefully designed modules. The well-designed boundaries that characterize many modern software projects are the silos that need to be broken down for the sake of efficiency.

Organizational structure and culture are no different: boundaries make large organizations manageable. That’s ultimately why we start by developing an idea into a prototype, getting it working, then handing it off to a design team to make it look good. And when it looks good, we throw it over the wall to the operations group. The org chart looks nicer when all the designers report to a design manager, all the software developers report to a software manager, and the operations team is part of IT. But that kind of product development is inherently slow, inefficient, and prone to turf battles and misunderstandings between departments.

And whether you’re building products that weigh tons or products that weigh a few ounces, whether you call it Industrial Internet, Smarter Planet, Internet of Things, Sensor Networks, or some other name, the current wave that re-connects computing with the physical world is no different. You can’t separate that electrical engineering team that makes circuits from the mechanical engineering team that builds the moving parts, or from the software team that does the programming. All of these groups have to work together from the start.

DevOps, data science, and even software development itself are teaching us that organizations with neat, well-defined boundaries may be easy to manage, but they don’t get the job done. Siloed development locks you into long, ineffective product cycles that lead you to build products that are obsolete when they’re finished, or that aren’t what the customer wants. The results are inflexible: they are hard to modify, extend, or fix, and it may take months, not days, to get from version 1.0 to 1.0.1.

Innovation is intensely collaborative. Whether we’re building products to help fix health care, or building jet engines, or just making better online web sites, the boundaries created by traditional management are just getting in the way. We can no longer afford that. It’s time to burn the silos.

May 09 2013

Where will software and hardware meet?

I’m a sucker for a good plant tour, and I had a really good one last week when Jim Stogdill and I visited K. Venkatesh Prasad at Ford Motor in Dearborn, Mich. I gave a seminar and we talked at length about Ford’s OpenXC program and its approach to building software platforms.

The highlight of the visit was seeing the scale of Ford’s operation, and particularly the scale of its research and development organization. Prasad’s building is a half-mile into Ford’s vast research and engineering campus. It’s an endless grid of wet labs like you’d see at a university: test tubes and robots all over the place; separate labs for adhesives, textiles, vibration dampening; machines for evaluating what’s in reach for different-sized people.

Prasad explained that much of the R&D that goes into a car is conducted at suppliers–Ford might ask its steel supplier to come up with a lighter, stronger alloy, for instance–but Ford is responsible for integrative research: figuring out how to, say, bond its foam insulation onto that new alloy.

In our more fevered moments, we on the software side of things tend to foresee every problem being reduced to a generic software problem, solvable with brute-force computing and standard machinery. In that interpretation, a theoretical Google car operating system–one that would drive the car and provide Web-based services to passengers–could commoditize the mechanical aspects of the automobile. If you’re not driving, you don’t care much about how the car handles; you just want a comfortable seat, functional air conditioning, and Web connectivity for entertainment. A panel in the dashboard becomes the only substantive point of interaction between a car and its owner, and if every car is running Google’s software in that panel, then there’s not much left to distinguish different makes and models.

When’s the last time you heard much of a debate about Dell laptops versus HP? As long as it’s running the software you want and meets minimum criteria for performance and physical quality, there’s not much to distinguish one laptop maker from another for the vast majority of users. The exception, perhaps, is Apple, which consumers do distinguish from other laptop makers for both its high-quality hardware and its unique software.

That’s how I start to think after a few days in Mountain View. A trip to Detroit pushes me in the other direction: the mechanical aspects of cars are enormously complex. Even incremental changes take vast re-engineering efforts. Changing the shape of a door sill to make a car easier to get into means changing a car’s aesthetics, its frame, the sheet metal that gets stamped to make it, the wires and sensors embedded in it, and the assembly process that puts it together. Everything from structural integrity to user experience needs to be carefully checked before a thousand replicates start driving out of Ford’s plants every day.

So, when it comes to value added, where will the balance between software and machines emerge? Software companies and industrial firms might both try to shift the balance by controlling the interfaces between software and machines: if OpenXC can demonstrate that it’s a better way to interact with Ford cars than any other interface, Ford will retain an advantage.

As physical things get networked and instrumented, software can make up a larger proportion of their value. I’m not sure exactly where that balance will arise, but I have a hard time believing in complete commoditization of the machines beneath the software.

See our free research report on the industrial internet for an overview of the ways that software and machines are coming together.

May 07 2013

Four Short Links: 7 May 2013

  1. Raspberry Pi Wireless Attack Toolkit — A collection of pre-configured or automatically-configured tools that automate and ease the process of creating robust Man-in-the-middle attacks. The toolkit allows you to easily select between several attack modes and is specifically designed to be easily extendable with custom payloads, tools, and attacks. The cornerstone of this project is the ability to inject Browser Exploitation Framework Hooks into a web browser without any warnings, alarms, or alerts to the user. We accomplish this objective mainly through wireless attacks, but also have a limpet mine mode with ettercap and a few other tricks.
  2. Industrial Robot with SDK For Researchers (IEEE Spectrum) — $22,000 industrial robot with 7 degrees-of-freedom arms, integrated cameras, sonar, and torque sensors on every joint. [...] The Baxter research version is still running a core software system that is proprietary, not open. But on top of that the company built the SDK layer, based on ROS (Robot Operating System), and this layer is open source. In addition, there are also some libraries of low level tasks (such as joint control and positioning) that Rethink made open.
  3. OtherMill (Kickstarter) — An easy to use, affordable, computer controlled mill. Take all your DIY projects further with custom circuits and precision machining. (via Mike Loukides)
  4. go-raft (GitHub) — open source implementation of the Raft distributed consensus protocol, in Go. (via Ian Davis)

April 01 2013

Four short links: 1 April 2013

  1. MLDemos — an open-source visualization tool for machine learning algorithms created to help studying and understanding how several algorithms function and how their parameters affect and modify the results in problems of classification, regression, clustering, dimensionality reduction, dynamical systems and reward maximization. (via Mark Alen)
  2. kiln (GitHub) — open source extensible on-device debugging framework for iOS apps.
  3. Industrial Internet — the O’Reilly report on the industrial Internet of things is out. Prasad suggests an illustration: for every car with a rain sensor today, there are more than 10 that don’t have one. Instead of an optical sensor that turns on windshield wipers when it sees water, imagine the human in the car as a sensor — probably somewhat more discerning than the optical sensor in knowing what wiper setting is appropriate. A car could broadcast its wiper setting, along with its location, to the cloud. “Now you’ve got what you might call a rain API — two machines talking, mediated by a human being,” says Prasad. It could alert other cars to the presence of rain, perhaps switching on headlights automatically or changing the assumptions that nearby cars make about road traction. (A sketch of what such a report might look like follows this list.)
  4. Unique in the Crowd: The Privacy Bounds of Human Mobility (PDF, Nature) — We study fifteen months of human mobility data for one and a half million individuals and find that human mobility traces are highly unique. In fact, in a dataset where the location of an individual is specified hourly, and with a spatial resolution equal to that given by the carrier’s antennas, four spatio-temporal points are enough to uniquely identify 95% of the individuals. We coarsen the data spatially and temporally to find a formula for the uniqueness of human mobility traces given their resolution and the available outside information. This formula shows that the uniqueness of mobility traces decays approximately as the 1/10 power of their resolution. Hence, even coarse datasets provide little anonymity. These findings represent fundamental constraints to an individual’s privacy and have important implications for the design of frameworks and institutions dedicated to protect the privacy of individuals. As Edd observed, “You are a unique snowflake, after all.” (via Alasdair Allan)
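
On the “rain API” idea in item 3 above: here is a sketch of what a single report from a car might look like. The field names are invented for illustration; neither Ford nor OpenXC defines such a message, and a real design would have to settle transport, units, and privacy questions first.

    import json
    import time


    def rain_report(vehicle_id, lat, lon, wiper_setting):
        """Build a hypothetical 'rain API' message: the car reports its wiper
        setting and location so a cloud service (or nearby cars) can infer rain."""
        return json.dumps({
            "vehicle_id": vehicle_id,
            "timestamp": int(time.time()),
            "location": {"lat": lat, "lon": lon},
            "wiper_setting": wiper_setting,  # e.g. "off", "intermittent", "low", "high"
        })


    print(rain_report("demo-vehicle-1", 42.3314, -83.0458, "intermittent"))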

March 27 2013

The coming of the industrial internet

The big machines that define modern life — cars, airplanes, furnaces, and so forth — have become exquisitely efficient, safe, and responsive over the last century through constant mechanical refinement. But mechanical refinement has its limits, and there are enormous improvements to be wrung out of the way that big machines are operated: an efficient furnace is still wasteful if it heats a building that no one is using; a safe car is still dangerous in the hands of a bad driver.

It is this challenge that the industrial internet promises to address by layering smart software on top of machines. The last few years have seen enormous advances in software and computing that can handle gushing streams of data and build nuanced models of complex systems. These have been used effectively in advertising and web commerce, where data is easy to gather and control is easy to exert, and marketers have rejoiced.

Thanks to widespread sensors, pervasive networks, and standardized interfaces, similar software can interact with the physical world — harvesting data, analyzing it in context, and making adjustments in real-time. The same data-driven approach that gives us dynamic pricing on Amazon and customized recommendations on Foursquare has already started to make wind turbines more efficient and thermostats more responsive. It may soon remove the need for human drivers and help blast furnaces anticipate changes in electricity prices.

Electric furnace circa 1941

An electric furnace at the Allegheny Ludlum Steel Corp. in Brackenridge, Pa. Circa 1941.
Photo via: Wikimedia Commons.

Those networks and standardized interfaces also make the physical world broadly accessible to innovative people. In the same way that Google’s Geocoding API makes geolocation available to anyone with a bit of general programming knowledge, Ford’s OpenXC platform makes drive-train data from cars available in real-time to anyone who can write a few basic scripts. That model scales: Boeing’s 787 Dreamliner uses modular flight systems that communicate with each other over something like an Ethernet, with each component presenting an application programming interface (API) by which it can be controlled. Anyone with a brilliant idea for a new autopilot algorithm could (in theory) implement it without particular expertise in, say, jet engine operation.
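
As a sketch of what “a few basic scripts” might mean in practice, the snippet below averages vehicle speed from an OpenXC-style trace. It assumes the one-JSON-message-per-line shape of OpenXC’s published trace files, with name/value fields; treat that shape as an assumption and adjust it to whatever your data source actually emits.

    import json


    def average_speed(trace_path):
        """Average the vehicle_speed readings in an OpenXC-style trace file.

        Assumes one JSON message per line, shaped like {"name": ..., "value": ...};
        verify the field names against the source you actually use.
        """
        speeds = []
        with open(trace_path) as trace:
            for line in trace:
                line = line.strip()
                if not line:
                    continue
                msg = json.loads(line)
                if msg.get("name") == "vehicle_speed":
                    speeds.append(float(msg["value"]))
        return sum(speeds) / len(speeds) if speeds else 0.0


    # Usage: print(average_speed("drive-home.json"))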

For a complete description of the industrial internet, see our new research report on the topic. In short, we foresee that the industrial internet* will:

  • Draw data from wide sensor networks and optimize systems in real-time.
  • Replace both assets and labor with software intelligence.
  • Bring the software industry’s rapid development and short upgrade cycles to big machines.
  • Mask the underlying complexity of machines behind web-like APIs, making it possible for innovators without specialized training to contribute improvements to the way the physical world works.
  • Create a valuable flow of data that makes decision making easier and more accurate for the operators of big machines as well as for their clients and suppliers.

Our report draws on interviews with industry experts and software innovators to articulate a vision for the coming-together of software and machines. Download the full report for free here, and also read O’Reilly’s ongoing coverage of the industrial internet at oreil.ly/industrial-internet.

* We have adapted our style over the course of our industrial internet investigation. We now use lowercase internet to refer generically to a group of interconnected networks, and uppercase Internet to refer to the public Internet, which includes the World Wide Web.


This is a post in our industrial internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.

March 08 2013

Security on the industrial Internet

Security must evolve along with the industrial Internet. The Stuxnet attack on Iran’s centrifuges in 2010 highlighted both the risks of web-borne attacks and the futility of avoiding them by disconnecting from the Internet (the worm spread, in part, using USB keys). Potential attackers range from small-time corporate spies to sophisticated government units that might use infrastructure disruption as a weapon.

Comparing industrial Internet security to consumer and enterprise web security is difficult; requirements, challenges, and approaches differ significantly. In industrial systems, stability is crucial, and isolating an infected system — or adding an air gap as a preventative measure — can be enormously costly. Some tools that are difficult to apply to the unstructured web are effective in industry, though: since industrial systems usually have known, simplified network structures with highly regular traffic patterns, anomaly detection and other machine-learning techniques hold great promise as ways to find and stop attacks. The addition of more computing power at the network level as companies connect their industrial systems will make these approaches more powerful.
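
As a rough illustration of that anomaly-detection idea, the sketch below fits scikit-learn’s IsolationForest to simple per-window traffic features and flags an unusual window. The features and numbers are synthetic stand-ins, not measurements from any real control network.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Each row summarizes one time window of control-network traffic:
    # [packets per second, mean packet size, distinct destination count].
    # The values are synthetic stand-ins chosen to look like steady polling.
    rng = np.random.default_rng(0)
    normal_windows = np.column_stack([
        rng.normal(200, 10, 500),  # steady polling rate
        rng.normal(120, 5, 500),   # regular packet sizes
        rng.normal(6, 1, 500),     # a handful of known endpoints
    ])

    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal_windows)

    # A sudden burst of traffic to many new hosts should stand out.
    suspect_window = np.array([[650.0, 400.0, 40.0]])
    print(detector.predict(suspect_window))  # -1 flags the window as anomalous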

Back in October, Eugene Kaspersky announced that his security firm is developing an industrial operating system — a “highly-tailored system,” one that “by design won’t be able to carry out any behind-the-scenes, undeclared activity.” Last fall, I interviewed Roel Schouwenberg (@Schouw), the researcher at Kaspersky Lab who is leading development on the new industrial OS. What follows is a lightly-edited transcript of our wide-ranging conversation.

Tell me a bit about how the OS project came about — does it have its origins in Kaspersky’s Stuxnet research?

Roel Schouwenberg: Eugene [Kaspersky] and a few others started talking about this a decade ago, actually. Eugene’s idea was that the only way to solve the malware problem would be to build something that was constructed with security in mind — what he called secure OS. That was just a concept for a while, and then Stuxnet came along and it became increasingly clear that the secure OS implementation would be best suited for the industrial control world, where you have this very specific set of circumstances where it would just work best.

If you work on consumer machines and say, “here is this completely different operating system, have fun with that,” that obviously doesn’t work, but in the industrial control world there are different sets of requirements that place a big emphasis on security above all else.

Saudi Aramco was in the news recently saying that they very strongly believe that the goal of the Shamoon malware was to sabotage production of oil. That didn’t happen, obviously, but the company was crippled. They said the object of Shamoon was to actually mess with the oil production and hurt the company and the global economy that way. Shamoon doesn’t come close in any way, shape or form to Stuxnet, but I think that the significance of Shamoon cannot be overstated. It was a relatively simple piece of malware with very silly programming errors that was still very effective in wiping all those machines. Even though it didn’t affect oil production itself, Saudi Aramco really struggled to recover from that attack. I think that was maybe the most significant event in that part of the cyber war this year.

So, it was a sabotage attempt, not an espionage attempt?

Roel Schouwenberg: Right, it was a sabotage attempt pretty much from the get-go. Shamoon wiped the data off of any computer that it infected. The goal — or the hope of the attackers — was that Shamoon would be able to bridge the air gap and get onto the industrial control network, and then wipe machines on that network, or any Windows machine on the network, at least.

A lot of industrial systems have been designed without Internet connectivity in mind. Now managers want to connect their systems to the Internet for remote monitoring and emergency control, and even if it’s through a VPN to some control center, it still multiplies the number of entry points and complicates security immensely, right?

Roel Schouwenberg: There are a number of issues here. One of the issues is that actual air gaps really decrease productivity. A lot of people in the industrial world say their efficiency goes down by 20 or 30 percent if they really have no connectivity whatsoever. At that point, you have to employ sneakernets, and people are not so happy about that.

Actually, at one industrial control conference recently, people were telling me that it’s now possible to control certain systems with an app on your iPhone or on your iPad, which is obviously crazy when you think about it. The idea of managing a water or electrical facility with your smartphone is absolutely crazy, but from the fact that it’s now available, you can see that there’s demand for it.

It makes sense when you think about it. If there’s a huge blizzard or something like Sandy happens, you don’t want to go out into that kind of weather, and it would be safer and faster to do that kind of administration from home. But that obviously introduces very interesting security risks.

Is the security risk principally a malware risk introduced by the mobile device, or is it just a generic risk that comes from having more points of connection to the system?

Roel Schouwenberg: It’s both, really. The idea of somebody maybe even using their personal phone to manage critical infrastructure is obviously crazy, but it seems that’s becoming more or less mainstream. If all it takes to get access to that type of facility is to infect somebody’s smartphone, that will really be pretty easy because people don’t think about security on smartphones too much at this point. So, we expect more of these targeted attacks.

We have this situation where, as you pointed out, these systems were not designed with Internet connectivity in mind and now there should be connectivity, at least to the corporate network. I think that can be more or less manageable. But now, all of a sudden, these systems are Internet accessible — directly Internet accessible, and not even just that, but there are dedicated apps for it. That’s going to get messy real soon.

There’s a price to IT security in general. Right now, we have a security situation that is basically suboptimal. We’re at a reasonably stable place, but it’s not an easy fight, and all of a sudden you’re adding mobile and cloud and all of these convenience features into the equation and it becomes exponentially more complex.

As you’re working on the secure OS, what is the vision for that? Is it actually something like a SCADA or PLC operating system, or does it sit at a layer in the air gap?

Roel Schouwenberg: Right now, we’re not yet at a point where we can fully disclose what our implementation is going to be, but the idea is basically that the code is quote-unquote perfect. The level of quality that we need in the code is extremely high and is unlike anything that we’ve seen before. So, basically the goal really is for this OS to have no vulnerabilities, which is clearly a very, very high goal and very hard to achieve. The idea is basically to generically detect if some instruction or some command could be malicious, and to block it.

So, it would use machine learning in order to detect the baseline operation of something like a machine tool, and then understand when it’s been instructed to carry out an unusual operation?

Roel Schouwenberg: That would be one of the approaches that one would take for that, yes. But as I mentioned, I can’t go into any specifics at this point, as we are in the very early stage, and I think we are trying to approach some things from a very different perspective.

When we do have a finalized product, we will be sharing source code with whoever is interested. Obviously, we are talking about critical infrastructure here. The stakes could not be higher. We believe that transparency is extremely important, so we will share source code with governments so that they can confirm that the code is solid.

Big industrial firms are often very conservative in how they approach these things, and they have an old system that works. From what you hear, are they receptive to the idea of a completely new operating system?

Roel Schouwenberg: I think there’s been a lot of positive response, and a lot of people are interested. People really trust our expertise; they’ve read the articles that we’ve published in the last few years with regard to Stuxnet. I think many companies are interested in seeing what we’ll be able to come up with, and I think we’ll definitely be given a shot, if you will, especially if we’re sharing source code, and there will be true vetting of the code.

Do you imagine that it would take a generic enough form that a client could install it on its own?

Roel Schouwenberg: We are working with a number of partners to optimize integration. That is something that is needed, and we’re working on that right now.

What are the shortcomings in the industrial operating systems that are out there and being sold in new systems now?

Roel Schouwenberg: Basically, we are a security company. Everything we do and build is with security in mind, and in the industrial control world they are not there. It took Microsoft many years to become an acknowledged player in terms of security development, and when you look at the types of vulnerabilities being discovered in SCADA and ICS software today, those are very basic issues. They’re basically at the same level that we saw 10 years ago for Windows. So, there is a very big gap between where most vendors are and where reality is, if you will.

How similar is an OS that runs in a railroad locomotive to an OS that runs in an automotive factory or an OS that runs in a dam? Do they all face the same problems at some generic level that you can approach the same way?

Roel Schouwenberg: I think most of the problems are generic enough that you can approach them in the same way.

Are there any generic approaches to this kind of security in place at the moment? Is there an industry standard that’s in use?

Roel Schouwenberg: There definitely are a number of rules and guidelines that we see, but they only have limited effects. Sometimes regulations make people jump through hoops that they really shouldn’t have to jump through. On the other hand, we see that there is a lot of leeway when it comes to the air gap. Sometimes air gap systems aren’t as air-gapped as they’re supposed to be, and that’s one of the challenges for these types of things.

You mentioned regulation. What are you facing in that respect?

Roel Schouwenberg: I’ve heard stories from some people in the field saying, “I don’t have a Windows environment, but because of regulations, all of a sudden I have to install a Windows virus scanner” or something along those lines. That’s just one example.

I think that’s definitely one of the other major struggles. Governments are trying to see how they can help with this problem, but environments are different, and it’s tough for them to come up with something that’s generic enough that it actually helps everybody and doesn’t slow people down. But with that in mind, I do expect that in the course of next year [2013] there will be some developments from the U.S. government, or governments elsewhere, that talk more specifically about regulation in the ICS space that would encourage people to be more secure.

To be more secure in a productive way, you think, or is the approach in the case of U.S. regulation counterproductive?

Roel Schouwenberg: I think the U.S. government realizes that just pushing through regulations that aren’t going to help is not going to be productive. From what I’ve seen so far, they are definitely trying to come up with a balanced way, but it’s a very complicated field.

I think one of the things we may see is some stricter enforcement of air gaps, but that is a very complicated question, and as I mentioned, a lot of the people I talk to say their efficiency takes a serious hit between a partially air-gapped system and a full air gap. That’s a very tough question for the government. Would the government want to more strongly enforce full air gaps and take that productivity hit? I think that would be very highly debated, and I’m not sure we’re going to see an answer there soon because taking a 20 or 30 percent efficiency hit is obviously rather substantial.

Is there a set of industry guidelines today that you think is sensible?

Roel Schouwenberg: There is a lot of common sense in the NIST guidelines, but we do hear that there are some environment-specific issues that cannot be addressed in the broad-guideline kind of way. There are different challenges that you may face being a water utility versus an electric utility. Those situations are not mentioned. So, the question will be whether we’ll see more specific guidelines for that moving forward, or whether this will be something that remains untouched. Right now, I don’t have an answer for that.

In the balance between security and productivity, one of the difficult elements is the human who gets frustrated and circumvents his employer’s security systems. Do you have a sense of the approach you might take there? You see this all the time in consumer security.

Roel Schouwenberg: I think that’s human nature. I’ve heard crazy stories of people bringing in their own computers and connecting them to the control network. Maybe they were bored at night and wanted to play a game on the network, and that wasn’t good enough, so they put in another network card and hooked up that server to the Internet. So, all of a sudden, the control network was directly accessible from the Internet. That stuff happens.

There’s only so much you can do when it comes to that. Obviously, education is important — explaining to people, “Look, you’re putting things in serious jeopardy here.” I think some companies have said it’s grounds for immediate dismissal, and that generally gets the job done.

This is also where the generic security approach becomes important, right? Because the idea is that the software never trusts a particular security measure that’s taken. It’s not aware that there’s an air gap, so it doesn’t trust the air gap.

Roel Schouwenberg: Right. When you build something with security in mind, then that is basically synonymous with saying, “Trust no one, trust nothing.” You can argue that the cause of nearly every security vulnerability that we can see is that some code is assumed to be trusted. When you say, “What if somebody tries to do this or that?” the response should not be, “Why would they do that?”

This is where you see that the industrial control world is really different from the IT world. There are not enough people who know a lot about both worlds. Incidents such as Stuxnet and now Shamoon are obviously destructive, but they are major catalysts when it comes to educating people. More people really need to see proof of things going wrong before they say, “OK, I’ll wear a safety belt, I’ll wear a helmet,” and so on. Both fortunately and unfortunately, moving forward we’ll see more such bad events. Hopefully they will help people be more aware of security.

Industrial controls are headed for the consumer area in fairly short order. Cars today have Microsoft operating systems with cellular connectivity that control just the radio and navigation system, but in something like the Google driverless car, the same operating system is also controlling brakes and acceleration.

Roel Schouwenberg: All this hardware around us is becoming soft. It’s all software these days.

A few months ago, I was dropping off my car at the garage and they said, “We updated the software for your transmission.” It’s absolutely crazy. They’d have had to use a cable for that, but at the same time more and more people have apps on their phones that can remotely start their cars or unlock them.

That’s the bigger picture in today’s world. Everything is becoming wired or, rather, wireless. The field, the scope, the domain of potential software vulnerabilities is just growing and growing. Ten years ago it was just your desktop. Now we have smartphones, cars, you name it. That’s part of why Stuxnet was a landmark event. There was just a handful of people looking into that before, and now a lot of people are interested in it.

All these systems that were basically living in their own little world based on security ideas from the 1970s and 80s are inadequate in 2012.

A car is a lot like an industrial control in a power plant where the security assumption is that if you have access to a port, then you have the authority to be there — you’ve been admitted to the plant floor. And that’s changing when you make it wireless. What should the approach there be?

Roel Schouwenberg: Segregation is very important. On some planes, there is a common link between the flight deck and the entertainment system for the passengers, which is all sorts of crazy.

If you deny people access in the first place, that makes things a lot better. A few years ago, we heard that a malicious music file that you burned on a CD and put in a car would cause a denial of service, not just against the radio but against the car’s entire electrical system.

That’s very clearly a case of non-proper segregation. The radio should not touch the critical systems. Now you have these up-and-coming vehicles, like Tesla, that are really more software than anything else, and the big question is, how secure are they? Right now, I’m not sure that there’s anything to answer that question, really. Which is maybe the reason why I went for a relatively simple car; it doesn’t have any wireless options, even though I was offered the feature where you can turn on your car with your smart phone. Maybe I can expense that.

There’s a human security flaw, too, with some of these wireless services where there’s a giant call center somewhere with hundreds of people switching on cars remotely or downloading their diagnostic information. It’s an extremely complicated environment, and it’s exactly what we were talking about earlier where the connectivity takes these systems that used to have just a few entry points and increases the number of entry points exponentially.

Roel Schouwenberg: I was on a plane the summer before last, waiting for the restroom and I looked around, and I suddenly noticed right next to the console that flight attendants use to turn on and off the lighting that there was a USB port. I can only imagine what that USB port could bring me. It’s hiding in plain sight. Obviously, I did not do anything to it, but I’m very curious.

One of O’Reilly’s authors, Alasdair Allan, was staying in a hotel last fall and managed to download the access records for his room’s electronic lock. He only read it, but the lock also probably had APIs for reprogramming the lock, resetting it, and things like that. People seem not to have given a great deal of thought to what it means to provide these entry points.

Roel Schouwenberg: At offensive security conferences, there are some people who look into those sorts of things, and generally the findings are quite scary; these systems are easily infiltrated because nobody ever designed them with security in mind. All these systems are increasingly electronic, increasingly interoperable, and security is not high on their priority lists.

If you want to take another route, look at TV. If you buy a fancy new Samsung, it comes with Skype and whatnot that integrate into the TV. And underneath Skype is Android. So, you’re looking at a device that’s going to be connected to the Internet for the next five or maybe 10 years. When is it going to receive security updates? You’re looking at millions of peripheral electronics. Treadmills, even, run Android now and are going to be without security updates for a very, very long time.

The great thing about Android is that it’s open and you can basically plug it into any device. But it’s so easy that it’s showing up in all sorts of applications where people haven’t thought about its environment very carefully. On a TV, maybe the worst thing that happens is that you miss your favorite TV show for a week. But looking at it from more of a cyber crime perspective, there’s definitely use for cybercriminals there. It’s obviously not critical infrastructure, but it’s something to think about.


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE. This interview was edited and condensed.
