
February 28 2013

New vision in old industry

Nathan Oostendorp thought he’d chosen a good name for his new startup: “Ingenuitas,” derived from Latin meaning “freely born” — appropriate, he thought, for a company that would be built on his own commitment to open-source software.

But Oostendorp, earlier a co-founder of Slashdot, was aiming to bring modern computer vision systems to heavy industry, where the Latinate name didn’t resonate. At his second meeting with a salty former auto executive who would become an advisor, Oostendorp says, “I told him we were going to call the company Ingenuitas, and he immediately said, ‘bronchitis, gingivitis, inginitis. Your company is a disease.’”

And so Sight Machine got its name — one so natural to Michigan’s manufacturers that, says CEO and co-founder Jon Sobel, visitors often say “I spent the afternoon down at Sight” in the same way they might say “down at Anderson” to refer to a tool-and-die shop called Anderson Machine.

Sight Machine is adapting the tools and formulations of the software industry to the much more conservative manufacturing sector. Changing its name was the first of several steps the company took to find cultural alignment with its clients — the demanding engineers who run giant factories that produce things like automotive bolts.

At the heart of Sight Machine is something of a crossover group — programmers and designers who are comfortable with Silicon Valley-style fast innovation, but who have deep roots in Midwestern industry. Sight Machine’s founders quickly realized that they needed to sell their software as a simple, effective, and modular solution and downplay the stack of open-source and proprietary software, developed by young programmers working late hours, that might make tech observers take notice.

Sight Machine staff in the Ann Arbor warehouse where they built a full-scale mockup of an auto-plant quality-control station to test their system.

“Nate saw these big bottlenecks in the way things were being done” in industrial computer vision, says Sobel. “There were no high-level frameworks like Ruby on Rails, and everything is set up to be pass-fail; there are no higher-level analytics.” Sight Machine set out to build what they hope will become “Rails for vision.”

Sight Machine’s co-founders built much of SimpleCV, an open-source computer vision library that’s designed to be accessible to people who aren’t experts in the field. (O’Reilly has published a book on SimpleCV, written by four of Sight Machine’s principals.)
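For a sense of SimpleCV's level of abstraction, here is the kind of minimal script the library is built around: grab a frame from a webcam, find the blobs in it, and draw them. This is a sketch against SimpleCV's published API, and it assumes a webcam is attached.

    from SimpleCV import Camera

    cam = Camera()            # first attached webcam
    img = cam.getImage()      # grab a single frame
    blobs = img.findBlobs()   # connected regions of similar color
    if blobs:
        blobs.draw()          # outline each blob on the frame's drawing layer
    img.show()                # pop up the annotated image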

Sight Machine’s software measures grain-flow characteristics in a fastener, using little more than a commercial flat-bed scanner.

Anthony Oliver, the company’s CTO and co-founder, worked on automation at a big car plant in Toledo, Ohio, before being laid off during the recession and deciding to switch tracks. “They had 200 cameras at the plant, but they were treating them like black boxes — after the picture was done, they’d throw it out. They weren’t collecting trending patterns, doing self-correction,” he says. The plant had bought computer vision systems from integrators that weren’t making much data available for higher-level analytics.

“There’s a huge disconnect [in heavy industry] from the Internet way of doing things,” says co-founder Kurt DeMaagd, also a Slashdot co-founder. “Process engineers are very data-driven, but they haven’t tried these new tools, and they’re not working in real-time.” Oliver says the goal was to build a system that would raise immediate flags, “as opposed to saying, ‘hey, a week ago we were having quality-control problems.’”

Their system puts a clear emphasis on the value of software rather than hardware (though a few of the industrial-quality components, in particular the CCD cameras they use, remain expensive). It can stand on its own as a module in an automated factory; no system integrator necessary. And, in the modern form, it emphasizes data retention and high-level analytics.

Sight Machine’s first client manufactures bolts — fasteners, as they’re called by industrial insiders — and checks the integrity of their steel at the beginning and end of each batch by slicing one open and scanning it on a cheap flatbed scanner. Software discerns the dimensions of the steel’s grain (compression lines that form when the head of the bolt is pounded out) and provides an instantaneous quantitative measure of quality. The previous method involved employees examining bolts through microscopes.
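Sight Machine hasn't published the details of its grain analysis, but a toy version of such a check, using SimpleCV's line-finding features, might look roughly like the sketch below. The scoring rule and every threshold are invented for illustration.

    from SimpleCV import Image

    def grain_score(path, min_length=20):
        """Hypothetical metric: how many long, coherent lines are visible
        in a scanned fastener cross-section. Not Sight Machine's method."""
        img = Image(path)
        lines = img.findLines()            # Hough-transform line candidates
        if not lines:
            return 0.0
        long_lines = [l for l in lines if l.length() > min_length]
        # More long lines relative to noise suggests clearer grain flow
        return len(long_lines) / float(len(lines))

    if grain_score("fastener_scan.png") < 0.5:   # threshold is illustrative
        print("Flag this batch for metallurgical review")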

Swarming in a tank at a fish farm. Sight Machine’s object: to signal the feeding system to stop putting food in the tank once the fish had eaten enough to be satisfied. Photo: courtesy Sight Machine.

Another client, a fish farm in Michigan, uses Sight Machine’s software to manage feeding — determining when the fish have had their fill by measuring changes in movement and switching off the farm’s auto feeders. Another proposal for an Ann Arbor-based deli and mail-order house would use Sight Machine software to watch for bunch-ups in the production line and tell managers to send help to, say, the jam-labeling station if the employee there is having trouble filling orders fast enough.
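To see how little code the fish-feeding idea minimally requires, here is a crude frame-differencing sketch in SimpleCV. It is an illustration, not Sight Machine's actual algorithm, and the activity threshold is invented.

    import time
    from SimpleCV import Camera

    cam = Camera()
    prev = cam.getImage().grayscale()

    while True:
        time.sleep(1.0)
        frame = cam.getImage().grayscale()
        # Mean brightness of the difference image is a crude activity level:
        # feeding fish churn the scene; sated fish barely change it.
        activity = (frame - prev).meanColor()[0]
        if activity < 5.0:                 # threshold is invented
            print("Activity has dropped; signal the auto feeder to stop")
            break
        prev = frame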

And in a live demonstration that I saw at the company’s industrial-park development space, Sight Machine software detected a misapplied badge on the back of an SUV. That’s a problem that could be corrected quickly and easily at an automaker’s quality-control checkpoint, but if a car that doesn’t have four-wheel drive makes it to a dealer’s lot with a chrome “4×4” badge glued to the tailgate, the logistics and inventory costs of fixing the problem will be substantial.

Cameras photograph a car as it rolls through an inspection station, and signal in real time whether trim features like badges, taillights, and rims are correct. Photo: Jon Bruner.

In addition to a real-time check like this one, Sight Machine’s software can also aggregate results for higher-level analysis. Engineers can, for instance, call up photos of every yellow car with a bent antenna. Photo: courtesy Sight Machine.

Industrial firms tend to be conservative in adopting new systems, for a reason: the costs of a plant outage are huge (consider that a large auto assembly plant might produce more than 60 vehicles per hour — an outage of just one minute is equivalent to one car’s worth of lost production). They also tend to have enormous amounts of capital tied up in big, integrated production systems, making changes costly.

Sight Machine’s founders have approached both of those obstacles carefully. The company’s developers work in an industrial park in Ann Arbor, Mich., where they built a mockup of an auto-plant inspection station to test their software under factory conditions. Several of them have worked in heavy manufacturing, including automotive and defense, and the company has its roots at the decidedly machine-oriented Maker Works, a maker space across the street that offers Michigan’s industrial prototypers access to plasma cutters and CNC mills.

Early prototypes got some upgrades to meet the expectations of plant managers used to heavy-duty equipment. Sight Machine’s system uses costly CCD cameras instead of cheaper consumer-grade cameras. To make the company’s first camera mount, design director Kyle Lawson says, he “took a Logitech webcam stand, broke it down, and filled it with pennies to make it feel industrial.” (Now he fabricates heavy-duty camera mounts himself at Maker Works.)

As for the problem of weaving a new assembly line component into an old plant, Sight Machine’s software is free-standing and can be provided as a service or licensed to run locally — minimal integration with plant controls needed. That’s an important consideration for a cautious assembly-line manager who’s experimenting with a startup’s software.

Software and industry are inching closer; the industrial Internet will make it easier for innovators to turn physical-world problems into software problems, and then solve them using rich open-source tools and pervasive networks. Along the way, I think we’ll see lots of stories like Sight Machine’s.


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.

February 15 2013

Masking the complexity of the machine

The Internet has thrived on abstraction and modularity. Web services hide their complexity behind APIs and standardized protocols, and these clean interfaces make it easy to turn them into modules of larger systems that can take advantage of the most intelligent solution to each of many problems.

The Internet revolutionized the software-software interface; the industrial Internet will revolutionize the software-machine interface and, in doing so, will make machines more accessible. I’m using “access” very broadly here — interfaces will make machines accessible to innovators who aren’t necessarily experts in physical machinery, in the same way that the Google Maps API makes interactive mapping accessible to developers who aren’t expert cartographers or front-end specialists. And better access for people who write software means wider applications for those machines.

I’ve recently encountered a couple of widely different examples that illustrate this idea. These come from very different places — an aerospace manufacturer that has built strong linkages between airplanes and software, and an advanced enthusiast who has built new controllers for a pair of industrial robots — but they both involve the development of interfaces that make machines accessible.

The Centaur, built by Aurora Flight Sciences, is an optionally-piloted aircraft: it can be flown remotely, as a drone, or by a certified pilot sitting in the plane, which satisfies U.S. restrictions against domestic drone use. Customers include defense agencies and scientists, who might need a technician onboard to monitor equipment in some cases but in others might send the plane on long trips well beyond a human’s comfort and safety limitations.

John Langford, Aurora’s founder, described his company’s work to me and in the process offered a terrific characterization of what the industrial Internet does: “We’re masking the complexity of the machine.”

The intelligence that Aurora layers onto its planes reduces the entire flight process to an API. The Centaur can even be flown from the pilot’s seat in the plane through the remote-operator control. In other words, Aurora has so comprehensively captured the mechanism of flight in its software that a pilot might as well fly the airplane he’s sitting in through the digital pipeline rather than directly through the flight deck’s physical links.

A highly-evolved interface between the airplane and its software means that the software can draw insight from the plane, reading control settings as well as sensors to improve its piloting performance. “An experienced human pilot might have [flown] 10,000 to 20,000 hours,” says Langford. “We already have operating systems that have hundreds of thousands of flying hours on them. Every anomaly gets built into the memory of the system. As the systems learn, you only have to see something once in order to know how to respond. The [unmanned aircraft] has flight experience that no human pilot will ever build up in his lifetime.”

The simplified interface between humans and the Centaur’s combined machinery and software might eventually make flight vastly more accessible. “What we think the robotic revolution really does is remove operating an air vehicle from the priesthood that it’s part of today, and makes it accessible to people with lower levels of training,” he says.

Trammell Hudson’s PUMA robotic arm setup at NYC Resistor, with laptop running kinematics library, homemade controller stack, and robot.

I saw a different kind of revolutionary accessibility at work when I visited Trammell Hudson at NYC Resistor, a hardware collective in Brooklyn. I came across Hudson through a blog post he wrote detailing his rehabilitation of a pair of industrial robots — reverse-engineering their controls and building his own new controller stack in place of the PLCs that had operated them before they were salvaged from a factory with wire cutters.

“The arm itself has no smarts — just motors and quadrature encoders,” he says. (Even the arm’s current position is stored in the controller’s memory, not the robot’s.) Hudson had to write his own smarts for the robot, from scratch — intelligence that, when the robot was new, resided in purpose-built controllers the size of mini-fridges but that today can be built from open-source software libraries and run on an inexpensive microprocessor.

The robot’s kinematics — the spatial intelligence that decides how to get the robot’s hand from one place to another by repositioning six different joints — run on Hudson’s laptop. He’s interested in building those mathematical models directly into a controller made from widely-available parts, so that anyone else with a similar robot could reproduce it — giving second lives to thousands of high-quality industrial automation components by assigning new intelligence to discarded machines.
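The kinematics themselves are ordinary linear algebra. A minimal sketch for a simplified planar arm (Hudson's PUMA has six rotary joints in three dimensions, but the bookkeeping is the same idea) chains one rotation and one translation per joint:

    import numpy as np

    def fk_planar(joint_angles, link_lengths):
        """Forward kinematics for a planar serial arm: compose one rotation
        plus one translation per joint to locate the end effector."""
        T = np.eye(3)
        for theta, length in zip(joint_angles, link_lengths):
            c, s = np.cos(theta), np.sin(theta)
            T = T @ np.array([[c, -s, c * length],
                              [s,  c, s * length],
                              [0,  0, 1.0]])
        return T[0, 2], T[1, 2]   # (x, y) of the hand

    # Three links, elbow slightly bent
    print(fk_planar([0.3, -0.5, 0.2], [0.4, 0.3, 0.2]))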

“The hardware itself is very durable,” Hudson told me. “The software is where the interesting things are happening, and the controllers age very rapidly.”

Hudson’s remarkable feat of Saturday-afternoon electrical engineering was made possible by open-source microcontrollers, software libraries, and hardware interfaces (and, naturally, his own ingenuity). But he told me the most important factor in the success of his project was the rise of an online community that has an extraordinarily specialized and sophisticated understanding of electronics. “The ease of finding information now is incredible,” he said. “Some guy posted the correct voltage for releasing the arm’s brake, and I was able to find it in a few minutes and avoid damaging anything.”

“We went through a white-collar dark ages in the 1980s,” Hudson said. “People stopped building things. No one took shop class.” Now hardware components, abstracted and modularized, have become accessible to anyone with a technical mindset, who can improve the physical world by writing more intelligence onto it.

In an earlier reverse-engineering project, Hudson wrote his own firmware, which became Magic Lantern, for Canon’s 5D Mark II digital SLR camera. “I have a 4 by 5 [inch] camera from the 1890s — with my Canon 5D Mark II attached to the back,” he says. “The hardware on the old camera is still working fine, but the software on the 5D is way better than chemical film.”


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.

February 11 2013

Frozen turkeys are thermal batteries

I went to San Diego two weeks ago for DistribuTECH as part of our ongoing investigation into the industrial Internet. DistribuTECH is a very large conference for electric utility operators in the U.S., and while I was there I ran into Keyvan Cohanim of Enbala Power Networks. We had an interesting conversation, the upshot of which was my realization that, given the magic of absolute values, as far as the grid is concerned, slowly warming frozen turkeys are thermal batteries.

Enbala’s business is conceptually simple. They use information to optimize the match between electrical supply and demand, helping utilities avoid capital expenditure on under-utilized peak-load generation assets. Then they share those supply-side savings with the participating loads. The deal is simple: let Enbala control your loads within your process constraints, and you’ll earn additional revenue. At the risk of gross over-simplification, they are sort of like an Uber or Airbnb of the electrical grid, but made interesting by the complexity of constraints and the fact that it all has to happen in real time.

They’ve done a video that explains it, and it’s actually pretty good. But in short, they improve on simplistic and disruptive load shedding and load curtailing approaches by modeling a load’s internal processes and constraints, instrumenting them, and then taking control of major loads within those process constraints. They do this such that the load owners won’t even notice it’s happening. Things still stay cold enough, or full enough, or whatever enough, but within the process margins they might be chilled, filled, or whatever’d a little bit earlier, later, faster or slower to better match hundreds of loads across the grid with current supply conditions.

For example, if the wind picks up and some turbines get to spinning, Enbala can go around to their participating loads and start some pumps to fill some tanks, or start chiller compressors to cool things down. Conversely, if generation is taxed, before starting an expensive peak-load plant, utilities will ask Enbala to signal a downshift in pump speeds, delay compressor starts, etc. This is where slowly warming frozen turkeys may be allowed to get a little bit closer to their maximum allowed temperature. And not running a compressor in this case is identical to starting a generator, or perhaps more accurately, discharging a battery that was charged at the lower base-load per-kilowatt rate.

Which made me think, who needs giant expensive inertial batteries to stabilize a mixed grid of base load and sustainable power options, when you can just freeze some turkeys?
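In toy form, the scheduling decision looks something like the sketch below. Every load, limit, and number is invented, and Enbala's real optimization is far richer.

    # name, state (0-1 fraction of its safe band), min, max, kW draw when on;
    # state for the freezer is its coldness margin. All numbers invented.
    loads = [
        ("water_pump", 0.4, 0.2, 0.9, 50),
        ("freezer",    0.7, 0.5, 1.0, 30),
        ("chiller",    0.6, 0.3, 0.8, 80),
    ]

    def dispatch(grid_surplus_kw):
        """Greedy dispatch toward zero imbalance: absorb a surplus by running
        loads with headroom, cover a deficit by pausing loads that can coast."""
        plan, remaining = [], grid_surplus_kw
        for name, state, lo, hi, kw in loads:
            if remaining >= kw and state < hi:       # surplus: charge up early
                plan.append((name, "run"))
                remaining -= kw
            elif remaining <= -kw and state > lo:    # deficit: coast for now
                plan.append((name, "pause"))
                remaining += kw
        return plan

    print(dispatch(60))    # wind picked up: fill tanks now
    print(dispatch(-90))   # generation taxed: let the turkeys warm a little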

I like this story because it highlights two important and related principles of the industrial Internet: that information can be traded for capital expenditures and assets, and that those optimizations get better as more stuff participates in the optimization.


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.

February 07 2013

DIY robotic hands and wells that text (industrial Internet links)

Two makers come together to make a robotic hand for a boy in South Africa (TechCrunch) — The maker movement is adjacent to the industrial Internet, and it’s growing fast as a rich source of innovative thinking wherever machines and software meet. In this case, Ivan Owen and Richard Van As built a robotic hand for a South African five-year-old who was born missing fingers on his right hand. Owen is an automation technician and Van As is a tradesman. They did their work on a pair of donated MakerBots — evidence that design for machines and the physical world at large is more accessible than ever to bright enthusiasts from lots of different backgrounds. The designers even open-sourced their work; the hand’s CAD files are available at Thingiverse. Owen and Van As are running a Fundly campaign; more information is available at their Web site.

WellDone — Utilities in the developed world use remote monitoring widely to keep far-flung equipment running smoothly, but their model is tough to apply in places where communications infrastructure is thin. This initiative has adapted the philosophy of the industrial Internet to the infrastructure that’s available: SMS text messaging. WellDone is installing water-flow sensors at local wells that send flow data by SMS to a cloud database. The system will alert local technicians when it detects anomalies in water flows, and the information it gathers will inform future data-driven development projects.
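The pipeline described here is simple enough to sketch. The message format, field names, and anomaly rule below are invented for illustration; WellDone's actual protocol isn't published in this post.

    def parse_reading(sms_body):
        """Expects an invented format like 'WELL:17 FLOW:42' (liters/min)."""
        fields = dict(part.split(":") for part in sms_body.split())
        return fields["WELL"], float(fields["FLOW"])

    def check(history, sms_body, drop_fraction=0.5):
        well, flow = parse_reading(sms_body)
        past = history.setdefault(well, [])
        # Alert when flow falls below half the well's running average
        if past and flow < drop_fraction * (sum(past) / len(past)):
            print("Alert technician: well %s flow anomaly (%.0f L/min)"
                  % (well, flow))
        past.append(flow)

    history = {}
    for msg in ["WELL:17 FLOW:40", "WELL:17 FLOW:43", "WELL:17 FLOW:12"]:
        check(history, msg)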

Manufacturing’s Next Chapter (AtlanticLIVE) — I’m visiting this conference in Washington, D.C. today; it’s also being live-streamed at The Atlantic‘s Web site. At 2:35pm Eastern Time and at 3:25pm, panelists will talk about the effect of technology on industry and the rise of advanced manufacturing.

Electricity Data Browser (U.S. Energy Information Administration) — The EIA has made its vast database of detailed electricity statistics available through an integrated interactive portal. The EIA has also built an API that makes more than 400,000 data series available to developers and analysts.


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.

Four short links: 7 February 2013

  1. Tridium Niagara (Wired) — A critical vulnerability discovered in an industrial control system used widely by the military, hospitals and others would allow attackers to remotely control electronic door locks, lighting systems, elevators, electricity and boiler systems, video surveillance cameras, alarms and other critical building facilities, say two security researchers. cf the SANS SCADA conference.
  2. Santa Fe Institute Course: Introduction to Complexity — 11 week course on understanding complex systems: dynamics, chaos, fractals, information theory, self-organization, agent-based modeling, and networks. (via BoingBoing)
  3. Terms of Service Changes — a site that tracks changes to terms of service. (via Andy Baio)
  4. 3D Printing a Replacement Hand for a 5 Year Old Boy (Ars Technica) — the designs are on Thingiverse. For more, see their blog.

February 05 2013

Go to Washington, build the industrial Internet

The White House has issued its call for the second round of Presidential Innovation Fellows, and it includes an invitation to spend a 6- to 12-month “tour of duty” in Washington, building the industrial Internet — or, more precisely, helping the National Institute of Standards and Technology find ways to connect proprietary intelligent machines to each other securely through standardized communication layers.

NIST is looking for two fellows — one with a background in information technology and the other from physical engineering — reflecting the convergence of those fields in the industrial Internet, where challenges move fluidly back and forth between software and hardware.

Shyam Sunder, director of the engineering laboratory at NIST, proposed the fellowships as a way to coordinate the broad public and private research efforts that are going into the industrial Internet. The President’s Council of Advisors on Science and Technology had identified cyber-physical systems as a national priority for federal research and development in 2007 and 2010, and the field was part of the mandate of the Advanced Manufacturing Partnership announced in 2011.

At the same time, private-sector work on the industrial Internet has accelerated in domains like automotive technology, manufacturing, utilities and logistics, says Sunder. “They all have, as their core, networking and information technology being integrated within engineered physical systems. They all have a strong emphasis on sensors, controls and processors that are networked and somehow have to be organized.”

Those domains are blurring, with many functions that used to be handled by specialized hardware now handled in more generically-built software. “This convergence is what motivates our work,” says Sunder. “There’s value to looking at these in a coherent fashion.”

NIST has been collaborating with industry and academics for a year now, staging a workshop in March 2012 and an executive round-table in June. Last fall NIST broadened the group of agencies working on industrial Internet development to include, among others, the National Science Foundation.

The call for fellows is broad. “We have cast a wide net for two kinds of people that I think are crucial here,” says Sunder. “One brings the networking and IT perspective in things like interoperability, cybersecurity, and systems integration; and then, of course, the other one brings the physical systems perspective in areas such as controls and sensing.”

And, says Sunder, the environment in which the fellows will work is specialized and will require careful coordination with industry. “Industrial control systems traditionally have been very tightly controlled, tightly-coupled systems. So when you deal with the challenge of the industrial Internet, even though stacks might look similar [to those used in the open Internet], the details of those stacks can be incredibly challenging to define because you have to balance the openness and the security aspects in very interesting and clever ways. Industry is on the front line of these challenges, and by building bridges with industry, we can help elucidate what those frameworks might be in all these areas.”

The standards-setting deals with what Sunder calls an “open connection layer,” which sits on top of proprietary intelligence and contains communications between machines — cars exchanging information with each other, for example, or with highway infrastructure. “It’s the non-proprietary interactions that you have to manage for,” he says. “The economic potential and the public safety potential and the public benefit potential of these new technologies will depend on the ability to deal with non-proprietary interaction.”


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.

January 31 2013

Hacking robotic arms, predicting flight arrival times, manufacturing in America, tracking Disney customers (industrial Internet links)

Flight Quest (GE, powered by Kaggle) — Last November GE, Alaska Airlines, and Kaggle announced the Flight Quest competition, which invites data scientists to build models that can accurately predict when a commercial airline flight touches down and reaches its gate. Since the leaderboard for the competition was activated on December 18, 2012, entrants have already beaten the benchmark prediction accuracy by more than 40%, and there are still two weeks before final submissions are due.

Robot Army (NYC Resistor) — A pair of robotic arms, stripped from their previous application with wire cutters, makes its way across the Manhattan Bridge on a bicycle and into the capable hands of NYC Resistor, a hardware-hacker collective in Brooklyn. There, Trammell Hudson installed new microcontrollers and brought them back into working condition.

The Next Wave of Manufacturing (MIT Technology Review) — This month’s TR special feature is on manufacturing, with special mention of the industrial Internet and its application in factories, as well as a worthwhile interview with the head of the Reshoring Initiative.

At Disney Parks, a Bracelet Meant to Build Loyalty (and Sales) (The New York Times) — A little outside the immediate industrial Internet area, but relevant nevertheless to the practice of measuring every component of an enormous system to look for things that can be improved. In this case, those components are Disney theme park visitors, who will soon use RFID wristbands to pay for concessions, open hotel doors, and get into short lines for amusement rides. Disney will use the resulting data to model consumer behavior in its parks.


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.

January 25 2013

The driverless-car liability question gets ahead of itself

Megan McArdle has taken on the question of how liability might work in the bold new world of driverless cars. Here’s her framing scenario:

Imagine a not-implausible situation: you are driving down a brisk road at 30 mph with a car heading towards you in the other lane at approximately the same speed. A large ball rolls out into the street, too close for you to brake. You, the human, knows that the ball is likely to be followed, in seconds, by a small child; you slam on the brakes (perhaps giving yourself whiplash) or swerve, at considerable risk of hitting the other car.

What should a self-driving car do?  More to the point, if you hit the kid, or the other car, who gets sued?

The lawyer could go after you, with your piddling $250,000 liability policy and approximately 83 cents worth of equity in your home. Or he could go after the automaker, which has billions in cash, and the ultimate responsibility for whatever decision the car made. What do you think is going to happen?

The implication is that the problem of concentrated liability might make automakers reluctant to take the risk of introducing driverless cars.

I think McArdle is taking a bit too much of a leap here. Automakers are accustomed to having the deepest pockets within view of any accident scene. Liability questions raised by this new kind of intelligence will have to be worked out — maybe by forcing drivers to take on the liability for their cars’ performance via their insurance companies, and insurance companies in turn certifying types of technology that they’ll insure. By the time driverless cars become a reality they’ll probably be substantially safer than human drivers, so the insurance companies might be willing to accept the tradeoff and everyone will benefit.

(Incidentally, I’m told by people who have taken rides in Google’s car that the most unnerving part of it is that it drives like your driver’s ed teacher told you to — at exactly the speed limit, with full stops at stop signs and conservative behavior at yellow lights.)

But we’ll probably get the basic liability testing out of the way before a car like Google’s hits the road in large numbers. First will come a wave of machine vision-based driver-assist technologies like automatic cruise control on highways (similar to some kinds of technology that have been around for years). These features present liability issues similar to those in a fully driverless car — can an automaker’s driving judgment be faulted in an accident? — but in a somewhat less fraught context.

The interesting question to me is how the legal system might handle liability for software that effectively drives a car better than any human possibly could. In the kind of scenario that McArdle outlines, a human driver would take intuitive action to avoid an accident — action that will certainly be at least a little bit sub-optimal. Sophisticated driving software could do a much better job at taking the entire context of the situation into account, evaluating several maneuvers, and choosing the one that maximizes survival rates through a coldly rational model.

That doesn’t solve the problem of liability chasing deep pockets, of course, but that’s a problem with the legal system, not the premise of a driverless car. One benefit that carmakers might enjoy is that driverless cars could store black box-type recordings, with detailed data on the context in which every decision was made, in order to show in court that the car’s software acted as well as it possibly could have.

In that case, driverless cars might present a liability problem for anyone who doesn’t own one — a human driver who crashes into a driverless car will find it nearly impossible to show he’s not at fault.


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.


January 24 2013

The bicycle barometer, SCADA security, the smart city in a disaster (industrial Internet links)

The Bicycle Barometer (@richardjpope) — Richard Pope, a project manager at Gov.uk, built what he calls a barometer for his bike commute: it uses weather and transit data to compute a single value that expresses the relative comfort of a bike commute versus a train commute, and displays it on a dial. It’s a clever way of combining two unrelated datasets and then applying algorithmic intelligence. As more data from a sensor-laden world becomes available, we’ll need better tools like this one for reducing it to useful, simple, informed prescriptions.
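The pattern generalizes: normalize each feed to a common scale, weight the scores, and blend them into one needle position. A toy version, with all weights and cutoffs invented:

    def bike_score(rain_mm_per_hr, temp_c, tube_delay_min):
        """0 means take the train, 100 means ride; all weights invented."""
        wet = min(rain_mm_per_hr / 5.0, 1.0)              # 5 mm/hr: soaked
        cold = min(max((10.0 - temp_c) / 15.0, 0.0), 1.0)
        weather_misery = 0.7 * wet + 0.3 * cold
        train_misery = min(tube_delay_min / 30.0, 1.0)
        return 100.0 * (train_misery - weather_misery + 1.0) / 2.0

    print(bike_score(rain_mm_per_hr=0.0, temp_c=18, tube_delay_min=25))  # ride
    print(bike_score(rain_mm_per_hr=6.0, temp_c=3,  tube_delay_min=0))   # train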

Scada Security Predictions: 2013 (IndustryWeek) — Tofino Security founder Eric Byres predicts that 2013 will be the year that tablets start to show up on the plant floor. “We won’t see a full invasion of iDevices on the plant floor in 2013,” he writes, “but the wall will be breached.” Security researchers I’ve spoken with usually say that iOS is a remarkably secure platform, but connecting more devices to industrial control systems means more endpoints that make the job of securing an industrial system much more complicated.

Adaptation (The New Yorker, subscription required for full article) — Some smart-city systems, especially targeted communications and infrastructure monitoring, have become important elements of disaster preparedness.


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.

Broadening the value of the industrial Internet

The industrial Internet makes data available at levels of frequency, accuracy and breadth that managers have never seen before, and the great promise of this data is that it will enable improvements to the big networks from which it flows. Huge systems can be optimized by taking into account the status of every component in real time; failures can be preempted; deteriorating performance can be detected and corrected.

But some of this intelligence can be a hard sell to those being monitored and measured, who worry that hard-learned discretion might be overridden by an engineer’s idealistic notion of how things should work. In professional settings, workers worry about loss of agency, and they imagine that they’ll be micro-managed on any minor variation from normal. Consumers worry about loss of privacy.

The best applications of the industrial Internet handle this conflict by generating value for those being monitored as well as those doing the monitoring — whether by giving workers the information they need to improve the metrics that their managers see, or by giving consumers more data to make better decisions.

Fort Collins (Colo.) Utilities, for instance, expects to recoup its $36 million investment in advanced meters in 11 years through operational savings. The meters obviate the need for meter readers, and the massive amounts of data they produce are useful for detecting power failures and predicting when transformers will need to be replaced before a damaging explosion occurs.

But that data isn’t just useful to those doing the monitoring — it’s turned around to generate value for the households that have advanced meters. Utility customers can see detailed graphs of their electricity and water usage, and integrated software from Aclara uses a statistical model to break out electricity usage into categories like heating, cooling and lighting. It then makes customized recommendations for energy savings by referring to county building records to find the age of a building and its construction type. The utility will even send a staff member to big commercial clients to consult on energy savings.

I spent some time chatting about this idea with Stephen Liguori, GE’s executive director for global innovation and new models, who taught a class at Columbia University’s business school last fall on marketing through business applications. (Disclosure: GE sponsors O’Reilly’s industrial Internet coverage.) Over the course of the six-week class, Liguori’s students developed proposals for industrial Internet applications, composed business plans, and submitted feedback from potential clients in a handful of industries. Liguori walked me through some of his class’ projects and introduced me to his students.

In one project, students designed an application that monitors power plants, providing real-time operating statistics as well as summaries that help managers optimize maintenance investment. It was clear that managers above the level of an individual power plant would want an application like this, but buy-in from lower levels might be harder to achieve; managers are often reluctant to submit themselves to measurement by their bosses.

The best way to build support, Liguori’s students found, was to offer everyone involved useful information, so they added diagnostic features to the app that help plant managers isolate problems with their machinery, as well as contextual documentation that helps workers get to solutions faster. “Once [the plant staff] saw that they would be getting more data out of this, they bought into it,” says Rebby Bliss, one of the students developing the project.

Students working on the other project approached a problem at a large Northeastern hospital: its doctors refer many of their patients to independent radiology clinics rather than the hospital’s own radiology department. Both provide similar quality of care and diagnostics, but doctors find that it’s easier to schedule their patients into the outside clinics.

The students’ response is an application that provides direct access to the hospital’s radiology department scheduling service and to the results of its scans. The hospital’s goal of increasing referrals to its own radiology department might be achieved simply by improving access to data — inbound and outbound — for physicians.

Both applications take into account information overload and the need for good queuing and prioritization. “What we found is that everyone at these facilities is getting tons of alerts,” says Liguori. “When you’re getting 800 alerts a day, you might as well be getting no alerts at all.”


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.

January 23 2013

Four short links: 23 January 2013

  1. These Glasses Thwart Facial Recognition Software (Slate) — good idea, but don’t forget to put a stone in your shoe to thwart gait recognition too.
  2. opsec for Hackers (Slideshare) — how boring and unexciting most of not getting caught is.
  3. DHS Warns Password Cracker Targeting Industrial Networks (Nextgov) — Security consultants recently concluded that there are about 7,200 Internet-facing critical infrastructure devices, many of which use default passwords. Wake me when you stop boggling. Welcome to the Internet of Insecure Things (it’s basically the Internet we already have, but Borat can pwn your hydro dam and your fridge is telling Chinese milspec hackers when you midnight snack).
  4. The Evolution of Steve Mann’s Apparatus (Beta Knowledge) — wearable computing went from “makes you look like a robot who will never get laid” to “looks like sunglasses and promiscuity is an option”.

January 18 2013

Seeing peril — and safety — in a world of connected machines

I’ve spent the last two days at Digital Bond’s excellent S4 conference, listening to descriptions of dramatic industrial exploits and proposals for stopping them. A couple of years ago Stuxnet captured the imagination of people who foresee a world of interconnected infrastructure brought down by cybercriminals and hostile governments. S4 — which stands for SCADA Security Scientific Symposium — is where researchers convene to talk about exactly that sort of threat, in which malicious code makes its way into low-level industrial controls.

It is modern industry’s connectedness that presents the challenge: not only are industrial firms highly interconnected — allowing a worm to enter an engineer’s personal computer as an e-mail attachment and eventually find its way into a factory’s analytical layer, then into its industrial controls, bouncing around through print servers and USB drives — but they’re increasingly connected to the Internet as well.

Vendors counter that the perfect alignments of open doors that security researchers expose are extremely rare and require unusual skill and inside knowledge to exploit. And the most catastrophic visions — in which malicious code shuts down and severely damages a large city’s water system or an entire electrical grid — assume in many cases a level of interconnection that’s still theoretical.

In any case, industrial security appears to be advancing quickly. Security firms are able to make particularly effective use of anomaly detection and other machine-learning-based approaches to uncover malicious efforts, since industrial processes tend to be highly regular and information flows tightly prescribed. These approaches will continue to improve as the networks that feed information back to analytical layers become more sophisticated and computing power makes its way deeper into industrial systems.
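That regularity is why even simple statistics go a long way here. A minimal sketch of the idea, a rolling z-score over a process variable, flags a single injected spike in an otherwise steady signal:

    import math

    def zscore_alerts(samples, window=20, threshold=4.0):
        """Flag any sample more than `threshold` standard deviations away
        from the mean of the preceding `window` samples."""
        alerts = []
        for i in range(window, len(samples)):
            past = samples[i - window:i]
            mean = sum(past) / window
            std = math.sqrt(sum((x - mean) ** 2 for x in past) / window) or 1e-9
            if abs(samples[i] - mean) / std > threshold:
                alerts.append((i, samples[i]))
        return alerts

    # A steady pump pressure with one injected spike
    readings = [100.0 + 0.1 * (i % 3) for i in range(60)]
    readings[45] = 130.0
    print(zscore_alerts(readings))   # -> [(45, 130.0)]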

The efforts of industrial security researchers seem to be paying off. In his keynote talk, Digital Bond founder Dale Peterson noted that the exposure of new vulnerabilities has slowed recently and wondered whether security might be subject to something of a predator-prey cycle, in which weak defenses in industrial controls attract hackers, which draws the attention of security researchers, who in turn drive away the hackers by closing vulnerabilities.

If that’s the case, then we’re looking at a gradual victory for the industrial Internet — as long as we don’t reach the last phase of the predator-prey cycle, in which security researchers, feeling they’ve vanquished their enemies, move on to a different challenge.


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.

January 17 2013

The software-enabled cars of the near-future (industrial Internet links)

OpenXC (Ford Motor) — Ford has taken a significant step in turning its cars into platforms for innovative developers. OpenXC goes beyond the Ford Developer Program, which opens up audio and navigation features, and lets developers get their hands on drivetrain and auto-body data via the on-board diagnostic port. Once you’ve built the vehicle interface from open-source parts, you can use outside intelligence — code running on an Android device — to analyze vehicle data.

Of course, as outside software gets closer to the drivetrain, security becomes more important. OpenXC is read-only at the moment, and it promises “proper hardware isolation to ensure you can’t ‘brick’ your $20,000 investment in a car.”

Still, there are plenty of sophisticated data-machine tieups that developers could build with read-only access to the drivetrain: think of apps that help drivers get better fuel economy by changing their acceleration or, eventually, apps that optimize battery cycles in electric vehicles.
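OpenXC's vehicle data arrives as plain JSON records along the lines of {"name": "vehicle_speed", "value": 88.2} (real traces also carry timestamps). A sketch of the fuel-economy idea, watching a recorded trace for hard acceleration, might look like this; the threshold is invented:

    import json

    def hard_accels(trace_lines, jump_kph=4.0):
        """Count abrupt speed jumps between consecutive vehicle_speed
        messages in a recorded OpenXC trace."""
        events, last = [], None
        for line in trace_lines:
            msg = json.loads(line)
            if msg["name"] != "vehicle_speed":
                continue
            if last is not None and msg["value"] - last > jump_kph:
                events.append(msg["value"])
            last = msg["value"]
        return events

    trace = [
        '{"name": "vehicle_speed", "value": 30.0}',
        '{"name": "vehicle_speed", "value": 31.0}',
        '{"name": "vehicle_speed", "value": 40.0}',   # stomped on it
    ]
    print(hard_accels(trace))   # -> [40.0]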

Drivers with Full Hands Get a Backup: The Car (New York Times) — John Markoff takes a look at automatic driver aides — tools like dynamic cruise control and collision-avoidance warnings that represent something of a middle ground between driverless cars and completely manual vehicles. Some features like these have been around for years, many of them using ultrasonic proximity sensors. But some of these are special, and illustrative of an important element of the industrial Internet: they rely on computer vision like Google’s driverless car. Software is taking over some kinds of machine intelligence that had previously resided in specialized hardware, and it’s creating new kinds of intelligence that hadn’t existed in cars at all.


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.


January 11 2013

Defining the industrial Internet

We’ve been collecting threads on what the industrial Internet means since last fall. More case studies, company profiles and interviews will follow, but here’s how I’m thinking about the framework of the industrial Internet concept. This will undoubtedly continue to evolve as I hear from more people who work in the area and from our brilliant readers.

The crucial feature of the industrial Internet is that it installs intelligence above the level of individual machines — enabling remote control, optimization at the level of the entire system, and sophisticated machine-learning algorithms that can work extremely accurately because they take into account vast quantities of data generated by large systems of machines as well as the external context of every individual machine. Additionally, it can link systems together end-to-end — for instance, integrating railroad routing systems with retailer inventory systems in order to anticipate deliveries accurately.

In other words, it’ll look a lot like the Internet — bringing industry into a new era of what my colleague Roger Magoulas calls “promiscuous connectivity.”

Optimization becomes more efficient as the size of the system being optimized grows (in theory). Your software can take into account lots of machines, learning from a much larger training set and then optimizing both within the machine and for the group of machines working together. Think of a wind farm. There are certain optimizations you need to make at the machine level: the turbine turns itself to face into the wind, the blades adjust themselves through every cycle in order to account for flex and compression, and the machine shuts down during periods of dangerously high wind.

System-wide optimization means that you can operate each turbine in a way that minimizes air disruption to other turbines (these things create wake, just like an airplane, that can disrupt the operation of nearby turbines). When you need to increase or decrease power output across the whole farm, you can do it across lots of machines in a way that minimizes wear (i.e., curtail each machine by 5% or cut off 5% of your machines, or something in between depending on differential output and the impact of different speeds on machine wear). And by gathering data from thousands of machines, you can develop highly-detailed optimization plans.
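In toy form, that farm-level tradeoff can be sketched as an allocation problem. The wear costs below are invented, and a real controller would balance concentrated versus proportional curtailment far more carefully than this greedy pass:

    def curtail(turbines, required_kw):
        """turbines: list of (name, current_kw, wear_cost_per_kw_shed).
        Shed output first where it costs least in wear."""
        plan, remaining = {}, required_kw
        for name, output, cost in sorted(turbines, key=lambda t: t[2]):
            cut = min(output, remaining)
            if cut > 0:
                plan[name] = cut
            remaining -= cut
        return plan

    farm = [("T1", 1500, 0.8), ("T2", 1400, 0.2), ("T3", 1600, 0.5)]
    print(curtail(farm, 900))   # -> {'T2': 900}: the cheap machine absorbs it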

By tying machines together, the industrial Internet will encourage “platformization.” Cars have several control systems, and until very recently they’ve been linked by point-to-point connections: when you move the windshield-wiper lever, it actuates a switch that’s connected to a small PLC that operates the windshield wipers. The brake pedal is part of the chassis-control system, and it’s connected by cable or hydraulics to the brake pads, with an electronic assist somewhere in the middle. The navigation system and radio are part of the same telematics platform, but that platform is not linked to, say, the steering wheel.

The car as enabled by the industrial Internet will be a platform — a bus, in the computing sense — built by the car manufacturer, with other systems communicating with each other through the platform. The brake pedal is an actuator that sends a “brake” signal to the car’s brake controller. The navigation system is able to operate the steering wheel and has access to the same brake controller. Some of these systems will be driven by third-party-built apps that sit on top of the platform.

This will take some time to happen in cars because it takes 10 or 15 years to renew the American auto fleet, because cars are maintained by a vast network of independent mechanics that need change to happen slowly, and because car development works incrementally.

But it’s already happening in commercial aircraft, which often come from clean-sheet designs (as with the Boeing 787 and Airbus A350), and which are maintained under very different circumstances than passenger cars. In Bombardier’s forthcoming C-series midsize jet, for instance, the jet engines do nothing but propel the plane and generate electricity (they don’t generate hydraulic pressure or compress air for the cabin; these are handled by electrically-powered compressors). The plane acts as a giant hardware platform on which all sorts of other systems sit: the landing-gear switch communicates with the landing gear through the aircraft’s bus, rather than by direct connection to the landing gear’s PLC.

The security implications of this sort of integration — in contrast to effectively air-gapped isolation of systems — are obvious. The industrial Internet will need its own specially-developed security mechanisms, which I’ll look into in another post.

The industrial Internet makes it much easier to deploy and harvest data from sensors, which goes back to the system-wide intelligence point above. If you’re operating a wind farm, it’s useful to have wind-speed sensors distributed across the country in order to anticipate wind speeds and directions. And because you’re operating machine-learning algorithms at the system-wide level, you’re able to work large-scale sensor datasets into your system-wide optimization.

That, in turn, will help the industrial Internet take in previously-uncaptured data that’s made newly useful. Venkatesh Prasad, from Ford, pointed out to me that the windshield wipers in your car are a sort of human-actuated rain API. When you turn on your wipers, you’re acting as a sensor — you see water on your windshield, in a quantity sufficient to cause you to want your wipers on, and you set your wipers to a level that’s appropriate to the amount of water on your windshield.

In isolation, all you’re doing is turning on your windshield wipers. But if your car is networked, then it can send a signal to a cloud-based rain-detection service that geocorrelates your car with nearby cars whose wipers are on and makes an assumption about the presence of rain in the area and its intensity. That service could then turn on wipers in other cars nearby or do more sophisticated things — anything from turning on their headlights to adjusting the assumptions that self-driving cars make about road adhesion.
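The cloud side of that wiper service could be as simple as geographic bucketing plus a quorum. In the sketch below, the grid size, quorum, and message shape are all invented:

    from collections import defaultdict

    SPEED_TO_INTENSITY = {"off": 0, "slow": 1, "fast": 2}

    def rain_map(reports, quorum=2):
        """Bucket wiper reports into ~1 km grid cells; call it rain where
        enough nearby cars agree."""
        cells = defaultdict(list)
        for r in reports:
            cell = (round(r["lat"], 2), round(r["lon"], 2))
            cells[cell].append(SPEED_TO_INTENSITY[r["wipers"]])
        return {cell: sum(v) / float(len(v))
                for cell, v in cells.items()
                if len(v) >= quorum and sum(v) > 0}

    reports = [
        {"lat": 42.281, "lon": -83.748, "wipers": "fast"},
        {"lat": 42.283, "lon": -83.746, "wipers": "slow"},
        {"lat": 42.279, "lon": -83.749, "wipers": "fast"},
        {"lat": 40.713, "lon": -74.006, "wipers": "off"},
    ]
    print(rain_map(reports))   # rain near Ann Arbor; the lone NYC car is quiet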

This is an evolving conversation, and I want to hear from readers. What should be included in the definition of the industrial Internet? What examples define, for you, the boundaries of the field?


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.

Related

Defining the industrial Internet

We’ve been collecting threads on what the industrial Internet means since last fall. More case studies, company profiles and interviews will follow, but here’s how I’m thinking about the framework of the industrial Internet concept. This will undoubtedly continue to evolve as I hear from more people who work in the area and from our brilliant readers.

The crucial feature of the industrial Internet is that it installs intelligence above the level of individual machines — enabling remote control, optimization at the level of the entire system, and sophisticated machine-learning algorithms that can work extremely accurately because they take into account vast quantities of data generated by large systems of machines as well as the external context of every individual machine. Additionally, it can link systems together end-to-end — for instance, integrating railroad routing systems with retailer inventory systems in order to anticipate deliveries accurately.

In other words, it’ll look a lot like the Internet — bringing industry into a new era of what my colleague Roger Magoulas calls “promiscuous connectivity.”

Optimization becomes more efficient as the size of the system being optimized grows (in theory). Your software can take into account lots of machines, learning from a much larger training set and then optimizing both within the machine and for the group of machines working together. Think of a wind farm. There are certain optimizations you need to make at the machine level: the turbine turns itself to face into the wind, the blades adjust themselves through every cycle in order to account for flex and compression, and the machine shuts down during periods of dangerously high wind.

System-wide optimization means that when you can operate each turbine in a way that minimizes air disruption to other turbines (these things create wake, just like an airplane, that can disrupt the operation of nearby turbines). When you need to increase or decrease power output across the whole farm, you can do it across lots of machines in a way that minimizes wear (i.e., curtail each machine by 5% or cut off 5% of your machines, or something in between depending on differential output and the impact of different speeds on machine wear). And by gathering data from thousands of machines, you can develop highly-detailed optimization plans.

By tying machines together, the industrial Internet will encourage “platformization.” Cars have several control systems, and until very recently they’ve been linked by point-to-point connections: when you move the windshield-wiper lever, it actuates a switch that’s connected to a small PLC that operates the windshield wipers. The brake pedal is part of the chassis-control system, and it’s connected by cable or hydraulics to the brake pads, with an electronic assist somewhere in the middle. The navigation system and radio are part of the same telematics platform, but that platform is not linked to, say, the steering wheel.

The car as enabled by the industrial Internet will be a platform — a bus, in the computing sense — built by the car manufacturer, with other systems communicating with each other through the platform. The brake pedal is an actuator that sends a “brake” signal to the car’s brake controller. The navigation system is able to operate the steering wheel and has access to the same brake controller. Some of these systems will be driven by third-party-built apps that sit on top of the platform.

This will take some time to happen in cars because it takes 10 or 15 years to renew the American auto fleet, because cars are maintained by a vast network of independent mechanics that need change to happen slowly, and because car development works incrementally.

But it’s already happening in commercial aircraft, which often come from clean-sheet designs (as with the Boeing 787 and Airbus A350), and which are maintained under very different circumstances than passenger cars. In Bombardier’s forthcoming C-series midsize jet, for instance, the jet engines do nothing but propel the plane and generate electricity (they don’t generate hydraulic pressure or compress air for the cabin; these are handled by electrically-powered compressors). The plane acts as a giant hardware platform on which all sorts of other systems sit: the landing-gear switch communicates with the landing gear through the aircraft’s bus, rather than by direct connection to the landing gear’s PLC.

The security implications of this sort of integration — in contrast to effectively air-gapped isolation of systems — are obvious. The industrial Internet will need its own specially-developed security mechanisms, which I’ll look into in another post.

The industrial Internet also makes it much easier to deploy sensors and harvest their data, which goes back to the system-wide intelligence point above. If you're operating a wind farm, it's useful to have wind-speed sensors distributed across the country in order to anticipate wind speeds and directions. And because you're running machine-learning algorithms at the system level, you can work those large-scale sensor datasets directly into your optimization.

That, in turn, will help the industrial Internet capture previously unrecorded data and make it newly useful. Venkatesh Prasad, from Ford, pointed out to me that the windshield wipers in your car are a sort of human-actuated rain API. When you turn on your wipers, you're acting as a sensor — you see water on your windshield, in a quantity sufficient to make you want your wipers on, and you set your wipers to a level that's appropriate to the amount of water on your windshield.

In isolation, all you're doing is turning on your windshield wipers. But if your car is networked, it can send a signal to a cloud-based rain-detection service that geocorrelates your car with nearby cars whose wipers are on and infers the presence and intensity of rain in the area. That service could then turn on wipers in other cars nearby, or do more sophisticated things — anything from turning on their headlights to adjusting the assumptions that self-driving cars make about road adhesion.
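
Here is a minimal sketch of what the cloud side of that rain API might look like. The grid size, the report format, and the use of wiper speed as a proxy for intensity are all assumptions for illustration:

```python
# Hedged sketch of cloud-side rain inference: count wiper-on reports from
# cars within a grid cell and infer rain intensity. All formats invented.

from collections import Counter

def grid_cell(lat: float, lon: float, cell_deg: float = 0.01) -> tuple:
    """Bucket coordinates into roughly 1 km cells so nearby cars geocorrelate."""
    return (round(lat / cell_deg), round(lon / cell_deg))

def infer_rain(reports: list) -> dict:
    """reports: (lat, lon, wiper_speed 0-3) tuples. Returns rain level per cell."""
    totals, wet = Counter(), Counter()
    for lat, lon, wiper_speed in reports:
        cell = grid_cell(lat, lon)
        totals[cell] += 1
        wet[cell] += wiper_speed
    # Average wiper setting stands in for rain intensity in each cell.
    return {cell: wet[cell] / totals[cell] for cell in totals}

reports = [(42.28, -83.74, 2), (42.281, -83.741, 3), (42.50, -83.00, 0)]
print(infer_rain(reports))  # the first two cars share a "wet" cell; the third is dry
```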

This is an evolving conversation, and I want to hear from readers. What should be included in the definition of the industrial Internet? What examples define, for you, the boundaries of the field?


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.

January 07 2013

Industrial Internet links: smart cities return, pilotless commercial aircraft, and more

Mining the urban data (The Economist) — The “smart city” hype cycle has moved beyond ambitious top-down projects and has started to produce useful results: real-time transit data in London, smart meters in Amsterdam. The next step, if Singapore has its way, may be real-time optimization of things like transit systems.

This is your ground pilot speaking (The Economist) — Testing is underway to bring drone-style remotely-piloted aircraft into broader civilian use. One challenge: building in enough on-board intelligence to operate the plane safely if radio links go down.

How GE’s over $100 billion investment in ‘industrial internet’ will add $15 trillion to world GDP (Economic Times) — A broad look at what the industrial Internet means in the context of big data, including interviews with Tim O’Reilly, DJ Patil and Kenn Cukier. (Full disclosure: GE and O’Reilly are collaborating on an industrial Internet series.)

Defining a Digital Network for Building-to-Cloud Efficiency (GreentechEnterprise) — “Eventually, the building will become an IT platform for managing energy a bit like we manage data today. But to get there, you don’t just have to make fans, chillers, lights, backup generators, smart load control circuits and the rest of a building’s hardware smart enough to act as IT assets. A platform — software that ties these disparate devices into the multiple, overlapping technical and economic models that help humans decide how to manage their building — is also required.”


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.

December 20 2012

The industrial Internet from a startup perspective

I don't remember when I first met Todd Huffman, but for the longest time I seemed to run into him in all kinds of odd places, mostly airport waiting areas, as our nomadic paths intersected randomly and with surprising frequency. We don't run into each other in airports anymore because Todd has settled in San Francisco to build 3Scan, his startup at the nexus of the professional-maker movement, science as a service, and the industrial Internet. My colleague Jon Bruner has been talking to airlines, automobile manufacturers, and railroads to get their industrial Internet stories. I recently caught up with Todd to see what the industrial Internet looks like from the perspective of an innovative startup.

First off, I'm sure he wouldn't use the words "industrial Internet" to describe what he and his team are doing, and it might be a little bit of a stretch to categorize 3Scan that way. But I think they are an exemplar of many of the meme's core principles, and it's interesting to think about them in that frame. They are building a device that produces massive amounts of data; a platform to support its complex analysis, distribution, and interoperation; and APIs to observe its operation and control it remotely.

Do a Google image search for “pathologist” and you’ll find lots and lots of pictures of people in white lab coats sitting in front of microscopes. This is a field whose primary user interface hasn’t changed in 200 years. This is equally true for a wide range of scientific research. 3Scan is setting out to change that by simplifying the researcher’s life while making 3D visualization and numerical analysis of the features of whole tissue samples readily available.

At the heart of the system is an illuminated diamond knife blade, coupled with a scanning objective, that makes one-micron-thick tissue slices. A cubic inch of tissue, such as a mouse brain, sliced at one micron takes about seven hours to scan and produces about a terabyte of data. The design of the knife is licensed from the Texas A&M brain networks lab, where Todd spent some of his graduate student years, and 3Scan is building it into a cost-effective platform by taking advantage of technologies from other industries: the tissue stage was adapted from an aerospace application, and the first version of their sample viewer is based on OpenLayers, an open-source geospatial platform. The cost to store and process the data has also dropped by multiple orders of magnitude since Todd first started working on the problem in 2005.
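
Those figures hang together on a back-of-the-envelope check. This is a sketch using only the numbers quoted above; the real instrument's geometry and file format will differ:

```python
# Back-of-envelope check on the figures quoted in the post.

INCH_IN_MICRONS = 25_400
slice_thickness_um = 1
total_bytes = 1e12                 # "about a terabyte" per cubic-inch sample
scan_hours = 7

slices = INCH_IN_MICRONS // slice_thickness_um
print(f"{slices:,} slices per cubic inch")                              # 25,400
print(f"{total_bytes / slices / 1e6:.0f} MB per slice")                 # ~39 MB
print(f"{total_bytes / (scan_hours * 3600) / 1e6:.0f} MB/s sustained")  # ~40 MB/s
```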

Mouse brain vascular network

What makes this interesting from an industrial Internet perspective is the way they are combining an instrumented machine (the microscope) with powerful data analytics and remote access to enable a service. The samples are volumetric and high resolution (one cubic micron) instead of the traditional 2D slides prepared for microscopy, and because they are digital, they are available for 3D viewing, algorithmic inspection, categorization, and analysis as well as traditional 2D viewing. This is a very complex "thing" getting on the Internet in a meaningful way: producing data, exposing APIs, and allowing for remote interaction and control.

Researchers will be able to follow their sample through the entire process from receipt to scan to the application of analytical algorithms. They won’t have to develop the competencies to run a machine that would otherwise have to be dumbed down for grad student deployment, and they will be able to add their own algorithms into the processing pipeline without having to manage the analytical infrastructure.
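
A sketch of what that pluggable pipeline might feel like to a researcher (3Scan's actual API isn't described in this post, so every name here is invented):

```python
# Hypothetical sketch of a pluggable analysis pipeline: researchers register
# their own algorithms and the service runs them against scanned volumes.

def threshold_vessels(volume):
    """A stand-in analysis step a researcher might contribute."""
    return [voxel for voxel in volume if voxel > 0.5]

class Pipeline:
    def __init__(self):
        self.steps = []

    def add_step(self, fn):
        """Register a researcher-supplied algorithm; the service manages the rest."""
        self.steps.append(fn)
        return self

    def run(self, volume):
        for step in self.steps:
            volume = step(volume)
        return volume

pipeline = Pipeline().add_step(threshold_vessels)
print(pipeline.run([0.1, 0.7, 0.9, 0.3]))  # -> [0.7, 0.9]
```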

Longer term, one can imagine the data and analytical pipeline breaking away from this single device and serving as a general analytical platform for a range of machines that provide different forms of volumetric imaging. Or a single sample might be processed by different machines across different scientific service providers (imaging, DNA sequencing, and so on); because each of them operates as a network service rather than as a standalone device, the range of analytical methods can continue to grow.

Talking to Todd I felt like I was getting a glimpse here at the future of connected instruments.


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.

December 19 2012

Three lessons for the industrial Internet

The map of the industrial Internet is still being drawn, which means the decisions we’re making about it now will determine the extent to which it shapes our world.

With that as a backdrop, Tim O’Reilly (@timoreilly) used his presentation at the recent Minds + Machines event to urge the industrial Internet’s architects to apply three key lessons from the Internet’s evolution. These three characteristics gave the Internet its ability to be open, to scale and to adapt — and if these same attributes are applied to the industrial Internet, O’Reilly believes this growing domain has the ability to “change who we are.”

Full video and slides from O’Reilly’s talk are embedded at the end of this piece. You’ll find a handful of insights from the presentation outlined below.

Lesson 1: Simplicity

“Standardize as little as possible, but as much as is needed so the system is able to evolve,” O’Reilly said.

To illustrate this point, O’Reilly drew a line between the simplicity and openness of TCP/IP, the creation and growth of the World Wide Web, and the emergence of Google.

“The Internet is fundamentally permission-less,” O’Reilly said. “Those of us who were early pioneers on the web, all we had to do was download the software and start playing. That’s how the web grew organically. So much more came from that.”

A nice side effect of this model is that success is easy to recognize. "A new platform can be said to succeed when your customers and partners build new features before you do," O'Reilly noted.

The Hourglass Architecture of the Internet

“The IP protocol did the smallest, necessary thing: it specified the format of the data that would be exchanged between machines. Everything else could vary, from the transport protocols and transport medium all the way to the kinds of applications and services that were exchanging that data. Jonathan Zittrain refers to this as the ‘hourglass architecture’ of the Internet.”
— Tim O’Reilly, “Lessons for the Industrial Internet,” slide 6

(The “simplicity” segment begins at the 1:43 mark in the accompanying video.)

Lesson 2: Generativity

“Create an architecture of participation that leads to unexpected innovations and discoveries, and builds a new ecosystem of companies that add value to the network,” O’Reilly explained.

O’Reilly pointed to two examples relevant to this lesson:

  1. Google Maps — Online mapping services were already common when Google launched its Maps service in 2005. So how did Google push to the front of the line? When hackers started mashing up Maps data with external sources, Google embraced those efforts and launched APIs. The result is a mapping platform that’s achieved ubiquity through accessibility. It’s unlikely anyone within Google could have anticipated the innovations that would come once the doors were thrown open.
  2. Apple’s App Store / the rise of Android — The first iPhone didn’t launch with an App Store. Third-party development at that point was limited to web apps. But Apple plotted a new course when it saw jailbreaking grow, and the company now oversees a marketplace with more than 700,000 applications. However, O’Reilly pointed to the rise of Android as evidence that Apple didn’t completely embrace a participatory architecture. “Apple didn’t learn the lesson well enough,” he said.

O’Reilly also noted that the industrial Internet’s considerable upside makes the need for participatory architecture vital. “I think this industrial Internet idea is so powerful and so right, that everybody is going to want to get on board. It’s going to be really important to figure out how you create an open systems approach to this. That doesn’t mean you can’t create enormous value for yourselves, but it’s super important to think about that aspect of it.”

HousingMaps Google Maps mashup

“When Paul Rademacher reverse-engineered the format of Google’s new mapping app to create the first map mashup, HousingMaps.com, Google could have branded him a ‘hacker’ and tried to shut him down. Instead, they responded by opening up free APIs for developers. Other, more closed platforms were left in the dust, and Google Maps became the preferred mapping platform for the web.”
— Tim O’Reilly, “Lessons for the Industrial Internet,” slide 11

(This “generativity” segment begins at the 3:01 mark in the accompanying video.)

Lesson 3: Robustness

Build “the ability to tolerate failure and degrade gracefully rather than catastrophically,” O’Reilly said.

“Graceful failure” may sound like an excuse a parent uses to soothe a child’s bruised ego, but it’s actually a fundamental building block of the Internet.

As an example, O'Reilly noted that one of the key design decisions Tim Berners-Lee made when he developed the World Wide Web was allowing anyone to create a hypertext link that might not resolve. A user would bump up against a 404 message if a link hit a dead end, yet everything else would still function; the user could flow around the obstacle without the entire system crumbling.
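
The same philosophy shows up anywhere code touches the web. A minimal sketch in Python (the URLs are placeholders): a crawler that treats a dead link as a contained, reportable event rather than a fatal one:

```python
# Graceful failure in miniature: a dead link is reported and skipped,
# and the rest of the crawl keeps going.

import urllib.request
import urllib.error

def fetch(url: str):
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.read()
    except urllib.error.HTTPError as err:
        # A 404 affects only this link, not the whole system.
        print(f"{url}: HTTP {err.code}, skipping")
        return None
    except urllib.error.URLError as err:
        print(f"{url}: unreachable ({err.reason}), skipping")
        return None

pages = [fetch(u) for u in ("https://example.com/", "https://example.com/no-such-page")]
print(sum(p is not None for p in pages), "of", len(pages), "pages fetched")
```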

Now, you may think an innocuous 404 failure is a far cry from the failure of a massive chunk of industrial machinery. That’s not necessarily the case. O’Reilly told the story of a Boeing engineer who addressed catastrophic metal fatigue in airplanes through a form of graceful failure. “The right answer wasn’t to eliminate all the cracks,” O’Reilly said. “You had to figure out how to live with them.”

Bottom line: Whether we’re talking about hypertext or airliner materials, an embrace of robustness and graceful failure expands a domain’s possibilities. “One of the big lessons from the Internet is if you don’t know how to fail, you’ll never be able to scale,” O’Reilly said.

De Havilland's Comet and Boeing's graceful failure

“While it may seem that this philosophy of the Internet is inappropriate for the highly engineered systems of the industrial Internet, I’ll remind you of the failure of de Havilland’s Comet in 1954 and the rise of Boeing as the dominant provider of commercial aircraft. Over the course of three years, three Comets fell out of the sky for initially unexplained reasons. It eventually became clear that the problem was metal fatigue. De Havilland tried to eliminate all cracks; Boeing learned to live with them.”
— Tim O’Reilly, “Lessons for the Industrial Internet,” slide 18

(This “robustness” segment begins at the 6:40 mark in the accompanying video.)

Full video: “Minds + Machines 2012: ‘Closing the Loop’ – Lessons of Data for the Industrial Internet”

Additional insights and discussion are contained in this video. The first 13 minutes feature O'Reilly's presentation, followed by a panel discussion with O'Reilly, Paul Maritz of EMC, DJ Patil of Greylock Partners, Hilary Mason of bitly, and Matt Reilly of Accenture Management Consulting.

Slides: “Lessons for the Industrial Internet”


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.


December 18 2012

Interoperating the industrial Internet

One of the most interesting points made at GE's "Unleashing the Industrial Internet" event was GE CEO Jeff Immelt's statement that only 10% of the value of Internet-enabled products is in the connectivity layer; the remaining 90% is in the applications built on top of that layer. These applications enable decision support and the optimization of large-scale systems (systems "above the level of a single device," to use Tim O'Reilly's phrase), and they empower consumers.

Given the jet engine that was sitting on stage, it’s worth seeing how far these ideas can be pushed. Optimizing a jet engine is no small deal; Immelt said that the engine gained an extra 5-10% efficiency through software, and that adds up to real money. The next stage is optimizing the entire aircraft; that’s certainly something GE and its business partners are looking into. But we can push even harder: optimize the entire airport (don’t you hate it when you’re stuck on a jet waiting for one of those trucks to push you back from the gate?). Optimize the entire air traffic system across the worldwide network of airports. This is where we’ll find the real gains in productivity and efficiency.

So it's worth asking about the preconditions for those kinds of gains. It's not computational power; when you come right down to it, there aren't that many airports, or that many flights in the air at one time (something like 10,000 worldwide), and in these days of big data and big distributed systems, that's not a terribly large number. It's not our ability to write software; there would certainly be some tough problems to solve, but nothing as difficult as, say, searching the entire web and returning results in under a second.

But there is one important prerequisite for software that runs above the level of a single machine, and that's interoperability. The inventors of the Internet understood this early on; nothing became a standard unless at least two independent, interoperable implementations existed. The Interop conference didn't start as a trade show; it started as a technical exercise where everyone brought their experimental hardware and software and worked on it until it all played well together.

If we're going to build useful applications on top of the industrial Internet, we must ensure, from the start, that the components we're talking about interoperate. It's not just a matter of putting HTTP everywhere. Devices need common, interoperable data representations, and that problem can't be solved just by invoking XML: several years of sad experience have proven that it's entirely possible to be proprietary under the aegis of "open" XML standards.
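
To make that concrete, here is a minimal sketch of the normalization layer a genuinely shared data representation would eliminate. The two vendor formats, field names, and units are invented for illustration:

```python
# Sketch: telemetry from two (invented) vendor formats normalized into one
# record, so fleet-level dashboards don't care who built the engine.

def normalize(raw: dict) -> dict:
    """Map vendor-specific field names and units onto a shared schema."""
    if raw.get("vendor") == "vendor_a":          # reports temperature in Fahrenheit
        return {"engine_id": raw["id"],
                "egt_celsius": (raw["egt_f"] - 32) * 5 / 9,
                "vibration_mm_s": raw["vib"]}
    if raw.get("vendor") == "vendor_b":          # already metric, different names
        return {"engine_id": raw["serial"],
                "egt_celsius": raw["exhaust_temp_c"],
                "vibration_mm_s": raw["vibration"]}
    raise ValueError("unknown vendor format")

fleet = [
    {"vendor": "vendor_a", "id": "A-100", "egt_f": 1652.0, "vib": 2.1},
    {"vendor": "vendor_b", "serial": "B-7", "exhaust_temp_c": 905.0, "vibration": 1.8},
]
print([normalize(r) for r in fleet])  # one schema, any vendor
```

A real standard would go much further (schemas, versioning, units metadata), but even this much is what invoking XML alone never guaranteed.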

It's a hard problem, in part because it's not simply technical. It's also a problem of business culture and the desire to extract as much monetary value from your particular system as possible. We see the consumer Internet devolving into a set of walled gardens, with interoperable protocols but license agreements that prevent you from moving data from one garden into another. Can the industrial Internet do better? It takes a leap of faith to imagine manufacturers of industrial equipment practicing interoperability, at least in part because so many manufacturers have already developed their own protocols and data representations in isolation. But that's what our situation demands.

Should a GE jet engine interoperate with a jet engine from Pratt and Whitney? What would that mean, and what efficiencies in maintenance and operations would it create? I'm sure any airline would love a single dashboard showing the status of all its equipment, regardless of vendor. Should a Boeing aircraft interoperate with Airbus and Bombardier in a system to exchange in-flight data about weather and other conditions? What if their flight computers were in constant communication with each other? What would that enable? Leaving aviation briefly: self-driving cars have the potential to be much safer than human-driven cars, but they become astronomically safer if your Toyota can exchange data directly with the BMW coming in the opposite direction. ("Oh, you intend to turn left here? Your turn signal is out, by the way.")

Extracting as much value as possible from a walled garden is a false optimization. It may lead you to a local maximum in profitability, but it leaves the biggest gains, the 90% that Immelt talked about in his keynote, behind. Tim O'Reilly has talked about the "clothesline paradox": if you dry your clothes on a clothesline, the money you save doesn't disappear from the economy, even though it disappears from the electric company's bottom line. The economics of walled gardens is the clothesline paradox's evil twin. Building a walled garden may increase local profitability, but it prevents the larger gains, Immelt's 90% productivity gains, from ever materializing. They never reach the economy.

Can the industrial Internet succeed in breaking down walled gardens, whether they arise from business culture, legacy technology, or some other source? That’s a hard problem. But it’s the problem the industrial Internet must solve if it is to succeed.


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.

November 30 2012

To eat or be eaten?

One of Marc Andreessen's many accomplishments was the seminal essay "Why Software Is Eating the World." In it, the creator of Mosaic and Netscape argues for his investment thesis: everything is becoming software. Music and movies led the way, Skype makes the phone company obsolete, and even companies like FedEx and Walmart are all about software: their core competitive advantage isn't driving trucks or hiring part-time employees, it's the software they've developed for managing their logistics.

I'm not going to argue (much) with Marc, because he's mostly right. But I've also been wondering why, when I look at the software world, I get bored fairly quickly. Yeah, yeah, another language that compiles to the JVM. Yeah, yeah, the JavaScript framework of the day. Yeah, yeah, another new component in the Hadoop ecosystem. Seen it. Been there. Done that. In the past 20 years, have we really gained much more than the ability to use sophisticated JavaScript to display ads based on a real-time prediction of the user's next purchase?

When I look at what excites me, I see a much bigger world than just software. I’ve already argued that biology is in the process of exploding, and the biological revolution could be even bigger than the computer revolution. I’m increasingly interested in hardware and gadgetry, which I used to ignore almost completely. And we’re following the “Internet of Things” (and in particular, the “Internet of Very Big Things”) very closely. I’m not saying that software is irrelevant or uninteresting. I firmly believe that software will be a component of every (well, almost every) important new technology. But what grabs me these days isn’t software as a thing in itself, but software as a component of some larger system. The software may be what makes it work, but it’s not about the software.

A dozen or so years ago, people were talking about Internet-enabled refrigerators, a trend that (perhaps fortunately) never caught on. But it suggests an interesting exercise: think of the dumbest device in your home, and imagine what could happen if it were intelligent and network-enabled. My furnace, for example: shortly after buying our house, we had the furnace repairman over seven times during the month of November. Rather than waiting for me to notice at 2 AM that the house was getting cold, it would have been nice for a "smart furnace" to notify the repairman and say, "I'm broken, and here's what's probably wrong." (The Nest doesn't do that, but with a software update it probably could.)
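
In code, that behavior is almost trivial; the hard parts are the sensors and the service relationships. A toy sketch, in which the thresholds, fault logic, and notification hook are all invented:

```python
# A toy version of the "smart furnace": detect a fault and send a
# diagnostic instead of waiting for a cold house at 2 AM.

from typing import Optional

def diagnose(setpoint_c: float, room_c: float, burner_on: bool) -> Optional[str]:
    """Return a best-guess fault description, or None if all is well."""
    if room_c < setpoint_c - 3 and burner_on:
        return "burner running but house not heating: suspect blower or heat exchanger"
    if room_c < setpoint_c - 3 and not burner_on:
        return "burner won't start: suspect igniter or gas valve"
    return None

def notify_repairman(message: str) -> None:
    print(f"[alert to service company] {message}")  # stand-in for a real API call

fault = diagnose(setpoint_c=20.0, room_c=15.5, burner_on=False)
if fault:
    notify_repairman(fault)
```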

The combination of low-cost, small-footprint computing (the BeagleBone, the Raspberry Pi, and the Arduino), simple manufacturing (3D printing and CNC machines), and inexpensive sensors (for $150, the Kinect packages a set of sensors that until recently would easily have cost $10,000) means that it's possible to build smart devices that are much smaller and more capable than anything we could have built back when we were talking about smart refrigerators. We've already seen Internet-enabled scales and robotic vacuum cleaners, and more are on the way.

At the other end of the scale, GE's "Unleashing the Industrial Internet" event had a fully instrumented, network-capable jet engine on stage, with dozens of sensors delivering real-time data about the engine's performance. That data can be used for everything from performance optimization to detecting problems. In a panel, Tim O'Reilly asked Matt Reilly of Accenture, "Do you want more Silicon Valley on your turf?" His immediate reply: "Absolutely."

Even in biology: synthetic biology is basically nothing more than programming with DNA, using a programming language that we don’t yet understand and for which there is still no “definitive guide.” We’re only beginning to get to the point where we can reliably program and build “living software,” but we are certainly going to get there. And the consequences will be profound, as George Church has pointed out.

I'm not convinced that software is going to eat everything. I don't see us living in a completely virtual world, mediated entirely by browsers and dashboards. But I do see everything eating software: software will be a component of everything we do or buy, from our clothing to our food. Why is the FitBit a separate device? Why not integrate it into your shoes? Can we imagine cookies that incorporate proteins engineered to become unappealing when we've eaten too much? Yes, we can, though we may not be happy about it. Seriously, I've had discussions about genetically engineered food that would diagnose diseases and turn different colors in your digestive tract to indicate cancer and other conditions. (You can guess how you'd read the results.)

Andreessen is certainly right in his fundamental argument that software has disrupted, and will continue to disrupt, just about every industry on the planet. He pointed to health care and education as the next industries to be disrupted, and we're certainly seeing that, with Coursera and Udacity in education, and conferences like StrataRx in health care. We just need to push his conclusion farther. Is a robotic car a case of software eating the driver, or of the automobile eating software? You tell me. At the Industrial Internet event, Andreessen was quoted as saying, "We only invest in hardware/software hybrids that would collapse if you pulled the software out." Is an autonomous car something that would collapse if you pulled the software out? The car is still drivable.

In any case, my "what's the dumbest device in the house" exercise is way too limiting. When are we going to build something that we can't now imagine, that isn't simply an upgrade of what we already have? What would it mean for our walls and floors, or our plumbing, to be intelligent? At the other extreme, when will we build devices where we don't even notice that they've "eaten" software? Again, Matt Reilly: "It will be about flights that are on time, luggage that doesn't get lost."

In the last few months, I've seen a number of articles on the future of venture investment. Some argue that it has become so easy and inexpensive to chase "the next Netscape" that big, ambitious projects are being starved of funding. It's hard for me to accept that. Yes, there's a certain amount of herd thinking in venture capital, but investors also know that when everyone is doing the same thing, nobody makes any money. Fred Wilson has argued that momentum is moving from the consumer Internet to enterprise software, certainly a market that is ripe for disruption. But as much as I'd like to see Oracle disrupted, that still isn't ambitious enough.

Innovation will find the funds it needs (and it isn't supposed to be easy). With both SpaceX and Tesla Motors, Elon Musk has proven that it's possible for the right entrepreneur to take insane risks and make headway. Of course, neither has "succeeded" in the sense of a lucrative IPO or buyout, but that's not the point either, since being an entrepreneur is all about risking failure. Neither SpaceX nor Tesla is a Facebook-like "consumer web" startup, nor even an enterprise software or education startup. They're not "software" at all, though both have certainly eaten a lot of software to get where they are. And that leads to the most important question:

What’s the next big thing that’s going to eat software?
