August 06 2013

The end of integrated systems

I always travel with a pair of binoculars and, to the puzzlement of my fellow airline passengers, spend part of every flight gazing through them at whatever happens to be below us: Midwestern towns, Pennsylvania strip mines, rural railroads that stretch across the Nevada desert. Over the last 175 years or so, industrialized America has been molded into a collection of human patterns–not just organic New England villages in haphazard layouts, but also the Colorado farm settlements surveyed in strict grids. A close look at the rectangular shapes below reveals minor variations, though. Every jog that a street takes is a testament to some compromise between mankind and our environment that results in a deviation from mathematical perfection. The world, and our interaction with it, can’t conform strictly to top-down ideals.

We’re about to enter a new era in which computers will interact aggressively with our physical environment. Part of the challenge in building the programmable world will be finding ways to make it interact gracefully with the world as it exists today. In what you might call the virtual Internet, human information has been squeezed into the formulations of computer scientists: rigid relational databases, glyphs drawn from UTF-8, information addressed through the Domain Name System. To participate in it, we converse in the language and structures of computers. The physical Internet will need to understand much more flexibly the vagaries of human behavior and the conventions of the built environment.

A few weeks ago, I found myself explaining the premise of the programmable world to a stranger. He caught on when the conversation turned to driverless cars and intelligent infrastructure. “Aha!” he exclaimed, “They have these smart traffic lights in Los Angeles now; they could talk to the cars!”

The idea that centralized traffic computers will someday communicate wirelessly with vehicles, measuring traffic conditions and directing cars, was widely discussed a few years ago, but now it struck me as outdated. Computers can easily read traffic signals with machine vision the same way that humans do–by interpreting red, yellow, and green lights in the context of an intersection’s layout. On the other side of the exchange, traffic signals already use cameras to measure volumes and adjust their timing. With these kinds of flexible-sensing technologies perfected, there will be no need for rigid, integrated systems to link cars to traffic lights. The same pattern will repeat in other cases where software gathers data from sensors and controls physical devices.
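The machine-vision approach can be illustrated with a toy sketch. A real system (OpenCV-based, say) would first localize the signal housing in the camera frame; the fragment below assumes that step is done and simply classifies a cropped lamp region by its dominant color. All thresholds and names here are invented for illustration.

```python
# Hypothetical sketch: classify a traffic light's state from pixel colors.
# Input is a cropped lamp region as a list of (R, G, B) tuples, 0-255.

def classify_signal(pixels):
    """Return 'red', 'yellow', or 'green' for the dominant lit lamp."""
    counts = {"red": 0, "yellow": 0, "green": 0}
    for r, g, b in pixels:
        # Check yellow first: it also has a high red channel.
        if r > 200 and g > 200 and b < 100:
            counts["yellow"] += 1
        elif r > 200 and g < 100 and b < 100:
            counts["red"] += 1
        elif g > 200 and r < 100 and b < 100:
            counts["green"] += 1
    return max(counts, key=counts.get)

# A crop of a lit red lamp surrounded by dark housing:
crop = [(250, 40, 30)] * 12 + [(90, 90, 90)] * 5
print(classify_signal(crop))  # prints "red"
```

The point of the sketch is the architecture, not the thresholds: the car consumes the same visual convention the human driver does, so no wireless integration with the signal is required.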

The world is becoming programmable, with physical devices drawing intelligence from layers of software that can connect to widespread sensor networks and optimize entire systems. Cars drive themselves, thermostats silently adjust to our apparent preferences, wind turbines gird themselves for stormy gusts based on data collected from other turbines. These services are increasingly enabled not by rigid software infrastructure that connects giant systems end-to-end, but by ambient data-gathering, Web-like connections, and machine-learning techniques that help computers interact with the same user interfaces that humans rely on.

We’re headed for a re-hash of the API/Web scraping debate, and even more than on the Web, scraping is poised for an advantage since physical-world conventions have been under refinement for millennia. It’s remarkably easy to differentiate a men’s room door from a women’s room door, or to tell whether a door is locked by looking at the deadbolt, and software that can understand these distinctions is already available.

There will, of course, still be a very big place for integrated systems in the physical Internet. Environments built from scratch; machines that operate in formalized, well-contained environments; and systems whose first priority is reliability and clarity will all make good use of traditional APIs. Trains will always communicate with dispatchers through formal integrated systems, just as Twitter’s API is likely the best foundation for a Twitter app regardless of how sophisticated your Twitter-scraping software might be.

But building a dispatching system for a railroad is nothing like developing a car that could communicate, machine-to-machine, with any traffic light it might encounter. Governments would need to invest billions in retrofitting their signaling systems, cars would need to accommodate the multitude of standards that would emerge from the process, and in order for adoption to be practical, traffic-signal retrofits would need to be universal. It’s better to gently build systems layer by layer that can accept human convention as it already exists, and the technology to do that is emerging now.

A couple of other observations about the superiority of loosely-coupled services in the physical world:

  • Google Maps, Waze, and Inrix all harvest traffic data ambiently from connected devices that constantly report location and speed. The alternative, described before networked navigation devices were common, was to gather this data through connected infrastructure. State departments of transportation, spurred by the SAFETEA-LU transportation bill of 2005 (surely the most tortured acronym in Washington), invested heavily in “smart transportation” infrastructure like roadway sensors and digital overhead signs, with the aim of, for instance, directing traffic around congestion. Compared to any of the Web services, these systems are slow to react, have poor coverage, are inflexible, and are incredibly expensive. The Web services make collateral use of infrastructure–gathering data quietly from devices that were principally installed for other purposes.
  • A few weeks ago I spent a storm-drenched six hours in one of O’Hare Airport’s lower-ring terminals. As we waited for the storm to clear and flights to resume, we watched the data screen over the gate–the Virgil in our inferno–which periodically showed a weather map and updated our predicted departure time. It was clear to anyone with a smartphone, though, that the information it displayed, at the end of a long chain of data transmission and reformatting that went from weather services and the airline’s management system through the airport’s display system, was out of date. The flight updates arrived on the airline’s Web site 10 to 15 minutes before they appeared on the gate screen; the weather map, which showed our storm crossing the Mississippi River even as it appeared over the airfield, was a consistent hour behind. By following updates on the Internet, customers were able to subvert an otherwise-rigid and hierarchical information flow, finding data where it was freshest and most accurate among loosely-coupled services online. Software that can automatically seek out the highest-quality information, flexibly switching between sources, will in turn provide the highest-quality intelligence.
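What the stranded passengers at O'Hare did by hand–checking several feeds and trusting the freshest one–is easy to express in software. The sketch below is a minimal illustration of that idea; the source names, record fields, and timestamps are all invented, not a real airline API.

```python
from datetime import datetime, timedelta

# Hypothetical sketch: among loosely-coupled sources reporting the same
# fact, prefer the one with the most recent timestamp.

def freshest(updates):
    """Pick the update with the newest 'as_of' timestamp."""
    return max(updates, key=lambda u: u["as_of"])

now = datetime(2013, 8, 6, 18, 0)
updates = [
    {"source": "gate screen",  "departure": "19:40",
     "as_of": now - timedelta(minutes=15)},
    {"source": "airline site", "departure": "20:10",
     "as_of": now - timedelta(minutes=2)},
]
print(freshest(updates)["source"])  # prints "airline site"
```

A system that scores sources on freshness and coverage, rather than trusting one rigid pipeline, gets the behavior the smartphone-equipped travelers discovered for themselves.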

The API represents the world on the computers’ terms: human output constrained by programming conventions. Computers are on their way to participating in the world in human terms, accepting winding roads and handwritten signs. In other words, they’ll soon meet us halfway.

January 25 2013

The driverless-car liability question gets ahead of itself

Megan McArdle has taken on the question of how liability might work in the bold new world of driverless cars. Here’s her framing scenario:

Imagine a not-implausible situation: you are driving down a brisk road at 30 mph with a car heading towards you in the other lane at approximately the same speed. A large ball rolls out into the street, too close for you to brake. You, the human, knows that the ball is likely to be followed, in seconds, by a small child; you slam on the brakes (perhaps giving yourself whiplash) or swerve, at considerable risk of hitting the other car.

What should a self-driving car do? More to the point, if you hit the kid, or the other car, who gets sued?

The lawyer could go after you, with your piddling $250,000 liability policy and approximately 83 cents worth of equity in your home. Or he could go after the automaker, which has billions in cash, and the ultimate responsibility for whatever decision the car made. What do you think is going to happen?

The implication is that the problem of concentrated liability might make automakers reluctant to take the risk of introducing driverless cars.

I think McArdle is taking a bit too much of a leap here. Automakers are accustomed to having the deepest pockets within view of any accident scene. Liability questions raised by this new kind of intelligence will have to be worked out — maybe by forcing drivers to take on the liability for their cars’ performance via their insurance companies, and insurance companies in turn certifying types of technology that they’ll insure. By the time driverless cars become a reality they’ll probably be substantially safer than human drivers, so the insurance companies might be willing to accept the tradeoff and everyone will benefit.

(Incidentally, I’m told by people who have taken rides in Google’s car that the most unnerving part of it is that it drives like your driver’s ed teacher told you to — at exactly the speed limit, with full stops at stop signs and conservative behavior at yellow lights.)

But we’ll probably get the basic liability testing out of the way before a car like Google’s hits the road in large numbers. First will come a wave of machine vision-based driver-assist technologies like automatic cruise control on highways (similar to some kinds of technology that have been around for years). These features present liability issues similar to those in a fully driverless car — can an automaker’s driving judgment be faulted in an accident? — but in a somewhat less fraught context.

The interesting question to me is how the legal system might handle liability for software that effectively drives a car better than any human possibly could. In the kind of scenario that McArdle outlines, a human driver would take intuitive action to avoid an accident — action that will certainly be at least a little bit sub-optimal. Sophisticated driving software could do a much better job at taking the entire context of the situation into account, evaluating several maneuvers, and choosing the one that maximizes survival rates through a coldly rational model.

That doesn’t solve the problem of liability chasing deep pockets, of course, but that’s a problem with the legal system, not the premise of a driverless car. One benefit that carmakers might enjoy is that driverless cars could store black box-type recordings, with detailed data on the context in which every decision was made, in order to show in court that the car’s software acted as well as it possibly could have.
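A black-box recorder of the kind described above is, at its core, a fixed-capacity ring buffer that always holds the most recent decision context. The sketch below shows the shape of such a component; the field names and capacity are illustrative assumptions.

```python
from collections import deque

# Hypothetical sketch of a driverless-car black box: a ring buffer of
# decision records. Old entries are evicted automatically at capacity.

class BlackBox:
    def __init__(self, capacity=1000):
        self.events = deque(maxlen=capacity)

    def record(self, t, sensors, decision):
        self.events.append(
            {"t": t, "sensors": sensors, "decision": decision})

box = BlackBox(capacity=3)
for t in range(5):
    box.record(t, {"speed_mph": 30 - t}, "brake" if t >= 3 else "cruise")
print([e["t"] for e in box.events])  # prints [2, 3, 4]
```

In litigation, replaying such a log would let the automaker show not just what the car did, but what it knew and what alternatives it weighed at the moment of decision.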

In that case, driverless cars might present a liability problem for anyone who doesn’t own one — a human driver who crashes into a driverless car will find it nearly impossible to show he’s not at fault.

This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.
