
February 26 2014

Hurdles to the Internet of Things prove more social than technical

Last Saturday’s IoT Festival at MIT became a meeting-ground for people connecting the physical world. Embedded systems developers, security experts, data scientists, and artists all joined in this event. Although it was called a festival, it had a typical conference format with speakers, slides, and question periods. Hallway discussions were intense.

However you define the Internet of Things (O’Reilly has its own take on it, in our Solid blog site and conference), a lot stands in the way of its promise. And these hurdles are more social than technical.

Participants eye O’Reilly books (Credit: IoT Festival)

Some of the social discussions we need to have before we get the Internet of Things rolling are:

  • What effects will all this data collection, and the injection of intelligence into devices, have on privacy and personal autonomy? And how much of these precious values are we willing to risk to reap the IoT’s potential for improving life, saving resources, and lowering costs?

  • Another set of trade-offs involve competing technical goals. For instance, power consumption can constrain features, and security measures can interfere with latency and speed. What is the priority for each situation in which we are deploying devices? How do we make choices based on our ultimate goals for these things?

  • How do we persuade manufacturers to build standard communication protocols into everyday objects? For those manufacturers using proprietary protocols and keeping the data generated by the objects on private servers, how can we offer them business models that make sharing data more appealing than hoarding it?

  • What data do we really want? Some industries are already drowning in data. (Others, such as health care, have huge amounts of potential data locked up in inaccessible silos.) We have to decide what we need from data and collect just what’s useful. Collection and analysis will probably be iterative: we’ll collect some data to see what we need, then go back and instrument things to collect different data.

  • How much can we trust the IoT? We all fly on planes that depend heavily on sensors, feedback, and automated controls. Will we trust similar controls to keep self-driving vehicles from colliding on the highway at 65 miles per hour? How much can we take humans out of the loop?

  • Similarly, because hubs in the IoT are collecting data and influencing outcomes at the edges, they require a trust relationship among the edges, and between the edges and the entity who is collecting and analyzing the data.

  • What do we need from government? Scads of data that is central to the IoT–people always cite the satellite GIS information as an example–comes from government sources. How much do we ask the government for help, how much do we ask it to get out of the way of private innovators, and how much can private innovators feed back to government to aid its own efforts to make our lives better?

  • It’s certain that IoT will displace workers, but likely that it will also create more employment opportunities. How can we retrain workers and move them to the new opportunities without too much disruption to their lives? Will the machine serve the man, or the other way around?

We already have a lot of the technology we need, but it has to be pulled together. It must be commercially feasible at a mass-production level, and robust in real-life environments presenting all their unpredictability. Many problems need to be solved at what Jim Gettys (famous for his work on the X Window System, OLPC, and bufferbloat) called “layer 8 of the Internet”: politics.

Large conference hall. I am in striped olive green shirt at right side of photo near rear (Credit: IoT Festival)

The conference’s audience was well-balanced in terms of age and included a good number of women, although still outnumbered by men. Attendance was about 175 and most of the conference was simulcast. Videos should be posted soon at the agenda site. Most impressive to me was the relatively small attrition that occurred during the long day.

Isis3D, a sponsor, set up a booth where their impressive 3D printer ratcheted out large and very finely detailed plastic artifacts. Texas Instruments’ University Program organized a number of demonstrations and presentations, including a large-scale giveaway of LaunchPads with their partner, Anaren.

Isis3D demos their 3D printer (Credit: IoT Festival)

Does the IoT hold water?

One application of the IoT we could all applaud is reducing the use of water, pesticide, and fertilizer in agriculture. This application also illustrates some of the challenges the IoT faces. AgSmarts installs sensors that can check temperature, water saturation, and other factors to report which fields need more or less of a given resource. The service could be extended in two ways:

  • Pull in more environmental data from available sources besides the AgSmarts devices. For instance, Dr. Shawana Johnson of Global Marketing Insight suggested that satellite photos (we’re back to that oft-cited GIS data) could show which fields are dry.

  • Close the loop by running all the data through software that adjusts the delivery of water or fertilizer with little or no human intervention. This is a big leap in sophistication, reviving my earlier question about trusting the IoT. Closing the loop would require real-time analysis of data from hundreds of locations. It’s worth noting that AgSmarts offers a page about Analytics, but all it says right now is “Coming Soon.”

Lights, camera, action

A couple other interesting aspects of the IoT are responsive luminaires (light fixtures) and unmanned aerial vehicles, which can track things on the ground through cameras and other sensors.

Consumer products to control indoor lights are already available. Lighting expert John Luciani reported that outdoor lights are usually coordinated through the Zigbee wireless protocol. Some applications for control over outdoor lighting include:

  • Allowing police in their cruisers to brighten the lights during an emergency.

  • Increasing power to LEDs gradually over time, to compensate for their natural dimming.

  • Identifying power supplies that are nearing the end of their life, because LEDs can last much longer than power supplies.

Lights don’t suffer from several of the major constraints that developers struggle with on many IoT devices. First, because luminaires need a lot of power to cast their light, they always have a power supply that can easily handle their radio communications and other computing needs. Second, there is plenty of room for a radio antenna. Finally, recent ANSI standards have allowed a light fixture to contain more control wires.

Thomas Almholt of Texas Instruments reported that Texas electric utilities have been installing smart meters. Theoretically, these could help customers and utilities save money and reduce usage, because rates change every 15 minutes and most customers have no window on these changes. But the biggest benefit they discovered from the smart meters was quite unexpected: teams came out to houses when the meters started to act in bizarre ways, and prevented a number of fires from being started by the rogue meters.

A presentation on UAVs and other drones (not all are air-borne) highlighted uses for journalism, law enforcement, and environmental tracking, along with the risks of putting too much power in the hands of those inside or outside of government who possess drones.

MIT’s SENSEable City Lab, as an example of a positive use, measures water quality in the nearby Charles River through drones, producing data at a much more fine-grained level than could be measured before, spatially and temporally.

Kade Crockford of the Massachusetts ACLU laid out (through a video simulation as scary as it was amusing) the ways UAVs could invade city life and engage in creepy tracking. Drones can go more places than other surveillance technologies (such as helicopters) and can detect our movements without us detecting the drones. The chilling effects of having these robots virtually breathe down our necks may outweigh the actual abuses perpetrated by governments or other actors. This is an illustration of my earlier point about trust between the center and the edges. (Crockford led a discussion on privacy in the conference, but I can’t report on it because I felt I needed to attend a different concurrent session.)

Protocols such as the 802.15 family and Zigbee permit communication among devices, but few of these networks have the processing power to analyze the data they produce and take action on their own. To store their data, base useful decisions on it, and allow humans to view it, we need to bring their data onto the larger Internet.

One tempting solution is to use the cellular network in which companies have invested so much. But there are several challenges to doing so.

  • Cell phone charges can build up quickly.

  • Because of the shortage of phone numbers, an embedded device must use a special mobile subscriber ISDN number (MSISDN), which comes with limitations.

  • To cross the distance to cell towers, radios on the devices need to use much more power than they need to communicate over local wireless options such as WiFi and Bluetooth.

A better option, if the environment permits, is to place a wireless router near a cluster of devices. The devices can use a low-power local area network to communicate with the router, which is connected to the Internet by cable or fiber.

When a hub needs to contact a device to send it a command, several options allow the device to avoid wasting power keeping its radio on. The device can wake up regularly to poll the router and get the command in the router’s response. Alternatively, the router can wake the device when necessary.
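
As a concrete (if simplified) illustration of the polling option, here is a sketch of a duty-cycled device loop. It is my own example rather than anything presented at the festival, and the router address, port, and message format are invented placeholders:

```python
# Sketch of a duty-cycled poll loop: the radio is "on" only for a brief
# listen window per cycle. The router address, port, and payload format
# below are invented for illustration.
import socket
import time

ROUTER = ("192.168.1.1", 5683)   # hypothetical local router/hub
POLL_INTERVAL_S = 30             # sleep (radio off) between polls
LISTEN_WINDOW_S = 0.5            # how long the radio stays on per poll

def poll_once():
    """One wake/poll/sleep cycle: ask the router whether a command is queued."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(LISTEN_WINDOW_S)
    try:
        sock.sendto(b"POLL device-42", ROUTER)
        command, _ = sock.recvfrom(256)   # router piggybacks any pending command
        return command
    except socket.timeout:
        return None                       # nothing queued; go back to sleep
    finally:
        sock.close()                      # stand-in for powering the radio down

while True:
    cmd = poll_once()
    if cmd:
        print("executing", cmd)
    time.sleep(POLL_INTERVAL_S)           # deep sleep between polls
```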

Government and the IoT platform

One theme coming out of State of the Union addresses and other White House communications is the role of innovation in raising living standards. As I’ve mentioned, there was a bit of discussion at the conference about jobs and what IoT might do to them. Job creation is one of the four goals of the SmartAmerica Challenge run by the Presidential Innovation Fellows program. The other three are saving lives, creating new business opportunities, and improving the economy.

I was quite disappointed that climate change and other environmental protections were left off the SmartAmerica list. Although I applaud the four goals, I noticed that each pleases a powerful lobbying constituency (the health industry, chambers of commerce, businesses, and unions). The environment is the world’s most urgent challenge, and one that millions of people care about, but big money isn’t behind the cause.

Two people presenting the SmartAmerica Challenge suggested that most of the technologies enabling IoT have been developed already. What we need are open and easy-to-use architectures, and open standards that turn each layer of the system into a platform on which others are free to innovate. The call for open standards–which can also create open markets–also came from Bill Curtis of ARM.

Another leading developer in the computer field, Bob Frankston, advised us to “look for resources, not solutions.” I take this to mean we can create flexible hardware and software infrastructure that many innovators can use as platforms for small, nimble solutions. Frankston’s philosophy, as he put it to me later, is “removing the barriers to rapid evolution and experimentation, thus removing the need for elaborate scenarios.”

Bob Frankston speaking (Credit: IoT Festival)

Dr. Johnson claimed that government agencies can’t share a lot of their applications for their data because of security concerns, and predicted that each time a tier of government data is published, it will be greeted by an explosion of innovation from the private sector.

However, government can play another useful role. One speaker pointed out that it can guide the industry to use standards by requiring those standards in its own procurements.

Standards we should all learn

Both Almholt and Curtis laid out protocol stacks for the IoT, with a lot of overlap. Both agreed that many Internet protocols in widespread use were inappropriate for IoT. For instance, connections on the IoT tend to be too noisy and unreliable for TCP to work well. Another complication is that an 802.15.4 packet has a maximum size of 127 bytes, and an IPv6 header (IPv6 being the addressing system of choice for mobile devices) takes up at least 40 of them. Clever reuse of data between packets can reduce this overhead.
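
To make that squeeze concrete, here is a back-of-the-envelope byte budget for a single 802.15.4 frame. The arithmetic is mine, not the speakers’: the 25-byte MAC overhead is a typical worst case, and the compressed-header figure is an optimistic 6LoWPAN-style best case, shown only to illustrate why header compression matters:

```python
# Back-of-the-envelope 802.15.4 payload budget (illustrative figures).
FRAME_MAX = 127        # maximum 802.15.4 frame size, in bytes
MAC_OVERHEAD = 25      # typical worst-case MAC header and footer
IPV6_HEADER = 40       # uncompressed IPv6 header
UDP_HEADER = 8         # uncompressed UDP header

app_bytes_uncompressed = FRAME_MAX - MAC_OVERHEAD - IPV6_HEADER - UDP_HEADER
print("application payload, uncompressed headers:", app_bytes_uncompressed)  # 54

# 6LoWPAN-style compression elides fields that repeat from packet to packet
# (version, prefixes the link already knows, ports from a small well-known
# range), shrinking IPv6 + UDP to a few bytes in the best case.
COMPRESSED_IPV6_UDP = 6   # optimistic best case, for illustration only
app_bytes_compressed = FRAME_MAX - MAC_OVERHEAD - COMPRESSED_IPV6_UDP
print("application payload, compressed headers:", app_bytes_compressed)      # 96
```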

Curtis said that, because most Internet standards are too complex for the constrained devices in the IoT, these devices tend to run proprietary protocols, creating data silos. He also called for devices that use less than 10 milliwatts of power. Almholt said that devices will start harvesting electricity from light, vibration, and thermal sources, thus doing without batteries altogether and running for long periods without intervention.

Some of the protocols that show promise for the IoT include:

  • 6LoWPAN or successors for network-layer transmission.

  • The Constrained Application Protocol (CoAP) for RESTful message exchange. By running over UDP instead of TCP and using binary headers instead of ASCII, this protocol achieves most of the benefits of HTTP but is more appropriate for small devices. (A sense of how compact its framing is appears in the sketch after this list.)

  • The Time Synchronized Mesh Protocol (TSMP) for coordinating data transmissions in an environment where devices contend for wireless bandwidth, thus letting devices shut down their radios and save power when they are not transmitting.

  • RPL for defining and changing routes among devices. This protocol sacrifices speed for robustness.

  • DTLS (Datagram TLS), a version of the TLS security standard that runs over UDP.
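
To give a feel for how lean CoAP’s binary framing is compared with HTTP’s text headers, the following sketch hand-packs the fixed four-byte CoAP header plus a single Uri-Path option for a GET request (field layout per RFC 7252; the message ID, token, and “temp” resource name are made up):

```python
# Sketch only: hand-packing the fixed CoAP header plus one Uri-Path option,
# to show how a full request fits in a handful of bytes.
import struct

VERSION, TYPE_CON, GET = 1, 0, 1          # confirmable GET request
token = b"\x7a"                            # 1-byte token, arbitrary value
message_id = 0x1234

header = struct.pack(
    "!BBH",
    (VERSION << 6) | (TYPE_CON << 4) | len(token),  # ver | type | token length
    GET,                                            # request code 0.01 = GET
    message_id,
)

# One Uri-Path option: option delta 11, length 4, value "temp"
uri_path = bytes([(11 << 4) | 4]) + b"temp"

request = header + token + uri_path
print(len(request), "bytes on the wire:", request.hex())   # 10 bytes total
```

The whole request comes to ten bytes; an equivalent HTTP request line plus headers would typically run to a few hundred.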

Adoption of these standards would lead to a communication chasm between the protocol stack used within the LAN and the one used with the larger outside world. Therefore, edge devices would be devoted to translating between the two networks. One speaker suggested that many devices will not be provided with IP addresses, for security reasons.
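
Here is a minimal sketch of what such a translating edge device might do, assuming the constrained side speaks a terse UDP format and the Internet side expects ordinary HTTPS; the port, payload shape, and cloud URL are placeholders of my own:

```python
# Sketch of an edge "translator": sensor readings arrive as tiny UDP
# datagrams on the constrained side and are re-published as ordinary
# web requests on the Internet side.
import json
import socket
import urllib.request

LOCAL_PORT = 5683                                  # constrained-network side
CLOUD_URL = "https://example.com/ingest"           # placeholder endpoint

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LOCAL_PORT))

while True:
    datagram, (sensor_ip, _) = sock.recvfrom(256)
    # Translate the terse local payload (e.g. b"t=21.5") into web-friendly JSON.
    body = json.dumps({"sensor": sensor_ip, "raw": datagram.decode(errors="replace")})
    req = urllib.request.Request(
        CLOUD_URL,
        data=body.encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=5)     # Internet-side hop
    except OSError as exc:
        print("upload failed, will retry on next reading:", exc)
```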

Challenges that remain

Research with impacts on the IoT is flourishing. I’ll finish this article with a potpourri of problems and how people are addressing them:

  • Gettys reported on the outrageous state of device security. In Brazil, an enormous botnet has been found running just on the consumer hubs that people install in their homes and businesses for Internet access. He recommended that manufacturers adopt OpenWRT firmware to protect devices, and that technically knowledgeable individuals install it themselves where possible.

  • Although talking over the Internet is a big advance for devices, talking to each other over local networks at the radio layer (using, for instance, the 802.15.4 standards) is more efficient and enables more rapid responses to the events they monitor. Unfortunately, according to Almholt, interoperability is a decade away.

  • Mobility can cause us to lose context. A recent day-to-day example involves the 911 emergency call system, which is supposed to report the caller’s location to the responder. When people gave up their landlines and called 911 from cell phones, it became much harder to pinpoint where they were. The same thing happens in a factory or hospital when a staffer logs in from a mobile device.

Whether we want the IoT to turn on devices while we sit on the couch (not necessarily an indulgence for the lazy, but a way to let the elderly and disabled live more independently) or help solve the world’s most pressing problems in energy and water management, we need to take the societal and technological steps to solve these problems. Then let the fun begin.


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter — and to learn more about the Solid Conference coming to San Francisco in May, visit the Solid website.

February 25 2014

The technology and jobs debate raises complex questions

Editor’s note: Doug Hill and I recently had a conversation here on Radar about the impact of automation on jobs. In one of our exchanges, Doug mentioned a piece by James Bessen. James reached out to me and was kind enough to provide a response. What follows is their exchange.


JAMES BESSEN: I agree, Doug, that we cannot dismiss the concerns that technology might cause massive unemployment just because technology did not do this in the past. However, in the past, people also predicted that machines would cause mass unemployment. It might be helpful to understand why this didn’t happen in the past and to ask if anything today is fundamentally different.

Many people make a simple argument: 1) they observe that machines can perform job tasks and 2) they conclude that therefore humans will lose their jobs. I argued that this logic is too simple. During the 19th century, machines took more than 98% of the labor needed to weave a yard of cloth. But the number of weavers actually grew because the lower price of cloth increased demand. This is why, contrary to Marx, machines did not create mass unemployment.

Is something like that happening today? Yes, in some areas of white collar automation. The ATMs did not eliminate bank tellers, word processing software did not eliminate secretaries, etc. As I explain, ATMs increased the demand for bank tellers. In other cases, the work shifts to new occupations: there are fewer switchboard operators, but more receptionists; there are fewer typographers, but more graphic designers. As a whole, the number of jobs in office and sales occupations using computers has grown, not shrunk.

But that is not the whole story. First, new jobs require new skills. This was true in the past, too. I argued that the textile mills also hired workers who were adept at learning new skills and that they invested significantly in worker learning on the job. You cite Dublin, but my research has corrected Dublin on this point (see Bessen, James (2003), “Technology and Learning by Factory Workers: The Stretch-Out at Lowell, 1842,” Journal of Economic History, 63, pp. 33-64, and also “Was Mechanization De-skilling?”). Moreover, my view is that the mill owners sought educated workers because it was in their business interests to do so, not for philanthropic reasons. And the mill girls saw the educational amenities of Lowell as an important attraction. For example, Lucy Larcom, who later taught college, compared the early years at Lowell to women’s colleges later in the century. I believe skills are also important for the new jobs being created today, and the difficulty of developing new skills and new labor markets is one reason wages are stagnant.

Second, the effect of technology in office occupations is very different from what is happening today in manufacturing occupations. As I noted, technology worked to reduce textile jobs during the 20th century, and for the last decade the number of manufacturing jobs has been shrinking. More than technology is at work here, but technology is not creating jobs on net in manufacturing. So, yes, there are two sides to technology’s effect on jobs, but it is far from obvious that the effect of technology is to eliminate jobs overall.

DOUG HILL: Thank you, James, for your response to the conversation Jim Stogdill and I have had over the last few weeks on the relationship between automation and jobs. I’ve read the two papers you referred to in your email, and they were quite helpful in fleshing out points you made in your article in the Washington Post. My conviction that your arguments overlook some crucial questions, however, hasn’t changed.

Two points and a quibble to mention:

  1. The quibble: Contrary to what you suggest in your email, I never claimed it’s “obvious” that the effect of technology is to eliminate jobs overall. To the contrary, I specifically said that technology can be counted on to cut both ways, and that we don’t know what will happen as automation evolves.
  2. All skills aren’t created equal. The gist of your argument is that new technologies require new skills and that having those new skills creates new opportunities for employment. Your example is the growth of textile manufacturing in Massachusetts over the course of the 19th century, an industry in which women workers played a substantial role. You say that the owners of the Massachusetts mills made significant investments in “human capital,” that is, they trained the women they hired so they acquired new skills needed in the mills. You never fully explain, though, exactly what those new skills were. Rather, you use increases in employee productivity (and some fancy equations) as prima facie evidence that new skills were acquired. This is looking at the situation from management’s perspective; it tells us nothing about what life was like for the women actually doing the work. That’s an important omission, I think, both in this specific case and in the general discussion regarding automation and jobs. By far, the bulk of that discussion has concerned the jobs automation might eliminate. What isn’t talked about nearly as much is the quality of jobs automation might leave behind.

    The fundamental method of industrial production is the breaking down of a given process into a series of discrete, repetitive tasks. This doesn’t necessarily mean they’re easy tasks, but by definition, they’re not complex. It’s an efficient way to make a commodity but not an especially satisfying way to make a living. I think of the worker Jim Stogdill met at the Ford electronics plant, who stood at his work station and shoved a capacitor into a circuit board all day. He seemed satisfied with the job, but the vast literature on the degradation workers experience on assembly lines (not to mention the suicides at Foxconn) shows that many, many workers feel otherwise, even when they’re well paid. See, for example, Ely Chinoy’s Automobile Workers and the American Dream.

    Your analysis shows what “skill” often means where automation is concerned. The gains in productivity you celebrate came mainly from management’s demands that mill workers steadily increase their work load. Where they once tended two looms, “stretch-outs” required them to tend three, then four. “Speed-ups” periodically accelerated the speed of the machines as well. Experienced workers learned to handle the pace. At what cost, you don’t say.

    In one of his earlier emails, Stogdill cited the famous I Love Lucy episode where Lucy finds herself struggling to keep up with the conveyer belt in a chocolate factory. As funny as that bit is, it’s also evocative of the sorts of pressures faced by workers the world over who struggle to keep pace with machines. And, as I mentioned in my earlier email, getting more from less is a philosophy that isn’t limited to factories. A wide variety of automation techniques are widely and aggressively used to boost productivity in the knowledge and service industries as well.

  3. If it all depends on demand, we’re screwed. Increasing productivity in the Massachusetts mills meant it took fewer employees to produce the same amount of cloth, or more. Yet the number of jobs the mills provided steadily increased, as did the pay earned by the workers, at least for a while. Why? Because greater efficiencies of production and increased competition lowered the price of cloth, and demand for cheap cloth products grew. “This is why,” you say, “contrary to Marx, machines did not create mass unemployment.” The problem with this, as also mentioned in my earlier email, is that we’ve reached a point where we can no longer afford to base our national and global economies on an ever-increasing demand for an ever-increasing supply of consumables. That model is well on its way to killing the planet. Another model must be found, and quickly. I’m hoping some of those seeking to exploit the potential of automation might be taking that into consideration.

BESSEN: You raise some important issues about work satisfaction/alienation and environmental sustainability. I fail to see how the ATM machines have “degraded” bank tellers, subjected them to unwarranted “speed up” and are killing the planet. Nevertheless, my points are simply limited to the topic at hand, namely, whether technology creates more jobs than it destroys.

HILL: Here is what the Bureau of Labor Statistics currently says about bank tellers:

Job Outlook:

Employment of tellers is projected to show little or no change from 2012 to 2022.

Past job growth for tellers was driven by a rapid expansion of bank branches, where most tellers work. However, the growth of bank branches is expected to slow because of both changes to the industry and the abundance of banks in certain areas.

In addition, online and mobile banking allows customers to handle many transactions traditionally handled by tellers. As more people use online banking, fewer bank customers will visit the teller window. This will result in decreased demand for tellers. Some banks also are developing systems that allow customers to interact with tellers through webcams at ATMs. This technology will allow tellers to service a greater number of customers from one location, reducing the number of tellers needed for each bank.

The Bureau adds that tellers make an average of $11.99 an hour. A high school diploma and “short-term, on-the-job training” are required. Job prospects are expected to be excellent “because many workers leave this occupation.”

Number of bank tellers employed nationally, 2010: 556,310

Number of bank tellers employed nationally, 2012: 541,770

Jim Stogdill wrap-up

I want to thank James and Doug for engaging in this question and giving us the opportunity to publish it. However, if you read this carefully you’ll see that James and Doug might not have been having the same argument. Last week at Strata in Santa Clara, I moderated an Oxford Style debate on the proposition that automation creates more jobs than it destroys, and the same thing happened there. Our two teams of debaters squared off and promptly began arguing different questions. I hope to post some material from that discussion soon. It was super thought-provoking.

It’s not hard to see why this happens. It’s a complicated topic, really complicated in fact. Economists are the most unfortunate of scientists in that they are usually trying to answer some of the most complex questions we can pose with only the most rudimentary mathematical approximations to support their task.

When we ask the question, “does automation create more jobs than it destroys?” we might as well ask, “Are you an optimist or a pessimist?” Given the nearly unanswerable complexity of the question, it becomes a stand-in for a question of optimism.

But there’s another more fundamental problem, and this is where we start talking past each other. The question as posed is based on an assumption of scarcity. But if you think in terms of growing and/or future abundance, the question becomes either less important or moot. This is how the nano-techy singularists see it. “Lack of jobs isn’t the problem — figuring out what to do with your free time when everything is provided by your fabber is.”

We certainly haven’t answered any of these questions here, nor did we expect to. That was never my point in asking them either here or in the debate at Strata. I hope we have gotten a few people talking about it, though.

I’ve been dismayed by the “religion of disruptionism” in our industry. Creative destruction is how it works, but it seems like sometimes we are attracted to the destruction part of it for destruction’s sake. There is a whiff of schadenfreude in the valley’s approach to innovation that I find a bit sad.

There was a telling moment in our debate at Santa Clara when the “con” team looked out at the audience and said something like, “None of you need to worry about losing your jobs to automation; we’ve cleanly distinguished between the populations that represent the creative and destructive sides of Schumpeterian economics.”

In the end, we don’t know if history repeats itself forever and the creatively displaced can be retrained and find new jobs (or if it repeats itself and they can’t), but we do know that the pace of change matters: an economy and a society can only absorb change so quickly, and that pace can be affected by policy.

“Let them eat cake” isn’t policy.


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter — and to learn more about the Solid Conference coming to San Francisco in May, visit the Solid website.

February 22 2014

The Industrial IoT isn’t the same as the consumer IoT

Reading Kipp Bradford’s recent article, The Industrial Internet of Things: The opportunity no one’s talking about, got me thinking about commonly held misconceptions about what the Industrial Internet of Things (IIoT) is — as well as what it’s not.

Misconception 1: The IIoT is the same as the consumer Internet of Things (IoT), except it’s located on a factory floor somewhere.

This misconception is easy to understand, given that both the IIoT and the consumer IoT have that “Internet of Things” term in common. Yes, the IIoT includes devices located in industrial settings: maybe a factory floor, or perhaps as part of a high-speed train system, or inside a hotel or restaurant, or a municipal lighting system, or within the energy grid itself.

But the industrial IoT has far more stringent requirements than the consumer IoT, including the need for no-compromise control, rock-solid security, unfailing reliability even in harsh (extremely hot or cold, dusty, humid, noisy, inconvenient) environments, and the ability to operate with little or no human intervention. And unlike more recently designed consumer-level devices, many of the billion or so industrial devices already operating on existing networks were put in place to withstand the test of time, often measured in decades.

The differences between the IIoT and IoT are not just a matter of slight degree or semantics. If your Fitbit or Nest device fails, it might be inconvenient. But if a train braking system fails, it could be a matter of life and death.

Misconception 2: The IIoT is always about, as Bradford describes it, devices that “push and pull status and command information from the networked world.”

The consumer IoT is synonymous with functions that affect human-perceived comfort, security, and efficiency. In contrast, many industrial networks have basic operating roles and needs that do not require, and in fact are not helped by, human intervention. Think of operations that must happen too quickly — or too reliably, or too frequently, or from too harsh or remote an environment — to make it practical to “push and pull status and command information” to or from any kind of centralized anything, be it an Internet server or the cloud.

A major goal for the IIoT, then, must be to help autonomous communities of devices to operate more effectively, peer to peer, without relying on exchanging data beyond their communities. One challenge now is that various communications protocols have been established for specific types of industrial devices (e.g., for building automation, or lighting, or transportation) that are incompatible with one another. Internet Protocol (IP) could serve as a unifying communications pathway within industrial communities of devices, improving peer-to-peer communications.
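
As a rough sketch of what IP-based peer-to-peer communication inside such a community could look like (my illustration, not from the article; the multicast group, port, and JSON status format are arbitrary choices), devices can announce their state to local peers without any central server or cloud hop:

```python
# Minimal sketch: two roles on one site-local IP multicast group, showing
# plain IP carrying peer-to-peer status messages among devices with no
# central server involved.
import json
import socket
import struct
import sys
import time

GROUP, PORT = "239.12.12.12", 5007   # arbitrary site-local multicast group

def announce(device_id):
    """Periodically multicast a small status report to local peers."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", 1))
    while True:
        report = json.dumps({"id": device_id, "temp_c": 21.5, "ts": time.time()})
        sock.sendto(report.encode(), (GROUP, PORT))
        time.sleep(5)

def listen():
    """Receive peers' reports; a real device would act on them locally."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, addr = sock.recvfrom(1024)
        print(addr[0], json.loads(data))

if __name__ == "__main__":
    announce(sys.argv[2]) if sys.argv[1] == "announce" else listen()
```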

Misconception 3: The problem is that industrial device owners aren’t interested in, or actively resist, connecting our smart devices together.

If IP were extended all the way to industrial devices, another advantage is that they could also participate, as appropriate, beyond peer-to-peer communities. Individually, industrial devices generate the “small data” that, in the aggregate, combines to become the “big data” used for IoT analytics and intelligent control. IIoT devices that are IP-enabled could retain their ability to operate without human intervention, yet still receive input or provide small-data output via the IoT.

But unlike most consumer IoT scenarios, which involve digital devices that already have IP support built in or that can be IP enabled easily, typical IIoT scenarios involve pre-IP legacy devices. And unfortunately, IP enablement isn’t free. Industrial device owners need a direct economic benefit to justify IP enabling their non-IP devices. Alternatively, they need a way to gain the benefits of IP without giving up their investments in their existing industrial devices — that is, without stranding these valuable industrial assets.

Rather than seeing industrial device owners as barriers to progress, we should be looking for ways to help industrial devices become as connected as appropriate — for example, for improved peer-to-peer operation and to contribute their important small data to the larger big-data picture of the IoT.

So, what is the real IIoT opportunity no one’s talking about?

The real opportunity of the IIoT is not to pretend that it’s the same as the IoT, but rather to provide industrial device networks with an affordable and easy migration path to IP.

This approach is not an “excuse for trying to patch up, or hide behind, outdated technology” that Bradford talks about. Rather, it’s a chance to offer multi-protocol, multimedia solutions that recognize and embrace the special considerations and, yes, constraints of the industrial world — which include myriad existing protocols, devices installed for their reliability and longevity, the need for both wired and wireless connections in some environments, and so on. This approach will build bridges to the IIoT, so that any given community of devices can achieve its full potential — the IzoT platform I’ve been working on at Echelon aims to accomplish this.


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter — and to learn more about the Solid Conference coming to San Francisco in May, visit the Solid website.

Slo-mo for the masses

Edgertronic version evolution, from initial prototype A through production-ready prototype C.

The connectivity of everything isn’t just about objects talking to each other via the Internet. It’s also about the accelerating democratization of formerly elite technology. Yes, it’s about putting powerful devices in touch with each other — but it’s also about putting powerful devices within the grasp of anyone who wants them. Case in point: the Edgertronic camera.

All kids cultivate particular interests growing up. For Michael Matter, Edgertronic’s co-inventor, it was high-speed flash photography. He got hooked when his father brought home a book of high-speed images taken by Harold Eugene “Doc” Edgerton, the legendary MIT professor of electrical engineering who pioneered stroboscopic photography.

“It was amazing stuff,” Matter recalls, “and I wanted to figure out how he did it. No, more than that I wanted to do it. I wanted to make those images. But then I looked into the cost of equipment, and it was thousands of dollars. I was 13 or 14, this was the 1970s, and my budget was pretty well restricted to my allowance.”

But Matter had a highly technical bent, and he was undaunted.

“I bought an off-the-shelf camera flash and put in a smaller value capacitor,” he recalls. “That allowed me to reduce the flash period from a millisecond to 20 or 30 microseconds. I had lower light output, of course, but I knew I could work with that. I got those short bursts I needed.”

Matter’s dad was an avid golfer, and he asked his son to take some shots of him teeing off — or rather, shots of the face of his trusty driver at the precise moment of ball impact. Matter took a spring from an old tape measure and jury-rigged a trigger for the flash.

“I’d put it ever-so-slightly in front of the ball,” he says, “and it worked pretty well. It’d stand up to four or five hits. We got some good images of the ball distorting, morphing into a kind of egg shape, on impact.”

Matter later matriculated at MIT, and he sent in images of those golf balls to Edgerton, along with a fan letter. He ended up taking a class from Edgerton, and his admiration for the idiosyncratic engineer only grew.

“He was the quintessential maker,” Matter says of his mentor. “Almost everything in his lab was slightly radioactive from all his field work photographing atom bomb and hydrogen bomb tests. He’d look at a problem and come up with a solution. Sometimes his solutions were crude and ugly, but they worked. That was part of his philosophy — better to have something crude and ugly that works than something elegant and expensive that doesn’t work as well.”

Edgerton had a camera capable of taking sub-microsecond exposures, but it was extremely bulky, and he let Matter tinker with it.

“It had huge vacuum tubes, and it had a notorious reputation around the lab for unreliability,” Matter said. “I took it apart and was able to make some improvements on it. It worked better.”

And did Edgerton praise his protégé? Not at all.

“That wasn’t his style,” Matter said, “and I admired him for it. If you were in his lab, a certain level of competence was assumed.  That was better than praise.”

Matter spent the next 30 years designing electronic devices, including servers, computers, and consumer products. And he was good at what he did; he and business partner Juan Pineda were principal designers for the Apple Powerbook 500 series. But he never lost interest in high-speed photography, and he remained frustrated by the stratospherically high price of the equipment. Top-end high-speed cameras can cost $100,000 to $500,000 — even “cheap” models register in the mid five figures. But his conversations with Pineda inspired him to think about the technology in a highly disruptive way.

“Early in my work with Juan, he made an observation to me,” Matter recalls. “We were working on a mid-performance, low-price project at Apollo Computers. Another group we were talking with was focused on high-performance, high-end systems. Juan said, ‘You know, anybody can do a no-holds-barred design when money is no object. Sooner or later, you’ll get what you want. But the challenge is when you’re limited by expense. That’s when skill comes in.’”

That idea — relatively high performance at a low price point — stuck with Matter, and fused with his predilections for  deep tech tinkering. Ultimately, he was driven back to his childhood hobby. With Pineda, he spent two years skunk-working on high-speed videography. The Edgertronic was the result of all their brainstorming. At about $5,500, it is well within the means of anyone seriously interested in photography; professional-grade digital SLR cameras, after all, can sell for $10,000. And the fact that the Edgertronic runs Linux and has an Ethernet port makes it an effective liaison between the physical and virtual worlds.

Like all digital video cameras, the Edgertronic operates in a pretty straightforward fashion: light travels through a lens to a sensor, and from there is transmitted to a processor where images are translated as digital information, then stored in a memory component.

That said, the Edgertronic isn’t just another video camera. At the relatively low pixel resolution of 192×96, it can operate in excess of 17,000 frames per second (fps). At the maximum resolution of 1280×1024, the camera processes 500 fps — still impressive, given that standard cinema runs at 24 fps. In an evaluation of the Edgertronic on tested.com, TV’s Mythbusters website, Norman Chan opined that the Edgertronic’s “sweet spot” was 640×360 resolution at about 2,000 fps. In his review, Chan posted a clip taken by an Edgertronic of a hammer smashing a light bulb at 5,000 fps. The video is crisp, and the glass shards fly about in a slow, stately, and almost balletic manner:

In other words, the Edgertronic delivers on its promise: good quality high-speed video on a working person’s budget. How did they do it? Matter isn’t evasive, exactly, but he is loath to discuss specifics. He emphasized the Edgertronic did not involve the “bleeding edge” of engineering — the ne plus ultra of videographic technology. Rather, he said, he and Pineda used existing components in new ways.

“We’ve done it before,” he observes. “For example, we were once working on a graphics processing project, and we needed a chip that could do several multiplies per second. Well, there wasn’t a chip like that out there made specifically for graphics processing. But Texas Instruments did have a $40 speech recognition chip that could do five multiplies a second. Some people were using it for music synthesizers or mixing studios. Texas Instruments never envisioned it for work stations or graphics processing. But we did. And it worked.”

In his review, Chan reveals that Edgertronic uses an off-the-shelf 18x14mm CMOS sensor. While not dirt cheap — it accounts for about half the price of the camera — it costs exponentially less than any custom-designed sensor.

In any event, cheap sensors alone do not a functional $5,000 high-speed video camera make. As Matter intimates, the proprietary leavening in this particular technological loaf is engineering. But what about applications? Sure, you can record bullet flights in your home basement shooting range. But not many of us combine doomsday prep philosophy with an interest in high-speed photography.

“Our camera isn’t going to be used for slo-mo replays in the World Series or the Superbowl,” says Matter. “That’s not our intent. Obviously, it has application in the arts and filmmaking, and will appeal to amateurs and hobbyists for their personal projects. But we think industry and research will account for most of our orders.”

One example: production lines. Say somewhere in the bowels of your line, a widget isn’t being stuck to a doo-hickey in the most felicitous way possible. Some high-speed footage of the assembly process could tell you what’s wrong. But standard high-speed cameras are rather bulky, and most are not amenable to sticking into tight places. Plus, once again, they’re extremely pricey. At several hundred thousand dollars a pop, it could well make more sense to eschew the camera and troubleshoot the line by tearing it apart. Enter the Edgertronic.

“Unlike other high-speed cameras, ours are compact, no larger than an SLR camera,” says Matter, “so you can put them almost anywhere. Also, price won’t be an issue for even small companies. The same with research facilities. Say a small lab is fine-tuning a new industrial process. A high-speed camera could really help with that, but you couldn’t justify a $500,000 outlay. We think the Edgertronic will change that dynamic.”

Matter and Pineda thus promote their camera as a basic research and investigative tool. Edgertronics already have been purchased by researchers working on fluid dynamics, particle systems, and animal locomotion; by high schools and colleges for various subjects; and by labs conducting product torture testing.

“We think of it as a microscope for time,” says Matter. “We just want to help people see how things really work. When you slow things down, you see they often don’t function anything like you supposed. Technically-minded people get that. Our first batch of 60 cameras sold out quickly — one guy is on his third camera. We’re doubling up for our next production run. We’ve identified a need, and now we’re filling it.”


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter — and to learn more about the Solid Conference coming to San Francisco in May, visit the Solid website.

February 21 2014

Architecting the connected world

In the first two posts in this series, I examined the connection between information architecture and user interface design, and then looked closely at the opportunities and constraints inherent in information architecture as we’ve learned to practice it on the web. Here, I outline strategies and tactics that may help us think through these constraints and that may in turn help us lay the groundwork for an information architecture practice tailored and responsive to our increasingly connected physical environments.

The challenge at hand

NYU Media, Culture, and Communication professor Alexander Galloway examines the cultural and political impact of the Internet by tracing its history back to its postwar origins. Built on a distributed information model, “the Internet can survive [nuclear] attacks not because it is stronger than the opposition, but precisely because it is weaker. The Internet has a different diagram than a nuclear attack does; it is in a different shape.” For Galloway, the global distributed network trumped the most powerful weapon built to date not by being more powerful, but by usurping its assumptions about where power lies.

This “differently shaped” foundation is what drives many of the design challenges I’ve raised concerning the Internet of Things, the Industrial Internet and our otherwise connected physical environments. It is also what creates such depth of possibility. In order to accommodate this shape, we will need to take a wider look at the emergent shape of the world, and the way we interact with it. This means reaching out into other disciplines and branches of knowledge (e.g. in the humanities and physical sciences) for inspiration and to seek out new ways to create and communicate meaning.

The discipline I’ve leaned on most heavily in these last few posts is linguistics. From these explorations, a few tactics for information architecture for connected environments emerge: leveraging multiple modalities, practicing soft assembly, and embracing ambiguity.

Leverage multiple modalities

In my previous post, I examined the nature of writing as it relates to information architecture for the web. As a mode of communication – and of meaning making – writing is incredibly powerful, but that power does come with limitations. Language’s linearity is one of these limitations; its symbolic modality is another.

In linguistics, a sign’s “modality” refers to the nature of the relationship between a signifier and a signified, between the name of a thing and the thing itself. In Ferdinand de Saussure’s classic example (from the Course in General Linguistics), the thing “tree” is associated with the word “tree” only by arbitrary convention.

This symbolic modality, though by far the most common modality in language, isn’t the only way we make sense of the world. The philosopher and semiotician Charles Sanders Peirce identified two additional modes that offer alternatives to the arbitrary sign of spoken language: the iconic and the indexical modes.

Indexical modes link signifier and signified by concept association. Smoke signifies fire, thunder signifies a storm, footprints signify a past presence. The signifier “points” to the signified (yes, the reference is related to our index finger). Iconic modes, on the other hand, link signifier and signified by resemblance or imitation of salient features. A circle with two dots and lines in the right places signifies the human face to humans – our perceptual apparatus is tuned to recognize this pattern as a face (a pre-programmed disposition which easily becomes a source of amusement when comparing the power outlets of various regions).

Before you write off this scrutiny of modes as esoteric navel-gazing – or juvenile visual punning – let’s take an example many of us come into contact with daily: the Apple MacBook trackpad. Prior to the Mountain Lion operating system, moving your finger “down” the trackpad (i.e. from the space bar toward the edge of the computer body) caused a page on screen to scroll down – which is to say, the elements you perceived on the screen moved up (so you could see what had been concealed “below” the bottom edge of the screen).

The root of this spatial metaphor is enlightening: if you think back a few years to the scroll wheel mouse (or perhaps you’re still using one?), you can see where the metaphor originates. When you scroll the wheel toward you, it is as if the little wheel is in contact with a sheet of paper sitting underneath the mouse. You move the wheel toward you, the wheel moves the paper up. It is an easy mechanical model that we can understand based on our embodied relationship to physical things.

This model was translated as-is to the first trackpads — until another kinetic model replaced it: the touchscreen. On a touchscreen, you’re no longer mediating movement with a little wheel; you’re touching it and moving it around directly. Once this model became sufficiently widespread, it became readily accessible to users of other devices – such as the trackpad – and, eventually, became the default trackpad setting for Apple devices.

Left: Danish electrical plugs, on Wikimedia Commons. Right: US electrical plugs, photo by Andy Fitzgerald.

Here we can see different modes at play. The trackpad isn’t strictly symbolic, nor is it iconic. Its relationship to the action it accomplishes is inferred by our embodied understanding of the physical world. This is signification in the indexical mode.

This thought exercise isn’t purely academic: by basing these kinds of interactions on mechanical models that we understand in a physical and embodied way (we know in our bodies what it is like to touch and move a thing with our fingers), we root them in a symbolic experience that is more deeply rooted in our understanding of the physical world. In the case of iconic signification, for example in recognizing faces in power outlets (and even dispositions – compare, for instance, the plug shapes of North America and Denmark), our meaning making is even more deeply rooted in our innate perceptual apparatus.

Practice soft assembly

Attention to modalities is important because the more deeply a pattern or behavior is rooted in embodied experience, the more durable it is – and the more reliably it can be leveraged in design. Stewart Brand’s concept of “pace layers” helps to explain why this is the case. Brand argues in How Buildings Learn that the structures that make up our experienced environment don’t all change at the same rate. Nature is very slow to change; culture changes more quickly. Commerce changes more quickly than the layers below it, and technology changes faster than all of these. Fashion, relative to these other layers, changes most quickly of all.

Illustration: Pace layers, by Preston Grubbs, illustrator at Deloitte Digital.

When we discuss meaning making, our pace layer model doesn’t describe rate of change; it describes depth of embodiment. Remembering the “ctrl-z to undo” function is a learned symbolic (hence arbitrary) association. Touching an icon on a touchscreen (or, indexically, on a trackpad) and sliding it up is an associative embodied action. If we get these actions right, we’ve tapped into a level of meaning making that is rooted in one of the slowest of the physical pace layers: the fundamental way in which we perceive our natural world.

This has obvious advantages to usability. Intuitive interfaces are instantly useable because they capitalize on the operational knowledge we already have. When an interface is completely new, in order to be intuitive it must “borrow” knowledge from another sphere of experience (what Donald Norman in The Psychology of Everyday Things refers to as “knowledge in the world”). The touchscreen interface popularized by the iPhone is a ready example of this. More recently, Nest Protect’s “wave to hush” interaction provides an example that builds on learned physical interactions (waving smoke away from a smoke detector to try to shut it up) in an entirely new but instantly comprehensible way.

An additional and often overlooked advantage of tapping into deep layers of meaning making is that by leveraging more embodied associations, we’re able to design for systems that fit together loosely and in a natural way. By “natural” here, I mean associations that don’t need to be overwritten by an arbitrary, symbolic association in order to signify; associations that are rooted in our experience of the world and in our innate perceptual abilities. Our models become more stable and, ironically, more fluid at once: remapping one’s use of the trackpad is as simple as swapping out one easily accessible mental model (the wheel metaphor) for another (the touchscreen metaphor).

This loose coupling allows for structural alternatives to rigid (and often brittle) complex top-down organizational approaches. In Beyond the Brain, University of Lethbridge Psychology professor Louise Barrett uses this concept of “soft assembly” to explain how in both animals and robots “a whole variety of local control factors effectively exploit specific local (often temporary) conditions, along with the intrinsic dynamics of an animal’s body, to come up with effective behavior ‘on the fly.’” Barrett describes how soft assembly accounts for complex behavior in simple organisms (her examples include ants and pre-microprocessor robots), then extends those examples to show how complex instances of human and animal perception can likewise be explained by taking into account the fundamental constitutive elements of perception.

For those of us tasked with designing the architectures and interaction models of networked physical spaces, searching hard for the most fundamental level at which an association is understood (in whatever mode it is most basically communicated), and then articulating that association in a way that exploits the intrinsic dynamics of its environment, allows us to build physical and information structures that don’t have to be held together by force, convention, or rote learning, but which fit together by the nature of their core structures.

Embrace ambiguity

Alexander Galloway claims that the Internet is a different shape than war. I have been making the argument that the Internet of Things is a different shape than textuality. The systems that comprise our increasingly connected physical environments are, for the moment, often broader than we can understand fluidly. Embracing ambiguity – embracing the possibility of not understanding exactly how the pieces fit together, but trusting them to come together because they are rooted in deep layers of perception and understanding – opens up the possibility of designing systems that surpass our expectations of them. “Soft assembling” those systems based on cognitively and perceptually deep patterns and insight ensures that our design decisions aren’t just guesses or luck. In a word, we set the stage for emergence.

Embracing ambiguity is hard. Especially if you’re doing work for someone else – and even more so if that someone else is not a fan of ambiguity (and this includes just about everyone on the client side of the design relationship). By all means, be specific whenever possible. But when you’re designing for a range of contexts, users, and uses – and especially when those design solutions are in an emerging space – specificity can artificially limit a design.

In practical terms, embracing ambiguity means searching for solutions beyond those we can easily identify with language. It means digging for the modes of meaning making that are buried in our physical interactions with the built and the natural environment. This allows us the possibility of designing for systems which fit together with themselves (even though we can’t see all the parts) and with us (even though there’s much about our own behavior we still don’t understand), potentially reshaping the way we perceive and act in our world as a result.

Architecting connected physical environments requires that we think differently about how people think and about how they understand the world we live in. Textuality is a tool – and a powerful tool – but it represents only one way of making sense of information environments. We can add additional approaches to designing for these environments by thinking beyond text and beyond language. We accomplish this by leveraging our embodied experience (and embodied understanding of the world) in our design process: a combination of considering modalities, practicing soft assembly, and embracing ambiguity will help us move beyond the limits of language and think through ways to architect physical environments that make this new context enriching and empowering.

As we’re quickly discovering, the nature of connected physical environments imbricates the digital and the analog in ways that were unimaginable a decade ago. We’re at a revolutionary information crossroads, one where our symbolic and physical worlds are coming together in an unprecedented way. Our temptation thus far has been to drive ahead with technology and to try to fit all the pieces together with the tried and true methods of literacy and engineering. Accepting that the shape of this new world is not the same as what we have known up until now does not mean we have to give up attempts to shape it to our common good. It does, however, mean that we will have to rethink our approach to “control” in light of the deep structures that form the basis of embodied human experience.


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter.

Oobleck security

I’ve been thinking (and writing) a lot lately about the intersection of hardware and software, and how standing at that crossroads does not fit neatly into our mental models of how to approach the world. Previously, there was hardware and there was software, and the two didn’t really mix. When I tried to describe my thinking to a colleague at work, the best way I could put it was that the world is becoming “oobleck,” the mixture of cornstarch and water that isn’t quite a solid but isn’t quite a liquid, named after the Dr. Seuss book Bartholomew and the Oobleck. (If you want to know how to make it, check out this video.)

One of the reasons I liked the metaphor of oobleck for the melding of hardware and software is that it can rapidly change on you when you’re not looking. It pours like a liquid, but it can “snap” like a solid. If you place the material under stress, as might happen if you set it on a speaker cone, it changes state and acts much more like a weird solid.

This “phase change” effect may also occur as we connect highly specialized embedded computers (really, “sensors”) to the Internet. As Bruce Schneier recently observed, patching embedded systems is hard. As if on cue, a security software company published a report that thousands of TVs and refrigerators may have been compromised to send spam. Embedded systems are easy to overlook from a security perspective because at the dawn of the Internet, security was relatively easy: install a firewall to sit between your network and the Internet, and carefully monitor all connections in and out of the network. Security was spliced into the protocol stack at the network layer. One of my earliest projects in understanding the network layer was working with early versions of Linux to configure a firewall with the then-new technology of address translation. As a result, my early career took place in and around network firewalls.

Most firewalls at the time worked at the network and transport layers of the protocol stack, and would operate on IP addresses and ports. A single firewall could shield an entire network of unpatched machines from attack, and in some ways, made it safe to use Windows on Internet-connected machines. Firewalls worked well in their heyday of the 1990s. Technology innovation took place inside the firewall, so the strict perimeter controls they imposed did not interfere with the use of technology.
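To make that perimeter model concrete, here is a minimal sketch — a toy stateless filter, not the syntax of any real firewall product — of the kind of decision a 1990s firewall made: match on addresses, ports, and protocol, and know nothing about what the application is actually doing.

```python
# Toy stateless packet filter in the spirit of 1990s perimeter firewalls.
# Rules match only network/transport-layer fields (addresses, ports, protocol);
# the filter has no idea what the application payload means.

from dataclasses import dataclass


@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int
    protocol: str  # "tcp" or "udp"


# (protocol, destination port) pairs allowed in from the outside world.
ALLOW_INBOUND = {("tcp", 25), ("tcp", 80)}  # mail and web only


def allow(packet: Packet, inbound: bool) -> bool:
    """Permit all outbound traffic; permit inbound traffic only to listed services."""
    if not inbound:
        return True
    return (packet.protocol, packet.dst_port) in ALLOW_INBOUND


# An inbound connection to an unpatched desktop on port 139 is simply dropped,
# even though nothing here understands *why* port 139 is risky.
print(allow(Packet("203.0.113.7", "10.0.0.12", 139, "tcp"), inbound=True))  # False
```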

By the mid-2000s, though, services like Skype and Google apps contended with firewalls as an obstacle to be traversed. (2001 even saw the publication of RFC 3093, a firewall “enhancement” protocol to allow routed IP packets through the firewall. Read it, but note the date before you dive into the details.) Staff inside the firewall needed to access services outside the firewall, and these services now included critical applications like Salesforce.com. Applications became firewall aware and started working around them. Had applications not evolved, it is likely the firewall would have throttled much of the innovation of the last decade.

To bring this back to embedded systems: what is the security model for a world with loads of sensors? (After all, remember that Bartholomew almost drowned in the Oobleck, and we’d like to avoid that fate.) As Kipp Bradford recently pointed out, when every HVAC compressor is connected to the Internet, that’s a lot of devices. By definition, real-time automation models assume continuous connectivity to gather data, and more importantly, send commands down to the infrastructure.

Many of these systems have proven to have poor security; for a recent example, consider this report from last fall about vulnerabilities in electrical and water systems. How do we add security to a sensor network, especially a network that is built around me as a person? Do I need to wear a firewall device to keep my sensors secure? Is the firewall an app that runs on my phone? (Oops, maybe I don’t want it running on my phone, given the architecture of most phone software…) Operating just at the network transport layer probably isn’t enough. Real-time data is typically transmitted using UDP, so many of the unwritten rules we have for TCP-based networking won’t apply. Besides, the firewall probably needs to operate at the API level to guard against any action I don’t want taken. What kind of security needs to be put around the data set? A strong firewall and cryptography that protects my data in flight are no good if it’s easy to tap into the data store they all report to.
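As a thought experiment rather than a design, an API-level guard would look less like port filtering and more like a per-command policy check in front of each device. Everything below — the device names, the whitelist, the function — is invented for illustration; a minimal sketch of the idea might be:

```python
# Hypothetical sketch of an API-level guard that vets commands aimed at the
# devices around me. Device names and the policy table are invented.

ALLOWED_ACTIONS = {
    "thermostat":   {"read_temperature", "set_setpoint"},
    "glucose_pump": {"read_level"},  # nothing that changes dosage is listed
}


def vet_command(device: str, action: str, authenticated: bool) -> bool:
    """Allow a command only if the caller is authenticated and the action is
    explicitly whitelisted for that class of device."""
    if not authenticated:
        return False
    return action in ALLOWED_ACTIONS.get(device, set())


print(vet_command("thermostat", "set_setpoint", authenticated=True))  # True
print(vet_command("glucose_pump", "set_dosage", authenticated=True))  # False: not whitelisted
```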

Finally, there’s the matter of user interface. Simply saying that users will have to figure it out won’t cut it. Default passwords — or even passwords as a concept — are insufficient. Does authentication have to become biometric? If so, what does that mean for registering devices the first time? As is probably apparent, I don’t have many answers yet.

Considering security with these types of deployments is like trying to deal with oobleck. It’s almost concrete, but just when you think it is solid enough to grasp, it turns into a liquid and runs through your fingers. With apologies to Shakespeare, this brave new world that has such machines in it needs a new security model. We need to defend against the classic network-based attacks of the past as well as new data-driven attacks or subversions of a distributed command-and-control system.


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter.

February 19 2014

The home automation paradox

Editor’s note: This post originally appeared on Scott Jenson’s blog, Exploring the World Beyond Mobile. This lightly edited version is republished here with permission.

The level of hype around the “Internet of Things” (IoT) is getting a bit out of control. It may be the technology that crashes into Gartner’s trough of disillusionment faster than any other. But that doesn’t mean we can’t figure things out. Quite the contrary — as the trade press collectively loses its mind over the IoT, I’m spurred on further to try to figure this out. In my mind, the biggest barrier to making the IoT work comes from us: we are being naive, and our overly simplistic understanding of how we will control the IoT is likely to fail and generate a huge consumer backlash.

But let’s back up just a bit. The Internet of Things is a vast, sprawling concept. Most people refer to just the consumer side of things: smart devices for your home and office. This is more precisely referred to as “home automation,” but to most folks, that sounds just a bit boring. Nevertheless, when some writer trots out that tired old chestnut: “My alarm clock turns on my coffee machine!”, that is home automation.

But of course, it’s much more than just coffee machines. Door locks are turning on music, moisture sensors are turning on yard sprinklers, and motion sensors are turning on lights. The entire house will flower into responsive activities, making our lives easier, more secure and even more fun.

The luddites will always crawl out and claim this is creepy or scary in the same way that answering machines were dehumanizing. Which, of course, is entirely missing the point. Just because something is more mechanical doesn’t mean it won’t have an enormous and yet human impact. My answering machine, as robotic as it may be, allows me to get critical messages from my son’s school. That is definitely not creepy.

However, I am deeply concerned that these home automation scenarios are too simplistic. As a UX designer, I know how quixotic and downright goofy humans can be. The simple rule-based “if this, then that” style scenarios trotted out are doomed to fail. Well, maybe fail is too strong a word. They won’t fail as in a “face plant into burning lava” fail. In fact, I’ll admit that they might even work 90% of the time. To many people that may seem fine, but just try using a voice recognition system with a 10% failure rate. It’s the small mistakes that will drive you crazy.

I’m reminded of one of the key learnings of the artificial intelligence (AI) community. It was called Moravec’s Paradox:

It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.

Moravec’s paradox created two types of AI problems: HardEasy and EasyHard.

HardEasy problems were assumed to be very hard to accomplish, such as playing chess. The assumption was that you’d have to replicate human cunning and experience in order to play chess well. It turns out this was completely wrong, as a simple brute force approach was able to do quite well. This was a hard problem that turned out to be (relatively) easy.

The EasyHard problem is exactly the opposite: a problem that everyone expects to be simple but turns out to be quite hard indeed. The classic example here is language translation. The engineers at the time expected the hardest problem was just finding a big enough dictionary. All you had to do was look up the words and plop them down in perfect order. Obviously, that problem is something we’re still working on today. An EasyHard problem is one that seems simple but never… quite… works… the… way… you… want.

I claim that home automation is an EasyHard problem. The engineer in all of us assumes it is going to be simple: walk in a room, turn on the lights. What’s the big deal? Now, I’ll admit, this rule does indeed work most of the time, but here is a series of exceptions where it breaks down:

Problem: I walk into the room and my wife is sleeping; turning on the lights wakes her up.
Solution: More sensors — detect someone on the bed.

Problem: I walk into the room and my dog is sleeping on the bed; my room lights don’t turn on.
Solution: Better sensors — detect human vs pets.

Problem: I walk into the room, my wife is watching TV on the bed. She wants me to hand her a book, but as the room is dark, I can’t see it.
Solution: Read my mind.

Don’t misunderstand my intentions here. I’m no Luddite! I do strongly feel that we are going to eventually get to home automation. My point is that home automation is an EasyHard problem, and we don’t treat it with the respect it deserves. Just because we can automate our homes doesn’t mean we’ll automate them correctly. The real work with home automation isn’t the IoT connectivity; it’s the control system that will make it do the right thing at the right time.

Let’s take a look at my three scenarios above and discuss how they will impact our eventual solutions to home automation.

More Sensors

Almost every scenario today is built on a very fault-intolerant structure. A single sensor controls the lights. A single doorknob alerts the house that I’m coming in. This has the obvious error condition that if that sensor fails, the entire action breaks down. But the second, more likely, case is that it just infers the wrong conclusion. A single motion sensor in my room assumes that I am the only thing that matters; my sleeping wife is a comfort casualty. I can guarantee you that as smart homes roll out, saying “sorry dear, that shouldn’t have happened” is going to wear very thin.

The solution, of course, is to have more sensors that can reason and know how many people are in a room. This isn’t exactly that hard, but it will take a lot more work as you need to build up a model of the house, populate it with proxies — creating, in effect, a simulation of your home. This will surely come, but it will just take a little time for it to become robust and tolerant of our oh-so-human capability to act in unexpected ways.
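As a toy illustration of the difference — the sensor inputs and occupancy logic here are invented — compare a bare “motion means lights” rule with one that consults even a crude model of the room:

```python
# Toy contrast between a single-sensor rule and one that consults a crude
# model of the room. The sensor inputs and occupancy logic are invented.

def naive_rule(motion_detected: bool) -> bool:
    # "Walk in a room, turn on the lights." Fails the sleeping-spouse test.
    return motion_detected


def model_aware_rule(motion_detected: bool, people_in_bed: int, is_person: bool) -> bool:
    # Consult a (very) small simulation of the room before acting.
    if not motion_detected or not is_person:
        return False  # ignore the dog
    if people_in_bed > 0:
        return False  # someone may be sleeping: stay dark, or use dim lighting
    return True


print(naive_rule(True))                                          # True, even if my wife is asleep
print(model_aware_rule(True, people_in_bed=1, is_person=True))   # False: don't wake her
```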

Better Sensors

This, too, should come soon. There are already sensors that can tell the difference between humans and pets; they just aren’t widely used yet. This will feed into the software simulation of my house, knowing where people, pets and things are throughout the space. This is starting to sound a bit like an AI system, modeling my life and making decisions based on what it thinks is needed at the time. Again, not exactly impossible, but tricky stuff that will, over time, get better and better.

Read my mind

But at some point we reach a limit. When do you turn on the lights so I can find the book and when do I just muddle through because I don’t want to bother my wife? This is where the software has to have the “humility” to stop and just ask. I discussed this a bit in my UX grid of IoT post: background swarms of smart devices will do as much of the “easy stuff” as they can but will eventually need me to signal intent so they can cascade a complex set of actions that fulfill my goal.

Take the book example again. I walk into the room; the AI detects my wife on the bed. It could even detect that the TV is on and therefore know she is not sleeping. But because it can’t be sure that full brightness is the right call, it just turns on the low baseboard lighting so I can navigate. So far so good — the automatic system is being helpful but conservative. When I walk up to my wife and she asks for the book, I just have to say “lights” and the system turns on the lights, which could be a complex set of commands turning on five different lights at different intensities.
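One way to picture that cascade — purely a sketch, with the fixture names and brightness levels invented — is a single intent fanning out into a scene of per-fixture commands:

```python
# Sketch of a single utterance ("lights") fanning out into a scene of
# per-fixture commands. Fixture names and brightness levels are invented.

SCENES = {
    "navigate": {"baseboard": 10},                     # conservative default on entry
    "lights": {"baseboard": 0, "ceiling": 60, "reading_left": 80,
               "reading_right": 40, "closet": 30},     # the full "find the book" scene
}


def handle_intent(intent: str) -> None:
    for fixture, level in SCENES.get(intent, {}).items():
        print(f"set {fixture} to {level}%")  # stand-in for the real device command


handle_intent("lights")
```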

At some point, a button, an utterance, a gesture will be needed because human interaction is too subtle to be fully encapsulated by an AI. Humans can’t always do it; what makes us think computers can?

Summary

My point here is to emphatically support the idea of home automation. However, the UX designer in me is far too painfully aware that humans are messy, illogical beasts, and simplistic if/then rules are going to create a backlash against this technology. It isn’t until we take the coordinated control of these IoT devices seriously that we’ll start building more nuanced and error-tolerant systems. They will certainly be simplistic at first and still fail, but at least we’ll be on the right path. We must create systems that expect us to be human, not punish us when we are.

February 18 2014

I, Cyborg

There is an existential unease lying at the root of the Internet of Things — a sense that we may emerge not less than human, certainly, but other than human.


Kelsey Breseman, engineer at Technical Machine.

Well, not to worry. As Kelsey Breseman, engineer at Technical Machine, points out, we don’t need to fret about becoming cyborgs. We’re already cyborgs: biological matrices augmented by wirelessly connected silicon arrays of various configurations. The problem is that we’re pretty clunky as cyborgs go. We rely on screens and mobile devices to extend our powers beyond the biological. That leads to everything from atrophying social skills as face-to-face interactions decline to fatal encounters with garbage trucks as we wander, texting and oblivious, into traffic.

So, if we’re going to be cyborgs, argues Breseman, let’s be competent, sophisticated cyborgs. For one thing, it’s now in our ability to upgrade beyond the screen. For another, being better cyborgs may make us — paradoxically — more human.

“I’m really concerned about how we integrate human beings into the growing web of technology,” says Breseman, who will speak at O’Reilly’s upcoming Solid conference in San Francisco in May. “It’s easy to get caught up in the ‘cool new thing’ mentality, but you can end up with a situation where the point for the technology is the technology, not the human being using it. It becomes closed rather than inclusive — an ‘app developers developing apps for app developers to develop apps’ kind of thing.”

Those concerns have led Breseman and her colleagues at Technical Machine to the development of the Tessel: an open-source Arduino-style microcontroller that runs JavaScript and allows hardware project prototyping. And not, Breseman emphasizes, the mere prototyping of ‘cool new things’ — rather, the prototyping of things that will connect people to the emerging Internet of Things in ways that have nothing to do with screens or smart phones.

“I’m not talking about smart watches or smart clothing,” explains Breseman. “In a way, they’re already passé. The product line hasn’t caught up with the technology. Think about epidermal circuits — you apply them to your skin in the same way you apply a temporary tattoo. They’ve been around for a couple of years. Something like that has so many potential applications — take the Quantified Self movement, for example. Smart micro devices attached right to the skin would make everything now in use for Quantified Self seem antiquated, trivial.”

Breseman looks to a visionary of the past to extrapolate the future: “In the late 1980s, Mark Weiser coined the term ‘ubiquitous computing’ to describe a society where computers were so common, so omnipresent, that people would ultimately stop interfacing with them,” Breseman says. “In other words, computers would be everywhere, embedded in the environment. You wouldn’t rely on a specific device for information. The data would be available to you on an ongoing basis, through a variety of non-intrusive — even invisible — sources.”

Weiser described such an era as “… the age of calm technology, when technology recedes into the background of our lives…” That trope — calm technology — is extremely appealing, says Breseman.

“We could stop interacting with our devices, stop staring at screens, and start looking at each other, start talking to each other again,” she says. “I’d find that tremendously exciting.”

Breseman is concerned that the Internet of Things is seen only as a new and shiny buzz phrase. “We should be looking at it as a way to address our needs as human beings,” she says, “to connect people to the Internet more elegantly, not just as a source for more toys. Yes, we are now dependent on information technology. It has expanded our lives, and we don’t want to give it up. But we’re not applying it very well. We could do it so much better.”

Part of the problem has been the bifurcation of engineering into software and hardware camps, she says. Software engineers type into screens, and hardware engineers design physical things, and there have been few — if any — places that the twain have met. The two disciplines are poised to merge in the Internet of Things — but it won’t be an easy melding, Breseman allows. Each field carves different neural pathways, inculcates different values.

“Because of that, it has been really hard to figure out things that let people engage with the Internet in a physical sense,” Breseman says. “When we were designing Tessel, we discovered how hugely difficult it is to make an interactive Internet device.”

Still, Tessel and devices like it ultimately will become the machine tools of the Internet of Everything: the forges and lathes where the new infrastructure is built. That’s what Breseman hopes, anyway.

“What we would like,” she muses, “is for people to figure out their needs first and then order Tessels rather than the other way around. By that I mean you should first determine why and how connecting to the Internet physically would augment your life, make it better. Then get a Tessel to help you with your prototypes. We’ll see more and better products that way, and it keeps the emphasis where it belongs — on human beings, not the devices.”


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter.

February 12 2014

Searching for the software stack for the physical world

When I flip through a book on networking, one of the first things I look for is the protocol stack diagram. An elegant representation of the protocol stack can help you make sense of where to put things, separate out important mental concepts, and help explain how a technology is organized.

I’m no stranger to the idea of trying to diagram out a protocol space; I had a successful effort back when the second edition of my book on 802.11 was published. I’ve been a party to several awesome conversations recently about how to organize the broad space that’s referred to as the “Internet of Things.”

Diagram: the IoT stack.

Let’s start at the bottom, with the hardware layer, which is labeled Things. These are devices that aren’t typically thought of as computers. Sure, they wind up using a computing ecosystem, but they are not really general-purpose computers. They are embedded devices that interact with the physical world, such as sensors and motors. Primarily, they will either be the eyes and ears of the overall system or the channel for actions on the physical world. They will be designed around low power consumption, and therefore will use a low-throughput communication channel. If they communicate with a network, it will typically be through a radio interface, but the tyranny of limited power consumption means that the network interface will usually be constrained in some way.

Things provide their information or are instructed to act on the world by connecting through a Network Transport layer. Networking allows reports from devices to be received and acted on. In this model, the network transport consists of a set of technologies that move data around, and is a combination of the OSI data link, network, and transport layers. Mapping onto technologies that we use, it would be TCP/IP on Wi-Fi for packet transport, with data carried over a protocol like REST.
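A minimal sketch of that transport layer — assuming a hypothetical REST endpoint, with the URL and payload fields invented for illustration — might be nothing more than a periodic HTTP POST from the device:

```python
# Minimal sketch of a Thing reporting through the network transport layer:
# a periodic HTTP POST of a sensor reading to a hypothetical REST endpoint.
# The URL and payload fields are invented for illustration.

import time

import requests  # third-party: pip install requests

ENDPOINT = "https://example.com/api/readings"  # hypothetical collection service


def report(sensor_id: str, temperature_c: float) -> None:
    payload = {"sensor": sensor_id, "temperature_c": temperature_c, "ts": time.time()}
    # A battery-powered device would batch reports or sleep between them to save power.
    requests.post(ENDPOINT, json=payload, timeout=5)


report("bedroom-01", 21.4)
```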

The Data layer aggregates many devices into an overall collective picture. In the IoT world, users are interacting with the physical world and asking questions like “what is the temperature in this room?” That question isn’t answered by just one device (or if it is, the temperature of the room is likely to fluctuate wildly). Each component of the hardware layer at the bottom of the stack contributes a piece of the overall contextual picture. A light bulb can be on or off, but to determine the desired state, you might mix in overall power consumption of the building, how many people are present, the locations of those people, total demand on the electrical grid, and possibly even the preferences of the people in an area. (I once toured an FAA regional air traffic control center, and the groups of controllers that serve each sub-region of airspace could customize the lighting, ranging from normal lighting to quite dim lighting.)

The value of the base level of devices in this environment depends on how much unique context they can add to the overall picture, enabling software to operate on physical-world concepts like room temperature or whether I am asleep. Looking back on it, the foundation for the data layer was being laid years ago, as is obvious from reading section three of Tim O’Reilly’s “What is Web 2.0?” essay from 2005. In this world, you are either contributing to or working with context, or you are not doing very interesting work.

Sharing context widely requires APIs. If the real-world import of a piece of data is only apparent when it is combined with other sources of data, an application needs to be able to mash up several data sources to create its own unique picture of the world. APIs enable programmers to build context that represents what is important to users and build a cognitively significant aggregation. “Room temperature” may depend on getting data from temperature, humidity, and sunlight sensors, perhaps in several locations in a room.

In addition to reporting up, APIs need to enable control to flow down the stack. If “room temperature” is a complex state that depends on data from several sensors, changing it may require acting on several parts of a climate control system: the heater, the air conditioner, the fan, and maybe even whether the blinds in a room are open or closed.
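A sketch of what such an API might compute and command — with made-up sensor feeds and device commands standing in for real ones — could look like this:

```python
# Sketch of a Data/API layer: aggregate several feeds into one contextual
# value ("room temperature") and push control back down the stack.
# The sensor readings and device commands are made up for illustration.

def room_temperature(readings: list[float]) -> float:
    """Aggregate several sensor readings into one contextual value."""
    return sum(readings) / len(readings)


def adjust_climate(current: float, setpoint: float, deadband: float = 0.5) -> list[str]:
    """Translate a high-level goal into commands for individual devices."""
    if current > setpoint + deadband:
        return ["ac:on", "fan:on", "blinds:close"]
    if current < setpoint - deadband:
        return ["heater:on", "fan:low"]
    return []  # within the deadband: nothing to do


temps = [21.8, 22.6, 23.1]  # three sensors in different corners of the room
current = room_temperature(temps)
print(current, adjust_climate(current, setpoint=21.0))
```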

Designing APIs is hard; in addition to getting the data operations and manipulation right, you need to design for performance and security, and to enable applications to evolve over time. A good place to start is O’Reilly’s API strategy guide.

Finally, we reach the top of the stack when we get to Applications. Early applications allowed you to control the physical world like a remote control. Nice, but by no means the end of the story. One of the smartest technology analysts I know often challenges me by saying, “It’s great that you can report anomalous behavior. If you know something is wrong, do something about it!” The “killer app” in this world is automation: being able to work without explicit instructions from the user, or to continuously fine-tune and optimize the real world based on what has flowed up the stack.

Unlike my previous effort in depicting the world of 802.11, this stack is very much a work in progress. Please leave your thoughts in the comments.

February 10 2014

More 1876 than 1995


Photo: Wikimedia Commons. Corliss engine, showing valvegear (New Catechism of the Steam Engine, 1904).

Philadelphia’s Centennial Exposition of 1876 was America’s first World’s Fair, and was ostensibly held to mark the nation’s 100th birthday. But it heralded the future as much as it celebrated the past, showcasing the country’s strongest suit: technology.

The centerpiece of the Expo was a gigantic Corliss engine, the apotheosis of 40 years of steam technology. Thirty percent more efficient than standard steam engines of the day, it powered virtually every industrial exhibit at the exposition via a maze of belts, pulleys, and shafts. Visitors were stunned that the gigantic apparatus was supervised by a single attendant, who spent much of his time reading newspapers.

“This exposition was attended by 10 million people at a time when travel was slow and difficult, and it changed the world,” observes Jim Stogdill, general manager of Radar at O’Reilly Media, and general manager of O’Reilly’s upcoming Internet-of-Things-related conference, Solid.

“Think of a farm boy from Kansas looking at that Corliss engine, seeing what it could do, thinking of what was possible,” Stogdill continues. “When he left the exposition, he was a different person. He understood what the technology he saw meant to his own work and life.”

The 1876 exposition didn’t mark the beginning of the Industrial Revolution, says Stogdill. Rather, it signaled its fruition, its point of critical mass. It was the nexus where everything — advanced steam technology, mass production, railroads, telegraphy — merged.

“It foreshadowed the near future, when the Industrial Revolution led to the rapid transformation of society, culturally as well as economically. More than 10,000 patents followed the exposition, and it accelerated the global adoption of the ‘American System of Manufacture.’ The world was never the same after that.”

In terms of the Internet of Things, we have reached that same point of critical mass. In fact, the present moment is more similar to 1876 than to more recent digital disruptions, Stogdill argues. “It’s not just the sheer physicality of this stuff,” he says. “It is also the breadth and speed of the change bearing down on us.”

While the Internet changed everything, says Stogdill, “its changes came in waves, with scientists and alpha geeks affected first, followed by the early adopters who clamored to try it. It wasn’t until the Internet was ubiquitous that every Kansas farm boy went online. That 1876 Kansas farm boy may not have foreseen every innovation the Industrial Revolution would bring, but he knew — whether he liked it or not — that his world was changing.”

As the Internet subsumes physical objects, the rate of change is accelerating, observes Stogdill. “Today, stable wireless platforms, standardized software interface components and cheap, widely available sensors have made the connection of virtually every device — from coffee pots to cars — not only possible; they have made it certain.”

“Internet of Things” is now widely used to describe this latest permutation of digital technology; indeed, “overused” may be a more apt description. It teeters on the knife-edge of cliché. “The term is clunky,” Stogdill acknowledges, “but the buzz for the underlying concepts is deserved.”

Stogdill is quick to point out that this “Internet of Everything” goes far beyond the development of new consumer products. Open source sensors and microcontroller libraries already are allowing the easy integration of software interfaces with everything from weather stations to locomotives. Large, complicated systems — water delivery infrastructure, power plants, sewage treatment plants, office buildings — will be made intelligent by these software and sensor packages, allowing real-time control and exquisitely efficient operation. Manufacturing has been made frictionless, development costs are plunging, and new manufacturing-as-a-service frameworks will create new business models and drive factory production costs down and production up.

“When the digital age began accelerating,” Stogdill explains, “Nicholas Negroponte observed that the world was moving from atoms to bits — that is, the high-value economic sectors were transforming from industrial production to aggregating information.”

“I see the Internet of Everything as the next step,” he says. “We won’t be moving back to atoms, but we’ll be combining atoms and bits, merging software and hardware. Data will literally grow physical appendages, and inform industrial production and public services in extremely powerful and efficient ways. Power plants will adjust production according to real-time demand, traffic will smooth out as driverless cars become commonplace. We’ll be able to track air and water pollution to an unprecedented degree. Buildings will not only monitor their environmental conditions for maximum comfort and energy efficiency, they’ll be able to adjust energy consumption so it corresponds to electricity availability from sustainable sources.”

Stogdill believes these converging phenomena have put us on the cusp of a transformation as dramatic as the Industrial Revolution.

“Everyone will be affected by this collision of hardware and software, by the merging of the virtual and real,” he says. “It’s really a watershed moment in technology and culture. We’re at one of those tipping points of history again, where everything shifts to a different reality. That’s what the 1876 exposition was all about. It’s one thing to read about the exposition’s Corliss engine, but it would’ve been a wholly different experience to stand in that exhibit hall and see, feel, and hear its 1,400 horsepower at work, driving thousands of machines. It is that sensory experience that we intend to capture with Solid. When people look back in 150 years, we think they could well say, ‘This is when they got it. This is when they understood.’”


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter.

February 07 2014

Why Solid, why now

A few years ago at OSCON, one of the tutorials demonstrated how to click a virtual light switch in Second Life and have a real desk lamp light up in the room. Looking back, it was rather trivial, but it was striking at the time to see software people taking an interest in the “real world.” And what better metaphor for the collision of virtual and real than a connection between Second Life and the Portland Convention Center?

In December 2012, our Radar team was meeting in Sebastopol and we were talking about trends in robotics, Maker DIY, Internet of Things, wearables, smart grid, industrial Internet, advanced manufacturing, frictionless supply chain, etc. We were trying to figure out where to put our focus among all of these trends when suddenly it was obvious (at least to Mike Loukides, who pointed it out): they are all more alike than different, and we could focus on all of them by looking at the relationships among them. The Solid program was conceived that day.

We called it Solid because most of the people we work with are software people and we wanted to draw attention to the non-virtuality of all this stuff. This isn’t the stuff of some abstract “cyber” domain; these were all trends that lived at the nexus of software and very real physical hardware. A big part of this story is that hardware is getting less hard, and more like software in its malleability, but there remains a sheer physicality to this stuff that makes it different. Negroponte told us to forget atoms and focus on bits, and we did for 20 years. But now, the pendulum is swinging back and atoms matter again — except now they are atoms layered with bits, and that makes this different.

Software and hardware have been essentially separate mediums wielded by the separate tribes of hackers/coders/programmers/software engineers and capital-E Engineers. Solid is about the beginning of something different: the collision of software and hardware into a new combined medium that will be wielded by people (or teams) who understand both — bits and atoms uniting in the same intellectual space. What should we call it? Fluidware? That won’t be it, but I’d like eventually to come up with a term that conveys that a combination of software and less-hard hardware is something distinct.

Naturally, this impacts more than just the stuff that we produce from this new medium. It influences organizational models; the things engineers, designers, and hackers need to know to do their jobs; and the business models you can deliver with bit-enabled collections of atoms.

With Solid, we envision an annual program of engagement that will span various media as well as multiple in-person events. We’re going to do Solid:Local events, which we started in Cambridge, MA, on February 6, and we’re going to anchor the program with the Solid Conference in San Francisco this year in May that is going to be very different from the other conferences we do.

First off, the Solid Conference will be at Ft. Mason, a venue we chose for its beautiful views and because it can host the kinds of industrial-scale things we’d like to demonstrate there. We still have a lot to do to bring this program together, but our guiding principle is to give our audience a visceral feel for this world of combined hardware and software. We’re going for something that feels less like a conference, and more like a miniature World’s Fair Exposition with talks.

I believe that we are on the cusp of something as dramatic as the Industrial Revolution. In that time, a series of industrial expositions drew millions of people to London, Paris, Philadelphia and elsewhere to experience firsthand the industrialization of their world.

Here in my home office outside of Philadelphia, I’m about 15 miles from the location of the 1876 Exposition that we’re using for inspiration. If you entered Machinery Hall during that long summer with its 1,400-horsepower Corliss engine powering belt-driven machinery throughout the vast hall, you couldn’t help but leave with an appreciation for how the world was changing. We won’t be building a Machinery Hall, certainly not at that scale, but we are building a program with as much show as tell. We want to viscerally demonstrate the principles that we see at work here.

We hope you can join us in San Francisco or at one of the many Solid:Locals we plan to put on. Share your stories about interesting intersections of hardware and software with us, too.

February 05 2014

Trope or fact? Technology creates more jobs than it destroys

Editor’s note: We’re trying something new here. I read this back-and-forth exchange and decided we should give it a try. Or, more accurately, since we’re already having plenty of back-and-forth email exchanges like that, we just need to start publishing them. My friend Doug Hill agreed to be a guinea pig and chat with me about a subject that’s on both of our minds and publish it here.

Stogdill: I got this tweet over the holidays while I was reading your book. I mean, I literally got distracted by this tweet while I was reading your book:

It felt like a natural moment of irony that I had to share with you. The author obviously has an optimistic point of view, and his claims seem non-obvious in light of our jobless recovery. I also recently read George Packer’s The Unwinding, and frankly it seems to better reflect the truth on the ground, at least if you get outside of the big five metro areas. But I suspect not a lot of techno optimists are spending time in places that won’t get 4G LTE for another year or two.

I’m not going to ask you what you think of the article because I think I already know the answer. I do have a few things on my mind, though. Is our jobless recovery a new structural reality brought about by more and more pervasive automation? Are Norbert Wiener’s predictions from the late 1940s finally coming true? Or is creative destruction still working, but just taking some time to adjust this time around? And, if one is skeptical of technology, is it like being skeptical of tectonics? You can’t change it, so bolt your house down?

Hill: Your timing is good. The day your email arrived the lead story in the news was the latest federal jobs report, which told us that the jobless “recovery” continues apace. Jobless, that is.

The national conversation about the impact of automation on employment continues apace, too. Thomas Friedman devoted his New York Times column a couple of days ago to The Second Machine Age, the new book by Erik Brynjolfsson and Andrew McAfee. They’re the professors from MIT whose previous book, Race Against the Machine, helped start that national conversation, in part because it demonstrated both an appreciation of automation’s advantages and an awareness that lots of workers could be left behind.

I’ll try to briefly answer your questions, Jim, and then note a couple points that strike me as somewhat incongruous in the current discussion.

Is our jobless economy a new structural reality brought about by more and more pervasive automation?

Yes. Traditional economic theory holds that advances in technology create jobs rather than eliminate them. Even if that maxim were true in the past (and it’s not universally accepted), many economists believe the pace of innovation in automation today is overturning it. This puts automation’s most fervent boosters in the odd position of arguing that technological advance will disrupt pretty much everything except traditional economic theory.

Are Norbert Wiener’s predictions from the late 1940s finally coming true? Or is creative destruction still working, but just taking some time to adjust this time around?

Yes to both. Wiener’s predictions that automation would be used to undermine labor are coming true, and creative destruction is still at work. The problem is that we won’t necessarily like what the destruction creates.

Now, about those incongruous points that bug me:

  1. First, a quibble over semantics. It’s convenient in our discussions about automation to use the word “robots,” but also misleading. Much, if not most, of the jobs displacement we’re seeing now is coming from systems and techniques that are facilitated by computers but less mechanical than the robots we typically envision assembling parts in factories. I don’t doubt that actual robots will be an ever-more-important force in the future, but they’ll be adding momentum to methods that corporations have been using for quite a while now to increase productivity, even as they’re reducing payrolls.
  2. It’s commonly said that the answer to joblessness is education. Our employment problems will be solved by training people to do the sorts of jobs that the economy of the future will require. But wait a minute. If it’s true that the economy of the future will increasingly depend on automation, won’t we simply be educating people to do the sorts of jobs that eliminate more jobs?
  3. Techno optimists argue that our current employment problems are merely manifestations of a transition period on the way to a glorious future. “Let the robots take the jobs,” says Kevin Kelly, “and let them help us dream up new work that matters.”

    Even on his own terms, the future Kelly envisions seems more nightmarish than dreamlike. Everyone agrees automation is going to grow consistently more capable. As it does, Kelly says, robots will take over every job, including whatever new jobs we dream up to replace the previous jobs we lost to robots. If he’s right, we won’t be dreaming up new work that matters because we want to, but because we’ll have no choice. It will indeed be a race against the machines, and machines don’t get tired.

One more thing. You asked if being skeptical of technology is like being skeptical of tectonics. My first thought was to wonder whether anybody is really skeptical of tectonics, but given the polls on global warming, I guess anything is possible — more possible, I think, than a reversal of the robot revolution. So yeah, go ahead and bolt the house down.

Stogdill: Let me think where to start. My problem in this conversation is that I find myself arguing both sides of the question in my head, which makes it hard to present a coherent argument to you.

First, let me just say that I enter this discussion with some natural inclination toward a Schumpeterian point of view. In 1996, I visited a Ford electronics plant in Pennsylvania that was going through its own automation transformation. They had recently equipped the plant with then-new surface-mount soldering robots and redesigned the electronic modules that they produced there to take advantage of the tech. The remaining workers each tended two robots instead of placing parts on the boards themselves.

Except for this one guy. For some reason I’ve long forgotten, one of the boards they manufactured still required a single through-board capacitor, and a worker at that station placed capacitors in holes all day. Every 10 seconds for eight hours, a board would arrive in front of him; he would drop a capacitor’s legs through two little holes, push it over a bit to make sure it was all the way through, and then it was off to the next station to be soldered. It was like watching Lucy in the Chocolate Factory.

I was horrified — but when I talked to him, he was bound and determined to keep that job from being automated. I simply couldn’t understand it. Why wouldn’t he want to upgrade his skills and run one of the more complex machines? Even now, when I’m more sympathetic to his plight, I’m still mystified that he could stand to continue doing that job when something else might have been available.

Yet, these days I find myself losing patience with the reflexive trope “of course technology creates more jobs than it destroys; it always has. What are you, a Luddite?” The Earth will keep revolving around the sun, too; it always has, right up until the sun supernovas, and then it won’t.

Which isn’t to say that the robots are about to supernova, but that arguments that depend on the past perpetuating into the future are not arguments — they’re wishes. (See also, China’s economy will always keep growing at a torrid rate despite over-reliance on investment at the expense of consumption because it always has). And I just can’t really take anyone seriously who makes an argument like that if they can’t explain the mechanisms that will continue to make it true.

So, to my thinking, this boils down to a few key questions. Was that argument even really true in the past, at least the recent past? If it was, is the present enough like the past that we can assume that, with re-training, we’ll find work for the people being displaced by this round of automation? Or, is it possible that something structurally different is happening now? And, even if we still believe in the creative part of creative destruction, what destructive pace can our society absorb and are there policies that we should be enacting to manage and ease the transition?

This article does a nice job of explaining what I think might be different this time with its description of the “cognitive elite.” As automation takes the next layer of jobs at the current bottom, we humans are asked to do more and more complex stuff, higher up the value hierarchy. But what if we can’t? Or, if not enough of us can? What if it’s not a matter of just retraining — what if we’re just not talented enough? The result would surely be a supply/demand mismatch at the high end of the cognitive scale, and we’d expect a dumbbell shape to develop in our income distribution curve. Or, in other words, we’d expect new Stanford grads going to Google to make $100K and everyone else to work at Walmart. And more and more, that seems like it’s happening.

Anyway, right now I’m all question, no answer. Others are suggesting that this jobless recovery has nothing to do with automation. It’s the (lack of) unions, stupid. I really don’t know, but I think we — meaning we technologists and engineers — need to be willing to ask the question “is something different this go-round?” and not just roll out the old history-is-future tropes.

We’re trying to create that conversation at least a bit by holding an Oxford-style debate at our next Strata conference. The statement we’ll be debating is: “Technology creates more jobs than it destroys,” and I’ll be doing my best to moderate in an even-handed way.

By the way, your point that it’s “not just robots” is well taken. I was talking to someone recently who works in the business process automation space, and they’ve begun to refer to those processes as “robots,” too — even though they have no physical manifestation. I was using the term in that broad sense, too.

Hill: In your last email you made two points in passing that I’d like to agree with right off the bat.

One is your comment that, when it comes to predicting what impact automation will have on employment, you find yourself “arguing both sides of the question.” Technology always has and always will cut both ways, so we can be reasonably certain that, whatever happens, both sides of the question are going to come into play. That’s about the only certainty we have, really, which is why I also liked it when you said, “I’m all question, no answer.” That’s true of all of us, whether we admit it or not.

We are obligated, nonetheless, to take what Norbert Wiener called “the imaginative forward glance.” For what it’s worth then, my answer to your question, “Is something different this go-round?” — by which you meant, even if it was once true that technological advancement created more rather than fewer jobs, that may no longer be true, given the pace, scale, and scope of the advances in automation we’re witnessing today — is yes and no.

That is, yes, I do think the scope and scale of technological change we’re seeing today presents us with challenges of a different order of magnitude than what we’ve faced previously. At the same time, I think it’s also true that we can look to the past to gain some sense of where automation might be taking us in the future.

In the articles on this issue you and I have traded back and forth over the past several weeks, I notice that two of the most optimistic, as far as our automated future is concerned, ran in the Washington Post. I want to go on record as denying any suspicion that Jeff Bezos had anything to do with that.

Still, the most recent of those articles, James Bessen’s piece on the lessons to be learned from the experience of America’s first industrial-scale textile factories (“Will robots steal our jobs? The humble loom suggests not”) was so confidently upbeat that I’m sure Bezos would have approved. It may be useful, for that reason, to take a closer look at some of Bessen’s claims.

To hear him tell it, the early mills in Waltham and Lowell, Massachusetts, were 19th-century precursors of the cushy working conditions enjoyed in Silicon Valley today. The mill owners recruited educated, middle-class young women from surrounding farm communities and supplied them with places to live, houses of worship, a lecture hall, a library, a savings bank, and a hospital.

“Lowell marked a bold social experiment,” Bessen says, “for a society where, not so long before, the activity of young, unmarried women had been circumscribed by the Puritan establishment.”

The suggestion that the Lowell mills were somehow responsible for liberating young women from the clutches of Puritanism is questionable — the power of the Puritan church had been dissipating for all sorts of reasons for more than a century before factories appeared on the banks of the Merrimack — but let that go.

It is true that, in the beginning, the mills offered young women from middle-class families an unprecedented opportunity for a taste of freedom before they married and settled down. Because their parents were relatively secure financially, they could afford to leave them temporarily behind without leaving them destitute. That’s a long way from saying that the mills represented some beneficent “social experiment” in which management took a special interest in cultivating the well-being of the women they employed.

Thomas Dublin’s Women at Work: The Transformation of Work and Community in Lowell, Massachusetts, 1826-1860 tells a different story. Women were recruited to staff the mills, Dublin says, because they were an available source of labor (men were working the farms or employed in smaller-scale factories in the cities) and because they could be paid less than men. All supervisory positions in the mills were held by men. Also contrary to Bessen’s contention, the women weren’t hired because they were smart enough to learn specialized skills. Women tended the machines; they didn’t run them. “To the extent that jobs did not require special training, strength or endurance, or expose operatives to the risk of injury,” Dublin says, “women were employed.”

How much time they had to enjoy the amenities supposedly provided by management is another question. According to Dublin, mill workers put in 12 hours a day, six days a week, with only three regular holidays a year. As the number of mills increased, so did the pressure to make laborers more productive. Speedups and stretch-outs were imposed. A speedup meant the pace of the machinery was increased, a stretch-out meant that each employee was required to tend additional pieces of machinery. Periodic cuts in piece wages were another fact of mill life.

Because of their middle-class backgrounds, and because they were accustomed to pre-industrial standards of propriety, the first generation of women felt empowered enough to protest these conditions, to little avail. Management offered few concessions, and many women left. The generation of women who replaced them were less likely to protest. Most had fled the Irish famine and had no middle-class homes to return to.

I go into this in some detail, Jim, because it’s important to acknowledge what automation’s fundamental purpose has always been: to increase management profits. Bold social experiments to benefit workers haven’t figured prominently in the equation.

It’s true that factory jobs have, in the long run, raised the standard of living for millions of workers (the guy you met in the Ford electronics plant comes to mind), but we shouldn’t kid ourselves that they’ve necessarily been pleasant, fulfilling ways to make a living. Nor should we kid ourselves that management won’t use automation to eliminate jobs in the future, if automation offers opportunities to increase profits.

We also need to consider whether basing our economy on the production and sale of ever-higher piles of consumables, however they’re manufactured, is a model the planet can sustain any longer. That’s the essential dilemma we face, I think. We must have jobs, but they have to be directed toward some other purpose.

I realize I haven’t addressed, at least directly, any of the questions posed in your email. Sorry about that — the Bessen article got under my skin.

Stogdill: That Bessen article did get under your skin, didn’t it? Well, anger in the face of ill-considered certainty is reasonable as far as I’m concerned. Unearned certainty strikes me as the disease of our age.

Reading your response, I had a whole swirl of things running through my head. Starting with, “Does human dignity require meaningful employment?” I mean, separate from the economic considerations, what if we’re just wired to be happier when we grow or hunt for our own food? — and will the abstractions necessary to thrive in an automation economy satisfy those needs?

Also, with regard to your comments about how much stuff do we need — does that question even really matter? Is perpetual sustainability on a planet where the Second Law of Thermodynamics holds sway even possible? Anyway, that diversion can wait for another day.

Let me just close this exchange by focusing for just one moment on your point that productivity gains have always been about increasing management profits. Of course they have. I don’t think that has ever been in question. Productivity gains are where every increase in wealth ever has come from (except for that first moment when someone stumbled on something useful bubbling out of the ground), and profit is how we incent investment in productivity. The question is how widely gains will be shared.

Historically, that argument has been about the mechanisms (and politics) to appropriately distribute productivity gains between capital and labor. That was the fundamental argument of the 20th century, and we fought and died over it — and, for maybe 30 years, reached a reasonable answer.

But what if we are automating to a point where there will be no meaningful link between labor and capital? There will still be labor, of course, but it will be doing these abstract “high value” things that have nothing whatsoever to do with the bottom three layers of Maslow’s hierarchy. In a world without labor directly tied to capital and its productivity gains, can we expect the mechanisms of the 20th century to have any impact at all? Can we even imagine a mechanism that flows the value produced by robots to the humans they sidelined? Can unemployed people join a union?

We didn’t answer these questions, but thanks for exploring them with me.

Bluetooth Low Energy: what do we do with you?

“The best minds of my generation are thinking about how to make people click ads,” as Jeff Hammerbacher said. And it’s not just data analysts: it’s creeping into every aspect of technology, including hardware.

One of the more exciting developments of the past year is Bluetooth Low Energy (BLE). Unfortunately, the application that I’ve seen discussed most frequently is user tracking: devices in stores can use the BLE device in your cell phone to tell exactly where you’re standing, what you’re looking at, and target ads, offer you deals, send over salespeople, and so on.

Color me uninterested. I don’t really care about the spyware: if Needless Markup wants to know what I’m looking at, they can send someone out on the floor to look. But I am dismayed by the lack of imagination around what we can do with BLE.

If you look around, you’ll find some interesting hints about what we might be able to do with BLE:

  • Tile is a keychain tag, paired with an app, that uses BLE to locate lost keys. In a conversation, O’Reilly author Matthew Gast suggested that you could extend the concept to develop a collar that would help to locate missing pets. That scenario requires some kind of access to neighbors’ networks, but for a pie-in-the-sky discussion, it’s a good idea (see the sketch after this list).
  • At ORDCamp last weekend, I was in a wide-ranging conversation about smart washing machines, wearable computing, and the like. Someone mentioned that clothes could have tags that would let the washer know when the settings were wrong, or when you had the colors mixed. This is something you could implement with BLE. Water wouldn’t be a problem since you could seal the entire device, including the battery; heat might be an issue. But again, we’re talking imagination, what might be possible. We can solve the engineering problems later.
  • One proposal for our Solid conference involved tagging garbage cans with BLE devices, to help automate garbage collection. I want to see the demo, particularly if it includes garbage trucks in a conference hall.
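
None of these ideas is exotic on the software side. Here is a minimal sketch of the scanning half of such an application, using Python and the open source bleak library (one of several BLE scanning options); the “TAG-” name prefix is an invented convention for illustration, not anything Tile or other vendors actually use.

```python
# Scan for nearby BLE tags (keys, pet collars, laundry tags) and print
# the ones that match a hypothetical "TAG-" naming convention.
import asyncio
from bleak import BleakScanner

async def find_tags(scan_seconds: float = 5.0):
    devices = await BleakScanner.discover(timeout=scan_seconds)
    for device in devices:
        name = device.name or "<unnamed>"
        if name.startswith("TAG-"):  # invented prefix, for illustration only
            print(f"Found {name} at {device.address}")

if __name__ == "__main__":
    asyncio.run(find_tags())
```

The hard parts are the ones the list hints at: battery life, range, and getting enough receivers — phones, neighbors’ networks, washing machines — within earshot of the tags.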

Why the dearth of imagination in the press and among most of the people talking about the next generation of Bluetooth apps? Why can’t we get beyond different flavors of spyware? And when BLE is widely deployed, will we see devices that at least try to improve our quality of life, or will the best minds of our generation still be pitching ads?

Come to Solid to be part of that discussion. I’m sure our future will be full of spyware. But we can at least brainstorm other more interesting, more useful, more fun applications of low-power wireless computing.

February 04 2014

The Industrial Internet of Things


Photo: Kipp Bradford. This industrial burner from Weishaupt churns through 40 million BTUs per hour of fuel.

A few days ago, a company called Echelon caused a stir when it released a new product called IzoT. You may never have heard of Echelon; for most of us, they are merely a part of the invisible glue that connects modern life. But more than 100 million products — from street lights to gas pumps to HVAC systems — use Echelon technology for connectivity. So, for many electrical engineers, Echelon’s products are a big deal. Thus, when Echelon began touting IzoT as the future of the Industrial Internet of Things (IIoT), it was bound to get some attention.

Admittedly, the Internet of Things (IoT) is all the buzz right now. Echelon, like everyone else, is trying to capture some of that mindshare for their products. In this case, the product is a proprietary system of chips, protocols, and interfaces for enabling the IoT on industrial devices. But what struck me and my colleagues was how really outdated this approach seems, and how far it misses the point of the emerging IoT.

Although there are many different ways to describe the IoT, it is essentially a network of devices in which every device (see the sketch after this list):

  1. Have local intelligence
  2. Have a shared API so they can speak with each other in a useful way, even if they speak multiple protocols
  3. Push and pull status and command information from the networked world
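
To make the second and third properties concrete, here is a toy sketch — not any vendor’s actual API — of a device that keeps its own state, exposes it in a shared JSON format, and accepts commands in the same format. All of the field names are invented.

```python
# A toy illustration of the properties above: a device with local logic,
# a small shared JSON "API" for status, and a command path in the same
# format. Field names are invented for illustration.
import json
import time

class Thermostat:
    def __init__(self, device_id: str):
        self.device_id = device_id
        self.setpoint_c = 20.0
        self.measured_c = 18.5

    def status(self) -> str:
        # Push: serialize current state in a shared, protocol-agnostic format.
        return json.dumps({
            "id": self.device_id,
            "kind": "thermostat",
            "ts": int(time.time()),
            "setpoint_c": self.setpoint_c,
            "measured_c": self.measured_c,
        })

    def command(self, message: str) -> None:
        # Pull: accept a command expressed in the same shared format.
        cmd = json.loads(message)
        if cmd.get("action") == "set_setpoint":
            self.setpoint_c = float(cmd["value"])

t = Thermostat("boiler-room-1")
t.command(json.dumps({"action": "set_setpoint", "value": 21.5}))
print(t.status())
```

The point is not the particular fields but that status and commands travel in a format any other device or service can parse, regardless of the transport underneath.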

In the industrial context, rolling out a better networking chip is just a minor improvement on an unchanged 1980s practice that requires complex installations, including running miles of wire inside walls. This IzoT would have been a breakthrough product back when I got my first Sony Walkman.

This isn’t a problem confined to Echelon. It’s a problem shared by many industries. I just spent several days wandering the floor of the International Air Conditioning, Heating, and Refrigeration (AHR) trade show, a show about as far from CES as you can get: truck-sized boilers and cooling towers that look unchanged from their 19th-century origins dotted the floor. Across much of the developed world, HVAC claims the lion’s share of building energy use. (That alone is reason enough to care about it.) It’s also a treasure trove of data (think Nest), and a great place for all the obvious, sensible IoT applications like monitoring, control, and network intelligence. But after looking around at IoT products at AHR, it was clear that they came from Bizarro World.[1]


Photo: Kipp Bradford. Copeland alone has sold more than 100 million compressors. That’s a lot of “things” to get on the Internet.

I spoke with an engineer showing off one of the HVAC industry’s leading low-power wireless networking technologies for data centers. He told me his company’s new wireless sensor network system runs at 2.6 GHz, not 2.4 GHz (though the data sheet doesn’t confirm or deny that). Looking at the products that won industry innovation awards, I was especially depressed. Take, for example, this variable-speed compressor technology. Put a variable-speed motor into an air-conditioning compressor and you get a 25–35% efficiency boost. That’s a lot of energy saved! And it looks just like the variable-speed technology that has been around for decades — just not in the HVAC world.

Now here’s where things get interesting: variable-speed control demands a smart processor controlling the compressor motor, plus the intelligent sensors to tell the motor when to vary its speed. With all that intelligence, I should be able to connect it to my network. So, when will my air conditioner talk to my Nest so I can optimize my energy consumption? Never. Or at least not until industry standards and government regulations overlap just right and force them to talk to each other, just as recent regulatory conditions forced 40-year-old variable-motor control technology into 100-year-old compressor technology.

Of course, like any inquisitive engineer, I had to ask the manufacturer of my high-efficiency boiler if it was possible to hook that up to my Nest to analyze performance. After he said, “Sure, try hooking up the thermostat wire,” and made fun of me for a few minutes, the engineer said, “Yeah, if you can build your own modbus-to-wifi bridge, you can access our modbus interface and get your boiler online. But do you really want some Russian hacker controlling your house heat?” It would be so easy for my Nest thermostat to have a useful conversation with my boiler beyond “on/off,” but it doesn’t.
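
For the curious, the bridge the engineer alludes to is not much code. Below is a minimal sketch assuming the pymodbus library (3.x) and a Modbus TCP bridge already on the local network; the IP address, register addresses, and scaling are all invented — a real boiler’s register map would have to come from the manufacturer’s documentation.

```python
# Poll a boiler's Modbus TCP interface from the LAN via a hypothetical
# Modbus/Wi-Fi bridge. Assumes pymodbus 3.x; the register map is invented.
from pymodbus.client import ModbusTcpClient

BRIDGE_HOST = "192.168.1.50"   # hypothetical address of the Modbus/Wi-Fi bridge

client = ModbusTcpClient(BRIDGE_HOST, port=502)
if client.connect():
    # Invented map: register 0 = supply temp (tenths of a degree C),
    # register 1 = burner firing rate (percent).
    result = client.read_holding_registers(address=0, count=2)
    if not result.isError():
        supply_temp_c = result.registers[0] / 10.0
        firing_rate_pct = result.registers[1]
        print(f"supply {supply_temp_c:.1f} C, firing at {firing_rate_pct}%")
    client.close()
```

Which is exactly the engineer’s point: the obstacle isn’t the protocol, it’s whether the manufacturer documents, supports, and secures that conversation.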

Don’t get me wrong. I really am sympathetic to engineering within the constraints of industrial (versus consumer) environments, having spent a period of my life designing toys and another period making medical and industrial products. I’ve had to use my engineering skills to make things as dissimilar as Elmo and dental implants. But constraints are no excuse for trying to patch up, or hide behind, outdated technology.

The electrical engineers designing IoT devices for consumers have created exciting and transformative technologies like Z-Wave, ZigBee, BLE, Wi-Fi, and more, giving consumer devices robust and nearly transparent connectivity with increasingly easy installation. Engineers in the industrial world, meanwhile, seem to be stuck making small technological tweaks that might enhance the safety, reliability, robustness, and NSA-proof security of device networks. This deepens an unfortunate bifurcation of the Internet of Things. It also represents a huge opportunity for those who refuse to accept the notion that the IoT is either consumer or industrial, but never both.

For HVAC, innovation in 2014 means solving problems from 1983 with 1984 technology because the government told them so. The general attitude can be summed up as: “We have all the market share and high barriers to entry, so we don’t care about your problems.” Left alone, this industry (like so many others) will keep building walls that prevent us from having real control over our devices and our data. That directly contradicts a key goal of the IoT: connecting our smart devices together.

And that’s where the opportunity lies. There is significant value in HVAC and similar industries for any company that can break down the cultural and technical barriers between the Internet of Things and the Industrial Internet of Things. Companies that recognize the new business models created by well-designed, smart, interconnected devices will be handsomely rewarded. When my Nest can talk to my boiler, air conditioner, and Hue lights, all while analyzing performance versus weather data, third parties could sell comfort contracts, efficiency contracts, or grid stabilization contracts.

As a matter of fact, we are already seeing a $16 billion market for grid stabilization services opened up by smart, connected heating devices. It’s easy to envision a future where the electric company pays me during peak load times because my variable-speed air conditioner slows down, my Hue lights dim imperceptibly, and my clothes dryer pauses, all reducing my grid load — or my heater charges up a bank of thermal storage bricks in advance of a cold front before I return home from work. Perhaps I can finally quantify the energy savings of the efficiency improvements that I make.
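
The control logic behind that vision is simple enough to sketch. Here is a toy demand-response rule, with invented devices, thresholds, and shed amounts — the hard parts are the market structures and the interoperability, not the code.

```python
# A toy sketch of the demand-response idea: when the utility signals a
# peak, shed a little load from each connected device. The devices,
# threshold, and shed amounts are all invented for illustration.
PEAK_PRICE_PER_KWH = 0.30   # hypothetical threshold

def respond_to_grid(price_per_kwh: float) -> dict:
    """Return per-device adjustments for the current grid price."""
    if price_per_kwh < PEAK_PRICE_PER_KWH:
        return {"ac_speed_pct": 100, "light_level_pct": 100, "dryer": "run"}
    # Peak pricing: slow the variable-speed compressor, dim lights
    # imperceptibly, and pause the dryer until the peak passes.
    return {"ac_speed_pct": 70, "light_level_pct": 90, "dryer": "pause"}

print(respond_to_grid(0.12))   # off-peak: everything runs normally
print(respond_to_grid(0.42))   # peak: shed load, earn the credit
```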

My Sony Walkman already went the way of the dodo, but there’s still a chance to blow open closed industries like HVAC and bridge the IoT and the IIoT before my Beats go out of style.


[1] I’m not sorry for the Super Friends reference.

January 31 2014

Drone on

Jeff Bezos’ recent demonstration of a drone aircraft simulating delivery of an Amazon parcel was more stunt than technological breakthrough. We aren’t there yet. Yes, such things may well come to pass, but there are obstacles aplenty to overcome — not so much engineering snags, but cultural and regulatory issues.

The first widely publicized application of modern drone aircraft — dropping Hellfire missiles on suspected terrorists — greatly skewed perceptions of the technology. On the one hand, the sophistication of such unmanned systems generated admiration from technophiles (and also average citizens who saw them as valuable adjuncts in the war against terrorism). On the other, the significant civilian casualties that were collateral to some strikes have engendered outrage. Further, the fear that drones could be used for domestic spying has ratcheted up our paranoia, particularly in the wake of Edward Snowden’s revelations of National Security Agency overreach.

It’s as though drones have been painted with a giant red “D,” à la The Scarlet Letter. Manufacturers of the aircraft, not surprisingly, are desperate to rehabilitate the technology’s image, and properly so. Drones have too many exciting — even essential — civilian applications to restrict their use to war.

That’s why drone proponents don’t want us to call them “drones.” They’d prefer we use the periphrastic term, “Unmanned Aerial Vehicle,” or at least the acronym — UAV. Or depending on whom you talk to, Unmanned Vehicle System (UVS).

Will either catch on? Probably not. Think back: do we call it Star Wars or the Strategic Defense Initiative? Obamacare, or the Affordable Care Act? Popular culture has a way of reducing bombastic rubrics to pithy, evocative phrases. Chances are pretty good “drones” it is, and “drones” it’ll stay.

And that’s not so bad, says Brendan Schulman, an attorney with the New York office of Kramer Levin Naftalis & Frankel.

“People need to use terminology they understand,” says Schulman, who serves as the firm’s e-discovery counsel and specializes in commercial unmanned aviation law. “And they understand ‘drones.’ I know that causes some discomfort to manufacturers and people who use drones for commercial purposes, but I have no trouble with the term. I think we can live with it.”

Michael Toscano, the president and CEO of the Association for Unmanned Vehicle Systems International, feels differently: not surprisingly, he favors the acronym “UVS” over “drone.”

“The term ‘drones’ evokes a hostile, military device,” says Toscano. “That isn’t what the broad applications of this technology are about. Commercial unmanned aircraft have a wide range of uses, from monitoring pipelines and power lines, to wildlife censuses, to agriculture. In Japan, they’ve been used for years for pesticide applications on crops. They’re far superior to manned aircraft for that — the Japanese government even partnered with Yamaha to develop a craft specifically designed for the job.”

Nomenclature aside, drones are stuck in a time warp here in the United States. The Federal Aviation Administration (FAA) prohibits their general use pending development of final regulations; such rules have been in the works for years but have yet to be released. Still, the FAA does allow some variances on the ban: government agencies and universities, after thrashing through a maze of red tape, can obtain waivers for specific tasks. And for the most part, field researchers who have hands-on-joystick experience with the aircraft are wildly enthusiastic about them.

“They give us superb data,” says Michael Hutt, the Unmanned Aircraft Systems Manager for the U.S. Geological Survey (USGS). USGS is currently using drones — specifically the fixed-wing RQ-11A Raven and the hovering RQ-16A T-Hawk — for everything from surveying abandoned mine sites to monitoring coal seam fires, estimating waterfowl populations, and evaluating pygmy rabbit habitat.

“Landsat [satellites] have been the workhorse for most of our projects,” continues Hutt, “and they’re pretty good as far as they go, but they have definite limitations. You’re able to take images only when a satellite passes overhead, obviously, and the weather limits what you can see. Also, your resolution is only 30 meters. With the Raven or the T-Hawk, we can fly anytime we want. And because we fly low and slow, our data is collected in resolutions of centimeters, not meters. It’s just a level of detail we’ve never had before.”

And highly detailed data, of course, translates as highly accurate and reliable data. Hutt offers an example: a project to map fossil beds at White Sands National Monument.

“The problem is that these beds aren’t static,” says Hutt. “They tend to erode, or they can get covered up with blowing sand. It has been difficult to tell just what we have, and where the fossils are precisely. But with our T-Hawks, we’re able to get down to 2.5-centimeter resolution. We know exactly where the fossils are — it doesn’t matter if a sand dune forms on top of them.”

Even with current FAA restrictions, Hutt notes, universities are crowding into the drone zone.

“Not long ago, maybe five or six universities were using them, or had applied for permits,” Hutt says. “Now, there’s close to 200. The gathering of high-quality data is becoming easy and cheap. That’s because of aircraft advances, of course, and also because the payload packages — the sensors and cameras — are dropping dramatically in price. A few years ago, you needed a $500,000 mapping camera to obtain the information we’re now getting with an off-the-shelf $1,000 camera.”

Unmanned aircraft may be easier to fly than F-22 fighter jets, but they impose their own challenges, observes Jeff Sloan, a top drone jock for USGS.

“Dealing with wind is the major problem,” says Sloan. “The aircraft we’re using are really quite small, and they get pushed around easily. Also, though they can get into tighter spots than any manned aircraft, steep terrain can get you pretty tense. You’re flying very low, and there’s not much room for error.”

If and when the FAA gives the green light for broad commercial drone applications, a large cohort of potential pilots with the requisite skill sets will be ready and waiting.

“We’ve found that the best pilots are those who’ve had a lot of video game experience,” laughs Sloan, “so it’s good to know my kids have a future.”

But back to the FAA: a primary reason for the rules delay is public worry over safety and privacy. Toscano says safety concerns are mostly just that — foreboding more than real risk.

“It’s going to take a while before you see general risk tolerance, but it’ll come,” Toscano said. “The Internet started with real apprehension, and now, of course, there’s general acceptance. There are risks with the Internet, but they’re widely understood now, and people realize there’s a trade-off for the benefits.”

System vulnerability is tops among safety concerns, acknowledges Toscano. Some folks worry that a drone’s wireless links could be severed or jammed, and that the craft would start noodling around independently, with potentially dire results.

“That probably could be addressed by programming the drone with a pre-determined landing spot,” says Toscano. “It could default to a safe zone if something went wrong.”

And what about privacy? Drones, says Toscano, don’t pose a greater threat to privacy than any other extant technology with potential eavesdropping applications.

“The law is the critical factor here,” he insists. “The Fourth Amendment supports the expectation of privacy. If that’s violated by any means, it’s a crime, regardless of the technology responsible.”

More to the point, says Toscano, we have to give unmanned aircraft a little time and space to develop: we’re still at Drones 1.0.

“With any new technology, it’s the same story: crawl, walk, run,” Toscano says. “What Bezos did was show what UVS technology would look like at a full run — but it’s important to remember, we’re really still crawling.”

To get drones up on their hind legs and sprinting — or soaring — they’re going to need a helping hand from the FAA. And things aren’t going swimmingly there, says attorney Schulman.

“They recently issued a road map on the probable form of the new rules,” Schulman says, “and frankly, it’s discouraging. It suggests the regulations are going to be pretty burdensome, particularly for the smaller UAVs, which are the kind best suited for civilian uses. If that’s how things work out, it will have a major chilling effect on the industry.”

January 29 2014

Four short links: 29 January 2014

  1. Bounce Explorer — a throwable sensor (video, CO2, etc.) for first responders.
  2. Sintering Patent Expires Today — a key patent expires, though others remain in the field. Sintering is a process in which the printer fuses powder with a laser, which produces smooth surfaces and works for ceramics and other materials beyond plastic. The hope is that sintering printers will see the same massive growth that FDM (the current mainstream technology) printers saw after the FDM patent expired five years ago.
  3. Internet is the Greatest Legal Facilitator of Inequality in Human History (The Atlantic) — hyperbole aside, this piece does a good job of outlining “why they hate us” and what the systemic challenges are.
  4. First Carbon Fiber 3D Printer Announced — $5000 price tag. Nice!

January 27 2014

The new bot on the block

Fukushima changed robotics. More precisely, it changed the way the Japanese view robotics. And given the historic preeminence of the Japanese in robotic technology, that shift is resonating through the entire sector.

Before the catastrophic earthquake and tsunami of 2011, the Japanese were focused on “companion” robots, says Rodney Brooks, a former Panasonic Professor of Robotics at MIT, co-founder and former CTO of iRobot, and the founder, chairman, and CTO of Rethink Robotics. The goal, says Brooks, was making robots that were analogues of human beings — constructs that could engage with people on a meaningful, emotional level. Cuteness was emphasized: a cybernetic, if much smarter, equivalent of Hello Kitty seemed to be the paradigm.

But the multiple core meltdowns at the Fukushima Daiichi nuclear complex following the 2011 tsunami changed that focus abruptly.

“Fukushima was a wake-up call for them,” says Brooks. “They needed robots that could do real work in highly radioactive environments, and instead they had robots that were focused on singing and dancing. I was with iRobot then, and they asked us for some help. They realized they needed to make a shift, and now they’re focusing on more pragmatic designs.”

Pragmatism was always the guiding principle for Brooks and his companies, and is currently manifest in Baxter, Rethink’s flagship product. Baxter is a breakthrough production robot for a number of reasons. Equipped with two articulated arms, it can perform a multitude of tasks. It requires no application code to start up, and no expensive software to function. No specialists are required to program it; workers with minimal technical background can “teach” the robot right on the production line through a graphical user interface and arm manipulation. Also, Baxter requires no cage — human laborers can work safely alongside it on the assembly line.

Moreover, it is cheap: about $25,000 per unit. It is thus the robotic equivalent of the Model T, and like the Model T, Baxter and its subsequent iterations will impose sweeping changes in the way people live and work.

“We’re at the point with production robots where we were with mobile robots in the late 1980s and early 1990s,” says Brooks. “The advances are accelerating dramatically.”

What’s the biggest selling point for this new breed of robot? Brooks sums it up in a single word: dignity.

“The era of cheap labor for factory line work is coming to a close, and that’s a good thing,” he says. “It’s grueling, and it can be dangerous. It strips people of their sense of worth. China is moving beyond the human factory line — as people there become more prosperous and educated, they aspire to more meaningful work. Robots like Baxter will take up the slack out of necessity.”

And not just for the assemblage of widgets and gizmos. Baxter-like robots will become essential in the health sector, opines Brooks — particularly in elder care. As the Baby Boom piglet continues its course through the demographic python, the need for attendants is outstripping supply. No wonder: the work is low-paid and demanding. Robots can fill this breach, says Brooks, doing everything from preparing and delivering meals to shuttling laundry, changing bedpans and mopping floors.

“Again, the basic issue is dignity,” Brooks said. “Robots can free people from the more menial and onerous aspects of elder care, and they can deliver an extremely high level of service, providing better quality of life for seniors.”

Ultimately, robots could be more app than hardware: the sexy operating system on Joaquin Phoenix’s mobile device in the recent film “Her” may not be far off the mark. Basically, you’ll carry a “robot app” on your smartphone. The phone can be docked to a compatible mechanism — say, a lawn mower, or car, or humanoid mannequin — resulting in an autonomous device ready to trim your greensward, chauffeur you to the opera, or mix your Mojitos.

YDreams Robotics, a company co-founded by Brooks protégé Artur Arsenio, is actively pursuing this line of research.

“It’s just a very efficient way of marketing robots to mass consumers,” says Arsenio. “Smartphones basically have everything you need, including cameras and sensors, to turn mere things into robots.”

YDreams Robotics has its first product coming out in April: a lamp. It’s a very fine if utterly unaware desk lamp on its own, says Arsenio, but when you connect it to a smartphone loaded with the requisite app, it can do everything from intelligently adjusting lighting to gauging your emotional state.

“It uses its sensors to interface socially,” Arsenio says. “It can determine how you feel by your facial expressions and voice. In a video conference, it can tell you how other participants are feeling. Or if it senses you’re sad, it may Facebook your girlfriend that you need cheering up.”

Yikes. That may be a bit more interaction than you want from a desk lamp, but get used to it. Robots could intrude in ways that may seem a little off-putting at first — but that’s a marker of any new technology. Moreover, says Paul Saffo, a consulting professor at Stanford’s School of Engineering and a technology forecaster of repute, the highest use of robots won’t be doing old things better. It will be doing new things, things that haven’t been done before, things that weren’t possible before the development of key technology.

“Whenever we have new tech, we invariably try to use it to do old things in a new way — like paving cow paths,” says Saffo. “But the sooner we get over that — the sooner we look beyond the cow paths — the better off we’ll be. Right now, a lot of the thinking is, ‘Let’s have robots drive our cars, and look like people, and be physical objects.’”

But the most important robots working today don’t have physical embodiments, says Saffo — think of them as ether-bots, if you will. Your credit application? It’s a disembodied robot that gets first crack at that. And the same goes for your resume when you apply for a job.

In short, robots already are embedded in our lives in ways we don’t think of as “robotic.” This trend will only accelerate. At a certain point, things may start feeling a little — well Singularity-ish. Not to worry — it’s highly unlikely Skynet will rain nuclear missiles down on us anytime soon. But the melding of robotic technology with dumb things nevertheless presents some profound challenges — mainly because robots and humans react on disparate time scales.

“The real questions now are authority and accountability,” says Saffo. “In other words, we have to figure out how to balance the autonomy systems need to function with the control we need to ensure safety.”

Saffo cites modern passenger planes like the Airbus A330 as an example.

“Essentially they’re flying robots,” he says. “And they fly beautifully, conserving fuel to the optimal degree and so forth. But the design limits are so tight — if they go too fast, they can fall apart; if they go too slow, they stall. And when something goes wrong, the pilot has perhaps 50 kilometers to respond. At typical speeds, that doesn’t add up to much reaction time.”

Saffo noted that the crash of Air France Flight 447 in the mid-Atlantic in 2009 involved an Airbus A330. Investigations revealed the likely cause was turbulence complicated by the icing up of the plane’s speed sensors. This caused the autopilot to disengage, and the plane began to roll. The pilots had insufficient time to compensate, and the aircraft slammed into the water at 107 knots.

“The pilot figured out what was wrong — but it was 20 seconds too late,” says Saffo. “To me, it shows we need to devote real effort to defining boundary parameters on autonomous systems. We have to communicate with our robots better. Ideally, we want a human being constantly monitoring the system, so he or she can intervene when necessary. And we need to establish parameters that make intervention even possible.”

Rod Brooks will be speaking at the upcoming Solid Conference in May. If you are interested in robotics and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter.

January 24 2014

The lingering seduction of the page

In an earlier post in this series, I examined the articulatory relationship between information architecture and user interface design, and argued that the tools that have emerged for constructing information architectures on the web will only get us so far when it comes to expressing information systems across diverse digital touchpoints. Here, I want to look more closely at these traditional web IA tools in order to tease out two things: (1) ways we might rely on these tools moving forward, and (2) ways we’ll need to expand our approach to IA as we design for the Internet of Things.

First stop: the library

The seminal text for Information Architecture as it is practiced in the design of online information environments is Peter Morville’s and Louis Rosenfeld’s Information Architecture for the World Wide Web, affectionately known as “The Polar Bear Book.”

First published in 1998, The Polar Bear Book gave a name and a clear, effective methodology to a set of practices many designers and developers working on the web had already begun to encounter. Morville and Rosenfeld are both trained as professional librarians and were able to draw on this time-tested field in order to sort through many of the new information challenges coming out of the rapidly expanding web.

If we look at IA as two faces of the same coin, The Polar Bear Book focuses on the largely top-down “Internet Librarian” side of information design. The other side of the coin approaches the problems posed by data from the bottom up. In Everything is Miscellaneous: The Power of the New Digital Disorder, David Weinberger argues that the fundamental problem of the “second order” (think “card catalogue”) organization typical of library sciences-informed approaches is that they fail to recognize the key differentiator of digital information: that it can exist in multiple locations at once, without any single location being the “home” position. Weinberger argues that in the “third order” of digital information practices, “understanding is metaknowledge.” For Weinberger, “we understand something when we see how the pieces fit together.”

Successful approaches to organizing electronic data generally make liberal use of both top-down and bottom-up design tactics. Primary navigation (driven by top-down thinking) gives us a birds-eye view of the major categories on a website, allowing us to quickly focus on content related to politics, business, entertainment, technology, etc. The “You May Also Like” and “Related Stories” links come from work in the metadata-driven bottom-up space.
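
As a concrete, if bare-bones, illustration of that bottom-up, metadata-driven side, here is a sketch that ranks “related stories” by tag overlap (Jaccard similarity); the articles and tags are invented for illustration.

```python
# Rank related stories by how much their tag sets overlap.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

articles = {
    "Senate passes budget":    {"politics", "budget", "congress"},
    "Markets rally on budget": {"business", "markets", "budget"},
    "New phone launches":      {"technology", "gadgets"},
}

def related_to(title: str, n: int = 2):
    target = articles[title]
    others = ((t, jaccard(target, tags)) for t, tags in articles.items() if t != title)
    return sorted(others, key=lambda pair: pair[1], reverse=True)[:n]

print(related_to("Senate passes budget"))
```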

On the web, this textually mediated blend of top-down and bottom-up is usually pretty successful. This is no surprise: the web is, after all, primarily a textual medium. At its core, HTML is a language for marking up text-based documents. It makes them interpretable by machines (browsers) so they can be served up for human consumption. We’ve accommodated images and sounds in this information ecology by marking them up with tags (either by professional indexers or “folksonomically,” by whoever cares to pitch in).

There’s an important point here that often goes without saying: the IA we’ve inherited from the web is textual — it is based on the perception of the world mediated through the technology of writing. Herein lies the limitation of the IA we know from the web as we begin to design for the Internet of Things.

Reading brains

We don’t often think of writing as “technology,” but inasmuch as technology constitutes the explicit modification of techniques and practices in order to solve a problem, writing definitely fits the bill. Language centers can be pinpointed in the brain — these are areas programmed into our genes that allow us to generate spoken language — but in order to read and write, our brains must create new connections not accounted for in our genetic makeup.

In Proust and the Squid, cognitive neuroscientist Maryanne Wolf describes the physical, neurological difference between a reading and writing brain and a pre-literate linguistic brain. Wolf writes that, with the invention of reading, “we rearranged the very organization of our brain.” Whereas we learn to speak by being immersed in language, learning to read and write is a painstaking process of instruction, practice, and feedback. Though the two acts are linked by a common practice of language, writing involves a different cognitive process than speaking. It is one that relies on the technology of the written word. This technology is not transmitted through our genes; it is transmitted through our culture.

It is important to understand that writing is not simply a translation of speech. This distinction matters because it has profound consequences. Wolf writes that “the new circuits and pathways that the brain fashions in order to read become the foundation for being able to think in different, innovative ways.” As the ability to read becomes widespread, this new capacity for thinking differently, too, becomes widespread.

Though writing constitutes a major leap past speech in terms of cognitive process, it shares one very important common trait with spoken language: linearity. Writing, like speech, follows a syntagmatic structure in which meaning is constituted by the flow of elements in order — and in which alternate orders often convey alternate meanings.

When it comes to the design of information environments, this linearity is generally a foregone conclusion, a feature of the cognitive landscape which “goes without saying” and is therefore never called into question. Indeed, when we’re dealing primarily with text or text-based media, there is no need to call it into question.

In the case of embodied experience in physical space, however, we natively bring to bear a perceptual apparatus which goes well beyond the linear confines of written and spoken language. When we evaluate an event in our physical environment — a room, a person, a meaningful glance — we do so with a system of perception orders of magnitude more sophisticated than linear narrative. JJ Gibson describes this as the perceptual awareness resulting from a “flowing array of stimulation.” When we layer on top of that the non-linear nature of dynamic systems, it quickly becomes apparent that despite the immense gains in cognition brought about by the technology of writing, these advances still only partially equip us to adequately navigate immersive, physical connected environments.

The trouble with systems (and why they’re worth it)


Photo: Andy Fitzgerald, of content from Thinking in Systems: A Primer, by Donella Meadows.

I have written elsewhere in more detail about challenges posed to linguistic thinkers by systems. To put all of that in a nutshell, complex systems baffle us because we have a limited capacity to track system-influencing inputs and outputs and system-changing flows. As systems thinking pioneer Donella Meadows characterizes them in her book Thinking in Systems: A Primer, self-organizing, nonlinear, feedback systems are “inherently unpredictable” and “understandable only in the most general way.”

According to Meadows, we learn to navigate systems by constructing models that approximate a simplified representation of the system’s operation and allow us to navigate it with more or less success. As more and more of our world — our information, our social networks, our devices, and our interactions with all of these — becomes connected, our systems become increasingly difficult (and undesirable) to compartmentalize. They also become less intrinsically reliant on linear textual mediation: our “smart” devices don’t need to translate their messages to each other into English (or French or Japanese) in order to interact.

This is both the great challenge and the great potential of the Internet of Things. We’re beginning to interact with our built information environments not only in a classically signified, textual way, but also in a physical-being-operating-in-the-world kind of way. The text remains — and the need to interact with that textual facet with the tools we’ve honed on the web (i.e. traditional IA) remains. But as the information environments we’re tasked with representing become less textual and more embodied, the tools we use to represent them must likewise evolve beyond our current text-based solutions.

Fumbling toward system literacy

In order to rise to meet this new context, we’re going to need as many semiotic paths as we can find — or create. And in order to do that, we will have to pay close attention to the cognitive support structures that normally “go without saying” in our conceptual models.

This will be hard work. The payoff, however, is potentially revolutionary. The threshold at which we find ourselves is not merely another incremental step in technological advancement. The immersion in dynamic systems that the connected environment foreshadows holds the potential to re-shape the way we think — the way our brains are “wired” — much as reading and writing did. Though mediated by human-made, culturally transmitted technology (e.g. moveable type, or, in this case, Internet protocols), these changes hold the power to affect our core cognitive process, our very capacity to think.

What this kind of “system literacy” might look like is as puzzling to me now as reading and writing must have appeared to pre-literate societies. The potential of being able to grasp how our world is connected in its entirety — people, social systems, economies, trade, climate, mobility, marginalization — is both mesmerizing and terrifying. Mesmerizing because it seems almost magical; terrifying because it hints at how unsophisticated and parochial our current perspective must look from such a vantage point.

As information architects and interface designers, all of this means that we’re going to have to be nimble and creative in the way we approach design for these environments. We’re going to have to cast out beyond the tools and techniques we’re most comfortable with to find workable solutions to new problems of complexity. We aren’t the only ones working on this, but our role is an important one: engineers and users alike look to us to frame the rhetoric and usability of immersive digital spaces. We’re at a major transition in the way we conceive of putting together information environments. Much like Morville and Rosenfeld in 1998, we’re “to some degree all still making it up as we go along.” I don’t pretend to know what a fully developed information architecture for the Internet of Things might look like, but in the spirit of exploration, I’d like to offer a few pointers that might help nudge us in the right direction — a topic I’ll tackle in my next post.

January 23 2014

Podcast: Solid, tech for humans, and maybe a field trip?

We were snowed in, but the phones still worked, so Jon Bruner, Mike Loukides, and I got together on the phone to have a chat. We start off talking about the results of the Solid call for proposals, but as is often the case, the conversation meandered from there.

Here are some of the links we mention in this episode:

You can subscribe to the O’Reilly Radar podcast through iTunes or SoundCloud, or directly through our podcast’s RSS feed.

January 08 2014

The emergence of the connected city

Photo: Millertime83

If the modern city is a symbol for randomness — even chaos — the city of the near future is shaping up along opposite metaphorical lines. The urban environment is evolving rapidly, and a model is emerging that is more efficient, more functional, more — connected, in a word.

This will affect how we work, commute, and spend our leisure time. It may well influence how we relate to one another, and how we think about the world. Certainly, our lives will be augmented: better public transportation systems, quicker responses from police and fire services, more efficient energy consumption. But there could also be dystopian impacts: dwindling privacy and imperiled personal data. We could even lose some of the ferment that makes large cities such compelling places to live; chaos is stressful, but it can also be stimulating.

It will come as no surprise that converging digital technologies are driving cities toward connectedness. When conjoined, ISM band transmitters, sensors, and smart phone apps form networks that can make cities pretty darn smart — and maybe more hygienic. This latter possibility, at least, is proposed by Samrat Saha of the DCI Marketing Group in Milwaukee. Saha suggests “crowdsourcing” municipal trash pick-up via BLE modules, proximity sensors and custom mobile device apps.

“My idea is a bit tongue in cheek, but I think it shows how we can gain real efficiencies in urban settings by gathering information and relaying it via the Cloud,” Saha says. “First, you deploy sensors in garbage cans. Each can provides a rough estimate of its fill level and communicates that to a BLE 112 Module.”

A Bluegiga BLE112 module.

As pedestrians who have downloaded custom “garbage can” apps on their BLE-capable iPhone or Android devices pass by, continues Saha, the information is collected from the module and relayed to a Cloud-hosted service for action — garbage pick-up for brimming cans, in other words. The process will also allow planners to optimize trash can placement, redeploying receptacles from areas where need is minimal to more garbage-rich environs.

“It should also allow greater efficiency in determining pick-up schedules,” said Saha. “For example, in some areas regular scheduled pick-ups may be best. But managers may find it’s also a good idea to put some trash collectors on a roving basis to service cans when they’re full. That could work well for areas where there’s a great deal of traffic and cans fill up quickly but unpredictably — and conversely, in low-traffic areas, where regular pick-up isn’t necessary. Both situations would benefit from rapid-response flexibility.”
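
The relay step Saha describes is straightforward to sketch. Here, a passer-by’s phone app forwards a can’s fill level to a cloud service; the endpoint URL and payload fields are hypothetical, and the BLE read itself is stubbed out.

```python
# Relay a garbage can's fill level from a phone app to a cloud service.
# The endpoint and payload are invented; the BLE read is a stub.
import time
import requests

CLOUD_ENDPOINT = "https://example.com/api/garbage-cans"  # hypothetical

def read_fill_level_from_ble(can_id: str) -> int:
    """Stub for the BLE read; a real app would query the BLE112 module."""
    return 87   # percent full, fake value

def relay_reading(can_id: str) -> None:
    payload = {
        "can_id": can_id,
        "fill_pct": read_fill_level_from_ble(can_id),
        "reported_at": int(time.time()),
    }
    response = requests.post(CLOUD_ENDPOINT, json=payload, timeout=5)
    response.raise_for_status()

relay_reading("can-042")
```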

Garbage can connectivity has larger implications than just, well, garbage. Brett Goldstein, the former Chief Data and Information Officer for the City of Chicago and a current lecturer at the University of Chicago, says city officials found clear patterns between damaged or missing garbage cans and rat problems.

“We found areas that showed an abnormal increase in missing or broken receptacles started getting rat outbreaks around seven days later,” Goldstein said. “That’s very valuable information. If you have sensors on enough garbage cans, you could get a temporal leading edge, allowing a response before there’s a problem. In urban planning, you want to emphasize prevention, not reaction.”
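
Goldstein’s “temporal leading edge” amounts to a lagged early-warning flag. A toy version, with invented data and thresholds, might look like this:

```python
# Flag a spike in broken or missing cans as an early warning for rat
# complaints roughly a week later. Data and thresholds are invented.
import pandas as pd

reports = pd.DataFrame({
    "date": pd.date_range("2014-01-01", periods=14, freq="D"),
    "broken_or_missing_cans": [2, 1, 2, 3, 2, 1, 2, 9, 11, 12, 3, 2, 2, 1],
})

baseline = reports["broken_or_missing_cans"].rolling(7, min_periods=7).mean()
# Flag days where the count jumps well above the trailing weekly average.
reports["early_warning"] = reports["broken_or_missing_cans"] > 2 * baseline.shift(1)

print(reports[reports["early_warning"]][["date", "broken_or_missing_cans"]])
```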

Such Cloud-based app-centric systems aren’t suited only for trash receptacles, of course. Companies such as Johnson Controls are now marketing apps for smart buildings — the base component for smart cities. (Johnson’s Metasys management system, for example, feeds data to its app-based Panoptix platform to maximize energy efficiency in buildings.) In short, instrumented cities are already emerging. Smart nodes — including augmented buildings, utilities, and public service systems — are establishing connections with one another, like axon-linked neurons.

But Goldstein, who was best known in Chicago for putting tremendous quantities of the city’s data online for public access, emphasizes instrumented cities are still in their infancy, and that their successful development will depend on how well we “parent” them.

“I hesitate to refer to ‘Big Data,’ because I think it’s a terribly overused term,” Goldstein said. “But the fact remains that we can now capture huge amounts of urban data. So, to me, the biggest challenge is transitioning the fields — merging public policy with computer science into functional networks.”

There are other obstacles to the development of the intelligent city, of course. Among them: how do you persuade enough people to download the apps needed to achieve a functional connected system? Indeed, the human element could prove the biggest fly in the ointment. We may resist quantifying ourselves to such a degree, even for the sake of our cities.

On the other hand, the connected city exists to serve people, not the other way around, observes Drew Conway, senior advisor to the New York City Mayor’s Office of Data Analytics and founder of the data community support group DataGotham. People ultimately act in their self-interest, and if the connected city brings boons, people will accept and support it. But attention must be paid to unintended consequences, emphasizes Conway.

“I never forget that humanity is behind all those bits of data I consume,” says Conway. “Who does the data serve, after all? Human beings decided why and where to put out those sensors, so data is inherently biased — and I always keep the human element in mind. And ultimately, we have to look at the true impacts of employing that data.”

As an example, continues Conway, “say the data tells you that an illegal conversion has created a fire hazard in a low-income residential building. You move the residents out, thus avoiding potential loss of life. But now you have poor people out on the street with no place to go. There has to be follow-through. When we talk of connections, we must ensure that some of those connections are between city services and social services.”

Like many technocrats, Conway also is concerned about possible threats to individual rights posed by data collected in the name of the commonwealth.

“One of New York’s most popular programs is expanding free public WiFi,” he says. “It’s a great initiative, and it has a lot of support. But what if an agency decided it wanted access to weblog data from high-crime areas? What are the implications for people not involved in any criminal activity? We haven’t done a good job of articulating where the lines should be, and we need to have that debate. Connected cities are the future, but I’d welcome informed skepticism on their development. I don’t think the real issue is the technical limitations — it’s the quid pro quo involved in getting the data and applying it to services. It’s about the trade-offs.”


For more on the convergence of software and hardware, check out our Solid Conference.

