February 12 2014

Searching for the software stack for the physical world

When I flip through a book on networking, one of the first things I look for is the protocol stack diagram. An elegant representation of the protocol stack can help you make sense of where to put things, separate out important mental concepts, and help explain how a technology is organized.

I’m no stranger to the idea of trying to diagram out a protocol space; I had a successful effort back when the second edition of my book on 802.11 was published. I’ve been a party to several awesome conversations recently about how to organize the broad space that’s referred to as the “Internet of Things.”

IoT Stack

Let’s start at the bottom, with the hardware layer, which is labeled Things. These are devices that aren’t typically thought of as computers. Sure, they wind up using a computing ecosystem, but they are not really general-purpose computers. These devices are embedded devices that interact with the physical world, such as sensors and motors. Primarily, they will either be the eyes and ears of the overall system, or the channel for actions on the physical world. They will be designed around low power consumption, and therefore will use a low throughput communication channel. If they communicate with a network, it will typically be through a radio interface, but the tyranny of limited power consumption means that the network interface will usually be limited in some way.

Things provide their information or are instructed to act on the world by connecting through a Network Transport layer. Networking allows reports from devices to be received and acted on. In this model, the network transport consists of a set of technologies that move data around, and is a combination of the OSI data link, networking, and transport layers. Mapping into technologies that we use, it would be TCP/IP on Wi-Fi for packet transport, with data carried over a protocol like REST.
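
To make that concrete, here is a minimal sketch of a Thing reporting a reading up the stack over TCP/IP and Wi-Fi using a REST-style HTTP call. The gateway URL, device ID, and payload fields are hypothetical stand-ins, and a real deployment would add authentication and retries.

    import time
    import requests  # plain HTTP client; the gateway endpoint below is hypothetical

    GATEWAY = "http://192.168.1.10:8080/api/readings"  # hypothetical data-layer endpoint

    def report_reading(sensor_id, value_c):
        # One reading flowing up the stack: Thing -> network transport -> data layer.
        payload = {
            "sensor": sensor_id,
            "type": "temperature",
            "value": value_c,
            "ts": time.time(),
        }
        resp = requests.post(GATEWAY, json=payload, timeout=5)
        resp.raise_for_status()

    report_reading("livingroom-temp-1", 21.4)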

The Data layer aggregates many devices into an overall collective picture. In the IoT world, users are interacting with the physical world and asking questions like “what is the temperature in this room?” That question isn’t answered by just one device (or if it is, the temperature of the room is likely to fluctuate wildly). Each component of the hardware layer at the bottom of the stack contributes a piece of the overall contextual picture. A light bulb can be on or off, but to determine the desired state, you might mix in overall power consumption of the building, how many people are present, the locations of those people, total demand on the electrical grid, and possibly even the preferences of the people in an area. (I once toured an FAA regional air traffic control center, and the groups of controllers that serve each sub-region of airspace could customize the lighting, ranging from normal lighting to quite dim lighting.)

The value of the base level of devices in this environment depends on how much unique context they can add to the overall picture, enabling software to operate on physical-world concepts like room temperature or whether I am asleep. Looking back on it, the foundation for the data layer was being laid years ago, as is obvious from reading section three of Tim O’Reilly’s “What is Web 2.0?” essay from 2005. In this world, you are either contributing to or working with context, or you are not doing very interesting work.

Sharing context widely requires APIs. If the real-world import of a piece of data is only apparent when it is combined with other sources of data, an application needs to be able to mash up several data sources to create its own unique picture of the world. APIs enable programmers to build context that represents what is important to users and build a cognitively significant aggregation. “Room temperature” may depend on getting data from temperature, humidity, and sunlight sensors, perhaps in several locations in a room.

In addition to reporting up, APIs need to enable control to flow down the stack. If “room temperature” is a complex state that depends on data from several sensors, it may require acting on several aspects of a climate control system to change: the heater, air conditioner, fan, and maybe even whether the blinds in a room are open or closed.
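
As a sketch of both directions through the API layer, the snippet below mashes a few hypothetical sensor endpoints into a single “room temperature” value and then fans a setpoint back out to several actuator endpoints. The URLs, payload shapes, and the crude averaging and threshold rules are illustrative assumptions, not a real API.

    import requests  # every endpoint and payload below is hypothetical

    SENSORS = {
        "temp_north": "http://192.168.1.21/reading",
        "temp_south": "http://192.168.1.22/reading",
    }
    ACTUATORS = {
        "heater": "http://192.168.1.31/command",
        "ac":     "http://192.168.1.32/command",
        "blinds": "http://192.168.1.33/command",
    }

    def room_temperature():
        # Up the stack: aggregate several raw readings into one meaningful value.
        temps = [requests.get(url, timeout=5).json()["value"] for url in SENSORS.values()]
        return sum(temps) / len(temps)

    def set_room_temperature(target_c):
        # Down the stack: one high-level intent fans out into device-level commands.
        current_c = room_temperature()
        if current_c < target_c - 0.5:
            commands = {"heater": {"state": "on"}, "ac": {"state": "off"}, "blinds": {"position": "closed"}}
        elif current_c > target_c + 0.5:
            commands = {"heater": {"state": "off"}, "ac": {"state": "on"}, "blinds": {"position": "closed"}}
        else:
            commands = {"heater": {"state": "off"}, "ac": {"state": "off"}, "blinds": {"position": "open"}}
        for device, body in commands.items():
            requests.post(ACTUATORS[device], json=body, timeout=5)

    set_room_temperature(21.0)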

Designing APIs is hard; in addition to getting the data operations and manipulation right, you need to design for performance, security, and to enable applications over time. A good place to start is O’Reilly’s API strategy guide.

Finally, we reach the top of the stack when we get to Applications. Early applications allowed you to control the physical world like a remote control. Nice, but by no means the end of the story. One of the smartest technology analysts I know often challenges me by saying, “It’s great that you can report anomalous behavior. If you know something is wrong, do something about it!” The “killer app” in this world is automation: being able to work without explicit instructions from the user, or to continuously fine-tune and optimize the real world based on what has flowed up the stack.

Unlike my previous effort in depicting the world of 802.11, this stack is very much a work in progress. Please leave your thoughts in the comments.

February 10 2014

More 1876 than 1995


Photo: Wikimedia Commons. Corliss engine, showing valvegear (New Catechism of the Steam Engine, 1904).

Philadelphia’s Centennial Exposition of 1876 was America’s first World’s Fair, and was ostensibly held to mark the nation’s 100th birthday. But it heralded the future as much as it celebrated the past, showcasing the country’s strongest suit: technology.

The centerpiece of the Expo was a gigantic Corliss engine, the apotheosis of 40 years of steam technology. Thirty percent more efficient than standard steam engines of the day, it powered virtually every industrial exhibit at the exposition via a maze of belts, pulleys, and shafts. Visitors were stunned that the gigantic apparatus was supervised by a single attendant, who spent much of his time reading newspapers.

“This exposition was attended by 10 million people at a time when travel was slow and difficult, and it changed the world,” observes Jim Stogdill, general manager of Radar at O’Reilly Media, and general manager of O’Reilly’s upcoming Internet-of-Things-related conference, Solid.

“Think of a farm boy from Kansas looking at that Corliss engine, seeing what it could do, thinking of what was possible,” Stogdill continues. “When he left the exposition, he was a different person. He understood what the technology he saw meant to his own work and life.”

The 1876 exposition didn’t mark the beginning of the Industrial Revolution, says Stogdill. Rather, it signaled its fruition, its point of critical mass. It was the nexus where everything — advanced steam technology, mass production, railroads, telegraphy — merged.

“It foreshadowed the near future, when the Industrial Revolution led to the rapid transformation of society, culturally as well as economically. More than 10,000 patents followed the exposition, and it accelerated the global adoption of the ‘American System of Manufacture.’ The world was never the same after that.”

In terms of the Internet of Things, we have reached that same point of critical mass. In fact, the present moment is more similar to 1876 than to more recent digital disruptions, Stogdill argues. “It’s not just the sheer physicality of this stuff,” he says. “It is also the breadth and speed of the change bearing down on us.”

While the Internet changed everything, says Stogdill, “its changes came in waves, with scientists and alpha geeks affected first, followed by the early adopters who clamored to try it. It wasn’t until the Internet was ubiquitous that every Kansas farm boy went online. That 1876 Kansas farm boy may not have foreseen every innovation the Industrial Revolution would bring, but he knew — whether he liked it or not — that his world was changing.”

As the Internet subsumes physical objects, the rate of change is accelerating, observes Stogdill. “Today, stable wireless platforms, standardized software interface components and cheap, widely available sensors have made the connection of virtually every device — from coffee pots to cars — not only possible; they have made it certain.”

“Internet of Things” is now widely used to describe this latest permutation of digital technology; indeed, “overused” may be a more apt description. It teeters on the knife-edge of cliché. “The term is clunky,” Stogdill acknowledges, “but the buzz for the underlying concepts is deserved.”

Stogdill is quick to point out that this “Internet of Everything” goes far beyond the development of new consumer products. Open source sensors and microcontroller libraries already are allowing the easy integration of software interfaces with everything from weather stations to locomotives. Large, complicated systems — water delivery infrastructure, power plants, sewage treatment plants, office buildings — will be made intelligent by these software and sensor packages, allowing real-time control and exquisitely efficient operation. Manufacturing has been made frictionless, development costs are plunging, and new manufacturing-as-a-service frameworks will create new business models and drive factory production costs down and production up.

“When the digital age began accelerating,” Stogdill explains, “Nicholas Negroponte observed that the world was moving from atoms to bits — that is, the high-value economic sectors were transforming from industrial production to aggregating information.”

“I see the Internet of Everything as the next step,” he says. “We won’t be moving back to atoms, but we’ll be combining atoms and bits, merging software and hardware. Data will literally grow physical appendages, and inform industrial production and public services in extremely powerful and efficient ways. Power plants will adjust production according to real-time demand, and traffic will smooth out as driverless cars become commonplace. We’ll be able to track air and water pollution to an unprecedented degree. Buildings will not only monitor their environmental conditions for maximum comfort and energy efficiency, they’ll be able to adjust energy consumption so it corresponds to electricity availability from sustainable sources.”

Stogdill believes these converging phenomena have put us on the cusp of a transformation as dramatic as the Industrial Revolution.

“Everyone will be affected by this collision of hardware and software, by the merging of the virtual and real,” he says. “It’s really a watershed moment in technology and culture. We’re at one of those tipping points of history again, where everything shifts to a different reality. That’s what the 1876 exposition was all about. It’s one thing to read about the exposition’s Corliss engine, but it would’ve been a wholly different experience to stand in that exhibit hall and see, feel, and hear its 1,400 horsepower at work, driving thousands of machines. It is that sensory experience that we intend to capture with Solid. When people look back in 150 years, we think they could well say, ‘This is when they got it. This is when they understood.’”


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter.

February 07 2014

Why Solid, why now

A few years ago at OSCON, one of the tutorials demonstrated how to click a virtual light switch in Second Life and have a real desk lamp light up in the room. Looking back, it was rather trivial, but it was striking at the time to see software people taking an interest in the “real world.” And what better metaphor for the collision of virtual and real than a connection between Second Life and the Portland Convention Center?

In December 2012, our Radar team was meeting in Sebastopol and we were talking about trends in robotics, Maker DIY, Internet of Things, wearables, smart grid, industrial Internet, advanced manufacturing, frictionless supply chain, etc. We were trying to figure out where to put our focus among all of these trends when suddenly it was obvious (at least to Mike Loukides, who pointed it out): they are all more alike than different, and we could focus on all of them by looking at the relationships among them. The Solid program was conceived that day.

We called it Solid because most of the people we work with are software people and we wanted to draw attention to the non-virtuality of all this stuff. This isn’t the stuff of some abstract “cyber” domain; these were all trends that lived at the nexus of software and very real physical hardware. A big part of this story is that hardware is getting less hard, and more like software in its malleability, but there remains a sheer physicality to this stuff that makes it different. Negroponte told us to forget atoms and focus on bits, and we did for 20 years. But now, the pendulum is swinging back and atoms matter again — except now they are atoms layered with bits, and that makes this different.

Software and hardware have been essentially separate mediums, wielded by the separate tribes of hackers/coders/programmers/software engineers and capital-E Engineers. Solid is about the beginning of something different: the collision of software and hardware into a new combined medium that will be wielded by people (or teams) who understand both — bits and atoms uniting in the same intellectual space. What should we call it? Fluidware? That won’t be it, but I’d like eventually to come up with a term that conveys that a combination of software and less hard hardware is something distinct.

Naturally, this impacts more than just the stuff that we produce from this new medium. It influences organizational models; the things engineers, designers, and hackers need to know to do their jobs; and the business models you can deliver with bit-enabled collections of atoms.

With Solid, we envision an annual program of engagement that will span various media as well as multiple in-person events. We’re going to do Solid:Local events, which we started in Cambridge, MA, on February 6, and we’re going to anchor the program with the Solid Conference in San Francisco this year in May that is going to be very different from the other conferences we do.

First off, the Solid Conference will be at Ft. Mason, a venue we chose for its beautiful views, and because it can host the kinds of industrial-scale things we’d like to demonstrate there. We still have a lot to do to bring this program together, but our guiding principle is to give our audience a visceral feel for this world of combined hardware and software. We’re going for something that feels less like a conference, and more like a miniature World’s Fair Exposition with talks.

I believe that we are on the cusp of something as dramatic as the Industrial Revolution. In that time, a series of industrial expositions drew millions of people to London, Paris, Philadelphia and elsewhere to experience firsthand the industrialization of their world.

Here in my home office outside of Philadelphia, I’m about 15 miles from the location of the 1876 Exposition that we’re using for inspiration. If you entered Machinery Hall during that long summer with its 1,400-horsepower Corliss engine powering belt-driven machinery throughout the vast hall, you couldn’t help but leave with an appreciation for how the world was changing. We won’t be building a Machinery Hall, certainly not at that scale, but we are building a program with as much show as tell. We want to viscerally demonstrate the principles that we see at work here.

We hope you can join us in San Francisco or at one of the many Solid:Locals we plan to put on. Share your stories about interesting intersections of hardware and software with us, too.

February 05 2014

Trope or fact? Technology creates more jobs than it destroys

Editor’s note: We’re trying something new here. I read this back-and-forth exchange and decided we should give it a try. Or, more accurately, since we’re already having plenty of back-and-forth email exchanges like that, we just need to start publishing them. My friend Doug Hill agreed to be a guinea pig and chat with me about a subject that’s on both of our minds and publish it here.

Stogdill: I got this tweet over the holidays while I was reading your book. I mean, I literally got distracted by this tweet while I was reading your book:

It felt like a natural moment of irony that I had to share with you. The author obviously has an optimistic point of view, and his claims seem non-obvious in light of our jobless recovery. I also recently read George Packer’s The Unwinding, and frankly it seems to better reflect the truth on the ground, at least if you get outside of the big five metro areas. But I suspect not a lot of techno optimists are spending time in places that won’t get 4G LTE for another year or two.

I’m not going to ask you what you think of the article because I think I already know the answer. I do have a few things on my mind, though. Is our jobless recovery a new structural reality brought about by more and more pervasive automation? Are Norbert Wiener’s predictions from the late 1940s finally coming true? Or is creative destruction still working, but just taking some time to adjust this time around? And, if one is skeptical of technology, is it like being skeptical of tectonics? You can’t change it, so bolt your house down?

Hill: Your timing is good. The day your email arrived the lead story in the news was the latest federal jobs report, which told us that the jobless “recovery” continues apace. Jobless, that is.

The national conversation about the impact of automation on employment continues apace, too. Thomas Friedman devoted his New York Times column a couple of days ago to The Second Machine Age, the new book by Erik Brynjolfsson and Andrew McAfee. They’re the professors from MIT whose previous book, Race Against the Machine, helped start that national conversation, in part because it demonstrated both an appreciation of automation’s advantages and an awareness that lots of workers could be left behind.

I’ll try to briefly answer your questions, Jim, and then note a couple points that strike me as somewhat incongruous in the current discussion.

Is our jobless economy a new structural reality brought about by more and more pervasive automation?

Yes. Traditional economic theory holds that advances in technology create jobs rather than eliminate them. Even if that maxim were true in the past (and it’s not universally accepted), many economists believe the pace of innovation in automation today is overturning it. This puts automation’s most fervent boosters in the odd position of arguing that technological advance will disrupt pretty much everything except traditional economic theory.

Are Norbert Wiener’s predictions from the late 1940s finally coming true? Or is creative destruction still working, but just taking some time to adjust this time around?

Yes to both. Wiener’s predictions that automation would be used to undermine labor are coming true, and creative destruction is still at work. The problem is that we won’t necessarily like what the destruction creates.

Now, about those incongruous points that bug me:

  1. First, a quibble over semantics. It’s convenient in our discussions about automation to use the word “robots,” but also misleading. Much, if not most, of the jobs displacement we’re seeing now is coming from systems and techniques that are facilitated by computers but less mechanical than the robots we typically envision assembling parts in factories. I don’t doubt that actual robots will be an ever-more-important force in the future, but they’ll be adding momentum to methods that corporations have been using for quite awhile now to increase productivity, even as they’re reducing payrolls.
  2. It’s commonly said that the answer to joblessness is education. Our employment problems will be solved by training people to do the sorts of jobs that the economy of the future will require. But wait a minute. If it’s true that the economy of the future will increasingly depend on automation, won’t we simply be educating people to do the sorts of jobs that eliminate more jobs?
  3. Techno optimists argue that our current employment problems are merely manifestations of a transition period on the way to a glorious future. “Let the robots take the jobs,” says Kevin Kelly, “and let them help us dream up new work that matters.”

    Even on his own terms, the future Kelly envisions seems more nightmarish than dreamlike. Everyone agrees automation is going to grow consistently more capable. As it does, Kelly says, robots will take over every job, including whatever new jobs we dream up to replace the previous jobs we lost to robots. If he’s right, we won’t be dreaming up new work that matters because we want to, but because we’ll have no choice. It will indeed be a race against the machines, and machines don’t get tired.

One more thing. You asked if being skeptical of technology is like being skeptical of tectonics. My first thought was to wonder whether anybody is really skeptical of tectonics, but given the polls on global warming, I guess anything is possible — more possible, I think, than a reversal of the robot revolution. So yeah, go ahead and bolt the house down.

Stogdill: Let me think where to start. My problem in this conversation is that I find myself arguing both sides of the question in my head, which makes it hard to present a coherent argument to you.

First, let me just say that I enter this discussion with some natural inclination toward a Schumpeterian point of view. In 1996, I visited a Ford electronics plant in Pennsylvania that was going through its own automation transformation. They had recently equipped the plant with then-new surface-mount soldering robots and redesigned the electronic modules that they produced there to take advantage of the tech. The remaining workers each tended two robots instead of placing parts on the boards themselves.

Except for this one guy. For some reason I’ve long forgotten, one of the boards they manufactured still required a single through-board capacitor, and a worker at that station placed capacitors in holes all day. Every 10 seconds for eight hours, a board would arrive in front of him; he would drop a capacitor’s legs through two little holes, push it over a bit to make sure it was all the way through, and then it was off to the next station to be soldered. It was like watching Lucy in the Chocolate Factory.

I was horrified — but when I talked to him, he was bound and determined to keep that job from being automated. I simply couldn’t understand it. Why wouldn’t he want to upgrade his skills and run one of the more complex machines? Even now, when I’m more sympathetic to his plight, I’m still mystified that he could stand to continue doing that job when something else might have been available.

Yet, these days I find myself losing patience with the reflexive trope “of course technology creates more jobs than it destroys; it always has. What are you, a Luddite?” The Earth will keep rotating around the sun, too; it always has, right up until the sun supernovas, and then it won’t.

Which isn’t to say that the robots are about to supernova, but that arguments that depend on the past perpetuating into the future are not arguments — they’re wishes. (See also, China’s economy will always keep growing at a torrid rate despite over-reliance on investment at the expense of consumption because it always has). And I just can’t really take anyone seriously who makes an argument like that if they can’t explain the mechanisms that will continue to make it true.

So, to my thinking, this boils down to a few key questions. Was that argument even really true in the past, at least the recent past? If it was, is the present enough like the past that we can assume that, with re-training, we’ll find work for the people being displaced by this round of automation? Or, is it possible that something structurally different is happening now? And, even if we still believe in the creative part of creative destruction, what destructive pace can our society absorb and are there policies that we should be enacting to manage and ease the transition?

This article does a nice job of explaining what I think might be different this time with its description of the “cognitive elite.” As automation takes the next layer of jobs at the current bottom, we humans are asked to do more and more complex stuff, higher up the value hierarchy. But what if we can’t? Or, if not enough of us can? What if it’s not a matter of just retraining — what if we’re just not talented enough? The result would surely be a supply/demand mismatch at the high end of the cognitive scale, and we’d expect a dumbbell shape to develop in our income distribution curve. Or, in other words, we’d expect new Stanford grads going to Google to make $100K and everyone else to work at Walmart. And more and more, that seems like it’s happening.

Anyway, right now I’m all question, no answer. Others are suggesting that this jobless recovery has nothing to do with automation. It’s the (lack of) unions, stupid. I really don’t know, but I think we — meaning we technologists and engineers — need to be willing to ask the question “is something different this go-round?” and not just roll out the old history-is-future tropes.

We’re trying to create that conversation at least a bit by holding an Oxford-style debate at our next Strata conference. The statement we’ll be debating is: “Technology creates more jobs than it destroys,” and I’ll be doing my best to moderate in an even-handed way.

By the way, your point that it’s “not just robots” is well taken. I was talking to someone recently who works in the business process automation space, and they’ve begun to refer to those processes as “robots,” too — even though they have no physical manifestation. I was using the term in that broad sense, too.

Hill: In your last email you made two points in passing that I’d like to agree with right off the bat.

One is your comment that, when it comes to predicting what impact automation will have on employment, you find yourself “arguing both sides of the question.” Technology always has and always will cut both ways, so we can be reasonably certain that, whatever happens, both sides of the question are going to come into play. That’s about the only certainty we have, really, which is why I also liked it when you said, “I’m all question, no answer.” That’s true of all of us, whether we admit it or not.

We are obligated, nonetheless, to take what Norbert Wiener called “the imaginative forward glance.” For what it’s worth then, my answer to your question, “Is something different this go-round?” — by which you meant, even if it was once true that technological advancement created more rather than fewer jobs, that may no longer be true, given the pace, scale, and scope of the advances in automation we’re witnessing today — is yes and no.

That is, yes, I do think the scope and scale of technological change we’re seeing today presents us with challenges of a different order of magnitude than what we’ve faced previously. At the same time, I think it’s also true that we can look to the past to gain some sense of where automation might be taking us in the future.

In the articles on this issue you and I have traded back and forth over the past several weeks, I notice that two of the most optimistic, as far as our automated future is concerned, ran in the Washington Post. I want to go on record as denying any suspicion that Jeff Bezos had anything to do with that.

Still, the most recent of those articles, James Bessen’s piece on the lessons to be learned from the experience of America’s first industrial-scale textile factories (“Will robots steal our jobs? The humble loom suggests not”) was so confidently upbeat that I’m sure Bezos would have approved. It may be useful, for that reason, to take a closer look at some of Bessen’s claims.

To hear him tell it, the early mills in Waltham and Lowell, Massachusetts, were 19th-century precursors of the cushy working conditions enjoyed in Silicon Valley today. The mill owners recruited educated, middle-class young women from surrounding farm communities and supplied them with places to live, houses of worship, a lecture hall, a library, a savings bank, and a hospital.

“Lowell marked a bold social experiment,” Bessen says, “for a society where, not so long before, the activity of young, unmarried women had been circumscribed by the Puritan establishment.”

The suggestion that the Lowell mills were somehow responsible for liberating young women from the clutches of Puritanism is questionable — the power of the Puritan church had been dissipating for all sorts of reasons for more than a century before factories appeared on the banks of the Merrimack — but let that go.

It is true that, in the beginning, the mills offered young women from middle-class families an unprecedented opportunity for a taste of freedom before they married and settled down. Because their parents were relatively secure financially, the women could afford to leave home temporarily without leaving their families destitute. That’s a long way from saying that the mills represented some beneficent “social experiment” in which management took a special interest in cultivating the well-being of the women they employed.

Thomas Dublin’s Women at Work: The Transformation of Work and Community in Lowell, Massachusetts, 1826-1860 tells a different story. Women were recruited to staff the mills, Dublin says, because they were an available source of labor (men were working the farms or employed in smaller-scale factories in the cities) and because they could be paid less than men. All supervisory positions in the mills were held by men. Also contrary to Bessen’s contention, the women weren’t hired because they were smart enough to learn specialized skills. Women tended the machines; they didn’t run them. “To the extent that jobs did not require special training, strength or endurance, or expose operatives to the risk of injury,” Dublin says, “women were employed.”

How much time they had to enjoy the amenities supposedly provided by management is another question. According to Dublin, mill workers put in 12 hours a day, six days a week, with only three regular holidays a year. As the number of mills increased, so did the pressure to make laborers more productive. Speedups and stretch-outs were imposed. A speedup meant the pace of the machinery was increased, a stretch-out meant that each employee was required to tend additional pieces of machinery. Periodic cuts in piece wages were another fact of mill life.

Because of their middle-class backgrounds, and because they were accustomed to pre-industrial standards of propriety, the first generation of women felt empowered enough to protest these conditions, to little avail. Management offered few concessions, and many women left. The generation of women who replaced them were less likely to protest. Most had fled the Irish famine and had no middle-class homes to return to.

I go into this in some detail, Jim, because it’s important to acknowledge what automation’s fundamental purpose has always been: to increase management profits. Bold social experiments to benefit workers haven’t figured prominently in the equation.

It’s true that factory jobs have, in the long run, raised the standard of living for millions of workers (the guy you met in the Ford electronics plant comes to mind), but we shouldn’t kid ourselves that they’ve necessarily been pleasant, fulfilling ways to make a living. Nor should we kid ourselves that management won’t use automation to eliminate jobs in the future, if automation offers opportunities to increase profits.

We also need to consider whether basing our economy on the production and sale of ever-higher piles of consumables, however they’re manufactured, is a model the planet can sustain any longer. That’s the essential dilemma we face, I think. We must have jobs, but they have to be directed toward some other purpose.

I realize I haven’t addressed, at least directly, any of the questions posed in your email. Sorry about that — the Bessen article got under my skin.

Stogdill: That Bessen article did get under your skin, didn’t it? Well, anger in the face of ill-considered certainty is reasonable as far as I’m concerned. Unearned certainty strikes me as the disease of our age.

Reading your response, I had a whole swirl of things running through my head. Starting with, “Does human dignity require meaningful employment?” I mean, separate from the economic considerations, what if we’re just wired to be happier when we grow or hunt for our own food? — and will the abstractions necessary to thrive in an automation economy satisfy those needs?

Also, with regard to your comments about how much stuff do we need — does that question even really matter? Is perpetual sustainability on a planet where the Second Law of Thermodynamics holds sway even possible? Anyway, that diversion can wait for another day.

Let me just close this exchange by focusing for just one moment on your point that productivity gains have always been about increasing management profits. Of course they have. I don’t think that has ever been in question. Productivity gains are where every increase in wealth ever has come from (except for that first moment when someone stumbled on something useful bubbling out of the ground), and profit is how we incent investment in productivity. The question is how widely gains will be shared.

Historically, that argument has been about the mechanisms (and politics) to appropriately distribute productivity gains between capital and labor. That was the fundamental argument of the 20th century, and we fought and died over it — and for maybe 30 years, reached maybe a reasonable answer.

But what if we are automating to a point where there will be no meaningful link between labor and capital? There will still be labor, of course, but it will be doing these abstract “high value” things that have nothing whatsoever to do with the bottom three layers of Maslow’s hierarchy. In a world without labor directly tied to capital and its productivity gains, can we expect the mechanisms of the 20th century to have any impact at all? Can we even imagine a mechanism that flows the value produced by robots to the humans they sidelined? Can unemployed people join a union?

We didn’t answer these questions, but thanks for exploring them with me.

Bluetooth Low Energy: what do we do with you?

“The best minds of my generation are thinking about how to make people click ads,” as Jeff Hammerbacher said. And it’s not just data analysts: it’s creeping into every aspect of technology, including hardware.

One of the more exciting developments of the past year is Bluetooth Low Energy (BLE). Unfortunately, the application that I’ve seen discussed most frequently is user tracking: devices in stores can use the BLE device in your cell phone to tell exactly where you’re standing, what you’re looking at, and target ads, offer you deals, send over salespeople, and so on.

Color me uninterested. I don’t really care about the spyware: if Needless Markup wants to know what I’m looking at, they can send someone out on the floor to look. But I am dismayed by the lack of imagination around what we can do with BLE.

If you look around, you’ll find some interesting hints about what we might be able to do with BLE:

  • The Tile app is a tag for a keychain that uses BLE to locate lost keys. In a conversation, O’Reilly author Matthew Gast suggested that you could extend the concept to develop a collar that would help to locate missing pets. That scenario requires some kind of accessibility to neighbors’ networks, but for a pie-in-the-sky discussion, it’s a good idea. (A minimal scan sketch follows this list.)
  • At ORDCamp last weekend, I was in a wide-ranging conversation about smart washing machines, wearable computing, and the like. Someone mentioned that clothes could have tags that would let the washer know when the settings were wrong, or when you had the colors mixed. This is something you could implement with BLE. Water wouldn’t be a problem since you could seal the entire device, including the battery; heat might be an issue. But again, we’re talking imagination, what might be possible. We can solve the engineering problems later.
  • One proposal for our Solid conference involved tagging garbage cans with BLE devices, to help automate garbage collection. I want to see the demo, particularly if it includes garbage trucks in a conference hall.
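
For a flavor of how simple the software side of the lost-keys scenario could be, here is a minimal BLE scan sketch using the bleak Python library. The advertised tag name is hypothetical, and a real product would do far more (background scanning, crowd-sourced sightings, and so on).

    import asyncio
    from bleak import BleakScanner  # cross-platform BLE scanning library

    TARGET_NAME = "TILE-1234"  # hypothetical advertised name of the tag on the keychain

    async def find_tag():
        # Listen for BLE advertisements for a few seconds and look for our tag.
        devices = await BleakScanner.discover(timeout=5.0)
        for device in devices:
            if device.name == TARGET_NAME:
                print(f"Found {device.name} at {device.address}")
                return device
        print("Tag not in range")
        return None

    asyncio.run(find_tag())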

Why the dearth of imagination in the press and among most of the people talking about the next generation of Bluetooth apps? Why can’t we get beyond different flavors of spyware? And when BLE is widely deployed, will we see devices that at least try to improve our quality of life, or will the best minds of our generation still be pitching ads?

Come to Solid to be part of that discussion. I’m sure our future will be full of spyware. But we can at least brainstorm other more interesting, more useful, more fun applications of low-power wireless computing.

February 04 2014

The Industrial Internet of Things


Photo: Kipp Bradford. This industrial burner from Weishaupt churns through 40 million BTUs per hour of fuel.

A few days ago, a company called Echelon caused a stir when it released a new product called IzoT. You may never have heard of Echelon; for most of us, they are merely a part of the invisible glue that connects modern life. But more than 100 million products — from street lights to gas pumps to HVAC systems — use Echelon technology for connectivity. So, for many electrical engineers, Echelon’s products are a big deal. Thus, when Echelon began touting IzoT as the future of the Industrial Internet of Things (IIoT), it was bound to get some attention.

Admittedly, the Internet of Things (IoT) is all the buzz right now. Echelon, like everyone else, is trying to capture some of that mindshare for their products. In this case, the product is a proprietary system of chips, protocols, and interfaces for enabling the IoT on industrial devices. But what struck me and my colleagues was how really outdated this approach seems, and how far it misses the point of the emerging IoT.

Although there are many different ways to describe the IoT, it is essentially a network of devices where all the devices:

  1. Have local intelligence
  2. Have a shared API so they can speak with each other in a useful way, even if they speak multiple protocols
  3. Push and pull status and command information from the networked world (a toy sketch of such a device follows this list)
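
As a toy illustration of those three properties, here is a minimal sketch of a thermostat-like device exposing them over HTTP, written with Flask. The routes, fields, and port are hypothetical illustrations, not any vendor’s actual API.

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    # Local intelligence: the device owns and maintains its own state.
    state = {"temperature_c": 21.5, "setpoint_c": 21.0}

    @app.route("/status")                      # pull: peers and controllers read status
    def status():
        return jsonify(state)

    @app.route("/command", methods=["POST"])   # push: peers and controllers send commands
    def command():
        body = request.get_json(force=True)
        if "setpoint_c" in body:
            state["setpoint_c"] = float(body["setpoint_c"])
        return jsonify(state)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)     # shared, network-visible API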

In the industrial context, rolling out a better networking chip is just a minor improvement on an unchanged 1980s practice that requires complex installations, including running miles of wire inside walls. This IzoT would have been a breakthrough product back when I got my first Sony Walkman.

This isn’t a problem confined to Echelon. It’s a problem shared by many industries. I just spent several days wandering the floor of the International Air Conditioning, Heating, and Refrigeration (AHR) trade show, a show about as far from CES as you can get: truck-sized boilers and cooling towers that look unchanged from their 19th-century origins dotted the floor. For most of the developed world, HVAC is where the lion’s share of our energy goes. (That alone is reason enough to care about it.) It’s also a treasure trove of data (think Nest), and a great place for all the obvious, sensible IoT applications like monitoring, control, and network intelligence. But after looking around at IoT products at AHR, it was clear that they came from Bizarro World.[1]


Photo: Kipp Bradford. Copeland alone has sold more than 100 million compressors. That’s a lot of “things” to get on the Internet.

I spoke with an engineer showing off one of the HVAC industry-leading low-power wireless networking technologies for data centers. He told me his company’s new wireless sensor network system runs at 2.6 GHz, not 2.4 GHz (though the data sheet doesn’t confirm or deny that). Looking at the products that won industry innovation awards, I was especially depressed. Take, for example, this variable-speed compressor technology. When you put variable-speed motors into an air-conditioning compressor, you get a 25–35% efficiency boost. That’s a lot of energy saved! And it looks just like variable-speed technology that has been around for decades — just not in the HVAC world.

Now here’s where things get interesting: variable-speed control demands a smart processor controlling the compressor motor, plus the intelligent sensors to tell the motor when to vary its speed. With all that intelligence, I should be able to connect it to my network. So, when will my air conditioner talk to my Nest so I can optimize my energy consumption? Never. Or at least not until industry standards and government regulations overlap just right and force them to talk to each other, just as recent regulatory conditions forced 40-year-old variable-motor control technology into 100-year-old compressor technology.

Of course, like any inquisitive engineer, I had to ask the manufacturer of my high-efficiency boiler if it was possible to hook that up to my Nest to analyze performance. After he said, “Sure, try hooking up the thermostat wire,” and made fun of me for a few minutes, the engineer said, “Yeah, if you can build your own modbus-to-wifi bridge, you can access our modbus interface and get your boiler online. But do you really want some Russian hacker controlling your house heat?” It would be so easy for my Nest thermostat to have a useful conversation with my boiler beyond “on/off,” but it doesn’t.
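
For the curious, the bridge the engineer is describing is not exotic. Here is a minimal sketch of reading one value from a boiler’s Modbus interface through a Modbus-TCP bridge using the pymodbus library; the bridge address, register number, and scaling are hypothetical, and, as he points out, you would want real security in front of anything like this.

    from pymodbus.client import ModbusTcpClient  # pymodbus 3.x import path

    BRIDGE_HOST = "192.168.1.50"   # hypothetical modbus-to-wifi bridge on the home network
    SUPPLY_TEMP_REG = 0x0010       # hypothetical holding register: supply water temperature

    client = ModbusTcpClient(BRIDGE_HOST, port=502)
    if client.connect():
        result = client.read_holding_registers(SUPPLY_TEMP_REG, count=1)
        if not result.isError():
            # Hypothetical scaling: the register holds tenths of a degree Celsius.
            print("Supply temperature: %.1f C" % (result.registers[0] / 10.0))
        client.close()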

Don’t get me wrong. I really am sympathetic to engineering within the constraints of industrial (versus consumer) environments, having spent a period of my life designing toys and another period making medical and industrial products. I’ve had to use my engineering skills to make things as dissimilar as Elmo and dental implants. But constraints are no excuse for trying to patch up, or hide behind, outdated technology.

The electrical engineers designing IoT devices for consumers have created exciting and transformative technologies like Z-Wave, ZigBee, BLE, Wi-Fi, and more, giving consumer devices robust and nearly transparent connectivity with increasingly easy installation. Engineers in the industrial world seem to be stuck making small technological tweaks that might enhance safety, reliability, robustness, and NSA-proof security of device networks. This represents an unfortunate increase in the bifurcation of the Internet of Things. It also represents a huge opportunity for those who refuse to submit to the notion that the IoT is either consumer or industrial, but never both.

For HVAC, innovation in 2014 means solving problems from 1983 with 1984 technology because the government told them so. The general attitude can be summed up as: “We have all the market share and high barriers to entry, so we don’t care about your problems.” Left alone, this industry (like so many others) will keep building walls that prevent us from having real control over our devices and our data. That directly contradicts a key goal of the IoT: connecting our smart devices together.

And that’s where the opportunity lies. There is significant value in HVAC and similar industries for any company that can break down the cultural and technical barriers between the Internet of Things and the Industrial Internet of Things. Companies that recognize the new business models created by well-designed, smart, interconnected devices will be handsomely rewarded. When my Nest can talk to my boiler, air conditioner, and Hue lights, all while analyzing performance versus weather data, third parties could sell comfort contracts, efficiency contracts, or grid stabilization contracts.

As a matter of fact, we are already seeing a $16 billion market for grid stabilization services opened up by smart, connected heating devices. It’s easy to envision a future where the electric company pays me during peak load times because my variable-speed air conditioner slows down, my Hue lights dim imperceptibly, and my clothes dryer pauses, all reducing my grid load — or my heater charges up a bank of thermal storage bricks in advance of a cold front before I return home from work. Perhaps I can finally quantify the energy savings of the efficiency improvements that I make.

My Sony Walkman already went the way of the dodo, but there’s still a chance to blow open closed industries like HVAC and bridge the IoT and the IIoT before my Beats go out of style.


[1] I’m not sorry for the Super Friends reference.

January 31 2014

Drone on

Jeff Bezos’ recent demonstration of a drone aircraft simulating delivery of an Amazon parcel was more stunt than technological breakthrough. We aren’t there yet. Yes, such things may well come to pass, but there are obstacles aplenty to overcome — not so much engineering snags, but cultural and regulatory issues.

The first widely publicized application of modern drone aircraft — dropping Hellfire missiles on suspected terrorists — greatly skewed perceptions of the technology. On the one hand, the sophistication of such unmanned systems generated admiration from technophiles (and also average citizens who saw them as valuable adjuncts in the war against terrorism). On the other, the significant civilian casualties that were collateral to some strikes have engendered outrage. Further, the fear that drones could be used for domestic spying has ratcheted up our paranoia, particularly in the wake of Edward Snowden’s revelations of National Security Agency overreach.

It’s as though drones have been painted with a giant red “D,” à la The Scarlet Letter. Manufacturers of the aircraft, not surprisingly, are desperate to rehabilitate the technology’s image, and properly so. Drones have too many exciting — even essential — civilian applications to restrict their use to war.

That’s why drone proponents don’t want us to call them “drones.” They’d prefer we use the periphrastic term, “Unmanned Aerial Vehicle,” or at least the acronym — UAV. Or depending on whom you talk to, Unmanned Vehicle System (UVS).

Will either catch on? Probably not. Think back: do we call it Star Wars or the Strategic Defense Initiative? Obamacare, or the Affordable Care Act? Popular culture has a way of reducing bombastic rubrics to pithy, evocative phrases. Chances are pretty good “drones” it is, and “drones” it’ll stay.

And that’s not so bad, says Brendan Schulman, an attorney with the New York office of Kramer Levin Naftalis & Frankel.

“People need to use terminology they understand,” says Schulman, who serves as the firm’s e-discovery counsel and specializes in commercial unmanned aviation law. “And they understand ‘drones.’ I know that causes some discomfort to manufacturers and people who use drones for commercial purposes, but I have no trouble with the term. I think we can live with it.”

Michael Toscano, the president and CEO of the Association for Unmanned Vehicle Systems International, feels differently: not surprisingly, he favors the acronym “UVS” over “drone.”

“The term ‘drones’ evokes a hostile, military device,” says Toscano. “That isn’t what the broad applications of this technology are about. Commercial unmanned aircraft have a wide range of uses, from monitoring pipelines and power lines, to wildlife censuses, to agriculture. In Japan, they’ve been used for years for pesticide applications on crops. They’re far superior to manned aircraft for that — the Japanese government even partnered with Yamaha to develop a craft specifically designed for the job.”

Nomenclature aside, drones are stuck in a time warp here in the United States. The Federal Aviation Administration (FAA) prohibits their general use pending development of final regulations; such rules have been in the works for years but have yet to be released. Still, the FAA does allow some variances on the ban: government agencies and universities, after thrashing through a maze of red tape, can obtain waivers for specific tasks. And for the most part, field researchers who have hands-on-joystick experience with the aircraft are wildly enthusiastic about them.

“They give us superb data,” says Michael Hutt, the Unmanned Aircraft Systems Manager for the U.S. Geological Survey (USGS). USGS is currently using drones — specifically the fixed-wing RQ-11A Raven and the hovering RQ-16A T-Hawk — for everything from surveying abandoned mine sites to monitoring coal seam fires, estimating waterfowl populations, and evaluating pygmy rabbit habitat.

“Landsat [satellites] have been the workhorse for most of our projects,” continues Hutt, “and they’re pretty good as far as they go, but they have definite limitations. You’re able to take images only when a satellite passes overhead, obviously, and you’re limited to what you can see by the weather. Also, your resolution is only 30 meters. With the Raven or Hawk, we can fly anytime we want. And because we fly low and slow, our data is collected in resolutions of centimeters, not meters. It’s just a level of detail we’ve never had before.”

And highly detailed data, of course, translates as highly accurate and reliable data. Hutt offers an example: a project to map fossil beds at White Sands National Monument.

“The problem is that these beds aren’t static,” says Hutt. “They tend to erode, or they can get covered up with blowing sand. It has been difficult to tell just what we have, and where the fossils are precisely. But with our T-Hawks, we’re able to get down to 2.5-centimeter resolution. We know exactly where the fossils are — it doesn’t matter if a sand dune forms on top of them.”

Even with current FAA restrictions, Hutt notes, universities are crowding into the drone zone.

“Not long ago, maybe five or six universities were using them, or had applied for permits,” Hutt says. “Now, there’s close to 200. The gathering of high-quality data is becoming easy and cheap. That’s because of aircraft advances, of course, and also because the payload packages — the sensors and cameras — are dropping dramatically in price. A few years ago, you needed a $500,000 mapping camera to obtain the information we’re now getting with an off-the-shelf $1,000 camera.”

Unmanned aircraft may be easier to fly than F-22 fighter jets, but they impose their own challenges, observes Jeff Sloan, a top drone jock for USGS.

“Dealing with wind is the major problem,” says Sloan. “The aircraft we’re using are really quite small, and they get pushed around easily. Also, though they can get into tighter spots than any manned aircraft, steep terrain can get you pretty tense. You’re flying very low, and there’s not much room for error.”

If and when the FAA gives the green light for broad commercial drone applications, a large cohort of potential pilots with the requisite skill sets will be ready and waiting.

“We’ve found that the best pilots are those who’ve had a lot of video game experience,” laughs Sloan, “so it’s good to know my kids have a future.”

But back to the FAA: a primary reason for the rules delay is public worry over safety and privacy. Toscano says safety concerns are mostly just that — foreboding more than real risk.

“It’s going to take a while before you see general risk tolerance, but it’ll come,” Toscano said. “The Internet started with real apprehension, and now, of course, there’s general acceptance. There are risks with the Internet, but they’re widely understood now, and people realize there’s a trade-off for the benefits.”

System vulnerability is tops among safety concerns, acknowledges Toscano. Some folks worry that a drone’s wireless links could be severed or jammed, and that the craft would start noodling around independently, with potentially dire results.

“That probably could be addressed by programming the drone with a pre-determined landing spot,” says Toscano. “It could default to a safe zone if something went wrong.”

And what about privacy? Drones, says Toscano, don’t pose a greater threat to privacy than any other extant technology with potential eavesdropping applications.

“The law is the critical factor here,” he insists. “The Fourth Amendment supports the expectation of privacy. If that’s violated by any means, it’s a crime, regardless of the technology responsible.”

More to the point, says Toscano, we have to give unmanned aircraft a little time and space to develop: we’re still at Drones 1.0.

“With any new technology, it’s the same story: crawl, walk, run,” Toscano says. “What Bezos did was show what UVS technology would look like at a full run — but it’s important to remember, we’re really still crawling.”

To get drones up on their hind legs and sprinting — or soaring — they’re going to need a helping hand from the FAA. And things aren’t going swimmingly there, says attorney Schulman.

“They recently issued a road map on the probable form of the new rules,” Schulman says, “and frankly, it’s discouraging. It suggests the regulations are going to be pretty burdensome, particularly for the smaller UAVs, which are the kind best suited for civilian uses. If that’s how things work out, it will have a major chilling effect on the industry.”

January 29 2014

Four short links: 29 January 2014

  1. Bounce Explorer — throwable sensor (video, CO2, etc) for first responders.
  2. Sintering Patent Expires Today — key patent expires, though there are others in the field. Sintering is where the printer fuses powder with a laser, which produces smooth surfaces and works for ceramics and other materials beyond plastic. Hope is that sintering printers will see same massive growth that FDM (current tech) printers saw after the FDM patent expired 5 years ago.
  3. Internet is the Greatest Legal Facilitator of Inequality in Human History (The Atlantic) — hyperbole aside, this piece does a good job of outlining “why they hate us” and what the systemic challenges are.
  4. First Carbon Fiber 3D Printer Announced — $5000 price tag. Nice!

January 27 2014

The new bot on the block

Fukushima changed robotics. More precisely, it changed the way the Japanese view robotics. And given the historic preeminence of the Japanese in robotic technology, that shift is resonating through the entire sector.

Before the catastrophic earthquake and tsunami of 2011, the Japanese were focused on “companion” robots, says Rodney Brooks, a former Panasonic Professor of Robotics at MIT, a co-founder and former CTO of iRobot, and the founder, chairman, and CTO of Rethink Robotics. The goal, says Brooks, was making robots that were analogues of human beings — constructs that could engage with people on a meaningful, emotional level. Cuteness was emphasized: a cybernetic, if much smarter, equivalent of Hello Kitty seemed the paradigm.

But the multiple core meltdown at the Fukushima Daiichi nuclear complex following the 2011 tsunami changed that focus abruptly.

“Fukushima was a wake-up call for them,” says Brooks. “They needed robots that could do real work in highly radioactive environments, and instead they had robots that were focused on singing and dancing. I was with iRobot then, and they asked us for some help. They realized they needed to make a shift, and now they’re focusing on more pragmatic designs.”

Pragmatism was always the guiding principle for Brooks and his companies, and is currently manifest in Baxter, Rethink’s flagship product. Baxter is a breakthrough production robot for a number of reasons. Equipped with two articulated arms, it can perform a multitude of tasks. It requires no application code to start up, and no expensive software to function. No specialists are required to program it; workers with minimal technical background can “teach” the robot right on the production line through a graphical user interface and arm manipulation. Also, Baxter requires no cage — human laborers can work safely alongside it on the assembly line.

Moreover, it is cheap: about $25,000 per unit. It is thus the robotic equivalent of the Model T, and like the Model T, Baxter and its subsequent iterations will impose sweeping changes in the way people live and work.

“We’re at the point with production robots where we were with mobile robots in the late 1980s and early 1990s,” says Brooks. “The advances are accelerating dramatically.”

What’s the biggest selling point for this new breed of robot? Brooks sums it up in a single word: dignity.

“The era of cheap labor for factory line work is coming to a close, and that’s a good thing,” he says. “It’s grueling, and it can be dangerous. It strips people of their sense of worth. China is moving beyond the human factory line — as people there become more prosperous and educated, they aspire to more meaningful work. Robots like Baxter will take up the slack out of necessity.”

And not just for the assemblage of widgets and gizmos. Baxter-like robots will become essential in the health sector, opines Brooks — particularly in elder care. As the Baby Boom piglet continues its course through the demographic python, the need for attendants is outstripping supply. No wonder: the work is low-paid and demanding. Robots can fill this breach, says Brooks, doing everything from preparing and delivering meals to shuttling laundry, changing bedpans and mopping floors.

“Again, the basic issue is dignity,” Brooks said. “Robots can free people from the more menial and onerous aspects of elder care, and they can deliver an extremely high level of service, providing better quality of life for seniors.”

Ultimately, robots could be more app than hardware: the sexy operating system on Joaquin Phoenix’s mobile device in the recent film “Her” may not be far off the mark. Basically, you’ll carry a “robot app” on your smartphone. The phone can be docked to a compatible mechanism — say, a lawn mower, or car, or humanoid mannequin — resulting in an autonomous device ready to trim your greensward, chauffeur you to the opera, or mix your Mojitos.

YDreams Robotics, a company co-founded by Brooks protégé Artur Arsenio, is actively pursuing this line of research.

“It’s just a very efficient way of marketing robots to mass consumers,” says Arsenio. “Smartphones basically have everything you need, including cameras and sensors, to turn mere things into robots.”

YDreams has its first product coming out in April: a lamp. It’s a very fine if utterly unaware desk lamp on its own, says Arsenio, but when you connect it to a smartphone loaded with the requisite app, it can do everything from intelligently adjusting lighting to gauging your emotional state.

“It uses its sensors to interface socially,” Arsenio says. “It can determine how you feel by your facial expressions and voice. In a video conference, it can tell you how other participants are feeling. Or if it senses you’re sad, it may Facebook your girlfriend that you need cheering up.”

Yikes. That may be a bit more interaction than you want from a desk lamp, but get used to it. Robots could intrude in ways that may seem a little off-putting at first — but that’s a marker of any new technology. Moreover, says Paul Saffo, a consulting professor at Stanford’s School of Engineering and a technology forecaster of repute, the highest use of robots won’t be doing old things better. It will be doing new things, things that haven’t been done before, things that weren’t possible before the development of key technology.

“Whenever we have new tech, we invariably try to use it to do old things in a new way — like paving cow paths,” says Saffo. “But the sooner we get over that — the sooner we look beyond the cow paths — the better off we’ll be. Right now, a lot of the thinking is, ‘Let’s have robots drive our cars, and look like people, and be physical objects.’”

But the most important robots working today don’t have physical embodiments, says Saffo — think of them as ether-bots, if you will. Your credit application? It’s a disembodied robot that gets first crack at that. And the same goes for your resume when you apply for a job.

In short, robots already are embedded in our lives in ways we don’t think of as “robotic.” This trend will only accelerate. At a certain point, things may start feeling a little — well Singularity-ish. Not to worry — it’s highly unlikely Skynet will rain nuclear missiles down on us anytime soon. But the melding of robotic technology with dumb things nevertheless presents some profound challenges — mainly because robots and humans react on disparate time scales.

“The real questions now are authority and accountability,” says Saffo. “In other words, we have to figure out how to balance the autonomy systems need to function with the control we need to ensure safety.”

Saffo cites modern passenger planes like the Airbus A330 as an example.

“Essentially they’re flying robots,” he says. “And they fly beautifully, conserving fuel to the optimal degree and so forth. But the design limits are so tight — if they go too fast, they can fall apart; if they go too slow, they stall. And when something goes wrong, the pilot has perhaps 50 kilometers to respond. At typical speeds, that doesn’t add up to much reaction time.”

Saffo notes that the crash of Air France Flight 447 in the mid-Atlantic in 2009 involved an Airbus A330. Investigations revealed the likely cause was turbulence complicated by the icing up of the plane’s speed sensors. This caused the autopilot to disengage, and the plane began to roll. The pilots had insufficient time to compensate, and the aircraft slammed into the water at 107 knots.

“The pilot figured out what was wrong — but it was 20 seconds too late,” says Saffo. “To me, it shows we need to devote real effort to defining boundary parameters on autonomous systems. We have to communicate with our robots better. Ideally, we want a human being constantly monitoring the system, so he or she can intervene when necessary. And we need to establish parameters that make intervention even possible.”
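What “defining boundary parameters” might look like in software is sketched below: an envelope monitor that warns a human operator while there is still margin to act, rather than only once a limit is breached. The speed limits and the 10% warning margin are illustrative assumptions, not figures from any real aircraft.

```python
# An illustrative sketch of boundary parameters on an autonomous system:
# check the current state against a safe envelope and escalate to a human
# operator before the limits are actually reached. Numbers are made up.
from dataclasses import dataclass

@dataclass
class Envelope:
    min_speed_kts: float = 240.0   # low-speed (stall-side) limit, illustrative
    max_speed_kts: float = 330.0   # high-speed (structural) limit, illustrative
    warn_fraction: float = 0.1     # warn when within 10% of either limit

def check(speed_kts: float, env: Envelope) -> str:
    span = env.max_speed_kts - env.min_speed_kts
    if speed_kts <= env.min_speed_kts or speed_kts >= env.max_speed_kts:
        return "LIMIT EXCEEDED: hand control to a human now"
    if (speed_kts - env.min_speed_kts) < env.warn_fraction * span:
        return "WARNING: approaching low-speed limit, request human attention"
    if (env.max_speed_kts - speed_kts) < env.warn_fraction * span:
        return "WARNING: approaching high-speed limit, request human attention"
    return "OK"

print(check(247.0, Envelope()))   # -> WARNING: approaching low-speed limit...
```

The design choice Saffo is pointing at is exactly the warning band: the system has to surface trouble while a human still has time to intervene, not at the moment intervention becomes impossible.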

Rod Brooks will be speaking at the upcoming Solid Conference in May. If you are interested in robotics and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter.

January 24 2014

The lingering seduction of the page

In an earlier post in this series, I examined the articulatory relationship between information architecture and user interface design, and argued that the tools that have emerged for constructing information architectures on the web will only get us so far when it comes to expressing information systems across diverse digital touchpoints. Here, I want to look more closely at these traditional web IA tools in order to tease out two things: (1) ways we might rely on these tools moving forward, and (2) ways we’ll need to expand our approach to IA as we design for the Internet of Things.

First stop: the library

The seminal text for Information Architecture as it is practiced in the design of online information environments is Peter Morville’s and Louis Rosenfeld’s Information Architecture for the World Wide Web, affectionately known as “The Polar Bear Book.”

First published in 1998, The Polar Bear Book gave a name and a clear, effective methodology to a set of practices many designers and developers working on the web had already begun to encounter. Morville and Rosenfeld are both trained as professional librarians and were able to draw on this time-tested field in order to sort through many of the new information challenges coming out of the rapidly expanding web.

If we look at IA as two faces of the same coin, The Polar Bear Book focuses on the largely top-down “Internet Librarian” side of information design. The other side of the coin approaches the problems posed by data from the bottom up. In Everything is Miscellaneous: The Power of the New Digital Disorder, David Weinberger argues that the fundamental problem of the “second order” (think “card catalogue”) organization typical of library sciences-informed approaches is that they fail to recognize the key differentiator of digital information: that it can exist in multiple locations at once, without any single location being the “home” position. Weinberger argues that in the “third order” of digital information practices, “understanding is metaknowledge.” For Weinberger, “we understand something when we see how the pieces fit together.”

Successful approaches to organizing electronic data generally make liberal use of both top-down and bottom-up design tactics. Primary navigation (driven by top-down thinking) gives us a bird’s-eye view of the major categories on a website, allowing us to quickly focus on content related to politics, business, entertainment, technology, etc. The “You May Also Like” and “Related Stories” links come from work in the metadata-driven bottom-up space.
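As a concrete illustration of that bottom-up tactic, here is a small sketch of how “Related Stories”-style links can be derived purely from shared metadata, with no top-down category tree involved. The articles and tags are invented for the example.

```python
# A minimal sketch of metadata-driven "related content": rank other articles
# by how many descriptive tags they share with the one being viewed.
# The article slugs and tags below are invented for illustration.
articles = {
    "smart-grid-rollout":  {"energy", "policy", "sensors"},
    "wearables-roundup":   {"hardware", "sensors", "health"},
    "iot-privacy-debate":  {"policy", "privacy", "sensors"},
}

def related(slug: str, catalog: dict, top_n: int = 2):
    tags = catalog[slug]
    scored = [
        (len(tags & other_tags), other)       # score = number of shared tags
        for other, other_tags in catalog.items()
        if other != slug
    ]
    scored.sort(reverse=True)
    return [name for score, name in scored[:top_n] if score > 0]

print(related("smart-grid-rollout", articles))
# -> ['iot-privacy-debate', 'wearables-roundup']
```

Nothing in that lookup knows or cares where the articles sit in a site hierarchy; the connections emerge entirely from the metadata attached to each piece.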

On the web, this textually mediated blend of top-down and bottom-up is usually pretty successful. This is no surprise: the web is, after all, primarily a textual medium. At its core, HTML is a language for marking up text-based documents. It makes them interpretable by machines (browsers) so they can be served up for human consumption. We’ve accommodated images and sounds in this information ecology by marking them up with tags (either by professional indexers or “folksonomically,” by whoever cares to pitch in).

There’s an important point here that often goes without saying: the IA we’ve inherited from the web is textual — it is based on the perception of the world mediated through the technology of writing. Herein lies the limitation of the IA we know from the web as we begin to design for the Internet of Things.

Reading brains

We don’t often think of writing as “technology,” but inasmuch as technology constitutes the explicit modification of techniques and practices in order to solve a problem, writing definitely fits the bill. Language centers can be pinpointed in the brain — these are areas programmed into our genes that allow us to generate spoken language — but in order to read and write, our brains must create new connections not accounted for in our genetic makeup.

In Proust and the Squid, cognitive neuroscientist Maryanne Wolf describes the physical, neurological difference between a reading and writing brain and a pre-literate linguistic brain. Wolf writes that, with the invention of reading “we rearranged the very organization of our brain.” Whereas we learn to speak by being immersed in language, learning to read and write is a painstaking process of instruction, practice, and feedback. Though the two acts are linked by a common practice of language, writing involves a different cognitive process than speaking. It is one that relies on the technology of the written word. This technology is not transmitted through our genes; it is transmitted through our culture.

It is important to understand that writing is not simply a translation of speech. This distinction matters because it has profound consequences. Wolf writes that “the new circuits and pathways that the brain fashions in order to read become the foundation for being able to think in different, innovative ways.” As the ability to read becomes widespread, this new capacity for thinking differently, too, becomes widespread.

Though writing constitutes a major leap past speech in terms of cognitive process, it shares one very important common trait with spoken language: linearity. Writing, like speech, follows a syntagmatic structure in which meaning is constituted by the flow of elements in order — and in which alternate orders often convey alternate meanings.

When it comes to the design of information environments, this linearity is generally a foregone conclusion, a feature of the cognitive landscape which “goes without saying” and is therefore never called into question. Indeed, when we’re dealing primarily with text or text-based media, there is no need to call it into question.

In the case of embodied experience in physical space, however, we natively bring to bear a perceptual apparatus which goes well beyond the linear confines of written and spoken language. When we evaluate an event in our physical environment — a room, a person, a meaningful glance — we do so with a system of perception orders of magnitude more sophisticated than linear narrative. JJ Gibson describes this as the perceptual awareness resulting from a “flowing array of stimulation.” When we layer on top of that the non-linear nature of dynamic systems, it quickly becomes apparent that despite the immense gains in cognition brought about by the technology of writing, these advances still only partially equip us to adequately navigate immersive, physical connected environments.

The trouble with systems (and why they’re worth it)

Photo: Andy Fitzgerald, of content from Thinking in Systems: A Primer, by Donella Meadows.

I have written elsewhere in more detail about challenges posed to linguistic thinkers by systems. To put all of that in a nutshell, complex systems baffle us because we have a limited capacity to track system-influencing inputs and outputs and system-changing flows. As systems thinking pioneer Donella Meadows characterizes them in her book Thinking in Systems: A Primer, self-organizing, nonlinear, feedback systems are “inherently unpredictable” and “understandable only in the most general way.”

According to Meadows, we learn to navigate systems by constructing models that approximate a simplified representation of the system’s operation and allow us to navigate it with more or less success. As more and more of our world — our information, our social networks, our devices, and our interactions with all of these — becomes connected, our systems become increasingly difficult (and undesirable) to compartmentalize. They also become less intrinsically reliant on linear textual mediation: our “smart” devices don’t need to translate their messages to each other into English (or French or Japanese) in order to interact.

This is both the great challenge and the great potential of the Internet of Things. We’re beginning to interact with our built information environments not only in a classically signified, textual way, but also in a physical-being-operating-in-the-world kind of way. The text remains — and the need to interact with that textual facet with the tools we’ve honed on the web (i.e. traditional IA) remains. But as the information environments we’re tasked with representing become less textual and more embodied, the tools we use to represent them must likewise evolve beyond our current text-based solutions.

Fumbling toward system literacy

In order to rise to meet this new context, we’re going to need as many semiotic paths as we can find — or create. And in order to do that, we will have to pay close attention to the cognitive support structures that normally “go without saying” in our conceptual models.

This will be hard work. The payoff, however, is potentially revolutionary. The threshold at which we find ourselves is not merely another incremental step in technological advancement. The immersion in dynamic systems that the connected environment foreshadows holds the potential to re-shape the way we think — the way our brains are “wired” — much as reading and writing did. Though mediated by human-made, culturally transmitted technology (e.g. moveable type, or, in this case, Internet protocols), these changes hold the power to affect our core cognitive process, our very capacity to think.

What this kind of “system literacy” might look like is as puzzling to me now as reading and writing must have appeared to pre-literate societies. The potential of being able to grasp how our world is connected in its entirety — people, social systems, economies, trade, climate, mobility, marginalization — is both mesmerizing and terrifying. Mesmerizing because it seems almost magical; terrifying because it hints at how unsophisticated and parochial our current perspective must look from such a vantage point.

As information architects and interface designers, all of this means that we’re going to have to be nimble and creative in the way we approach design for these environments. We’re going to have to cast out beyond the tools and techniques we’re most comfortable with to find workable solutions to new problems of complexity. We aren’t the only ones working on this, but our role is an important one: engineers and users alike look to us to frame the rhetoric and usability of immersive digital spaces. We’re at a major transition in the way we conceive of putting together information environments. Much like Morville and Rosenfeld in 1998, we’re “to some degree all still making it up as we go along.” I don’t pretend to know what a fully developed information architecture for the Internet of Things might look like, but in the spirit of exploration, I’d like to offer a few pointers that might help nudge us in the right direction — a topic I’ll tackle in my next post.

January 23 2014

Podcast: Solid, tech for humans, and maybe a field trip?

We were snowed in, but the phones still worked, so Jon Bruner, Mike Loukides, and I got together on the phone to have a chat. We start off talking about the results of the Solid call for proposals, but as is often the case, the conversation meanders from there.

Here are some of the links we mention in this episode:

You can subscribe to O’Reilly Radar podcast through iTunes or SoundCloud, or directly through our podcast’s RSS feed.

January 08 2014

The emergence of the connected city

Photo: Millertime83

If the modern city is a symbol for randomness — even chaos — the city of the near future is shaping up along opposite metaphorical lines. The urban environment is evolving rapidly, and a model is emerging that is more efficient, more functional, more — connected, in a word.

This will affect how we work, commute, and spend our leisure time. It may well influence how we relate to one another, and how we think about the world. Certainly, our lives will be augmented: better public transportation systems, quicker responses from police and fire services, more efficient energy consumption. But there could also be dystopian impacts: dwindling privacy and imperiled personal data. We could even lose some of the ferment that makes large cities such compelling places to live; chaos is stressful, but it can also be stimulating.

It will come as no surprise that converging digital technologies are driving cities toward connectedness. When conjoined, ISM-band transmitters, sensors, and smartphone apps form networks that can make cities pretty darn smart — and maybe more hygienic. This latter possibility, at least, is proposed by Samrat Saha of the DCI Marketing Group in Milwaukee. Saha suggests “crowdsourcing” municipal trash pick-up via BLE modules, proximity sensors and custom mobile device apps.

“My idea is a bit tongue in cheek, but I think it shows how we can gain real efficiencies in urban settings by gathering information and relaying it via the Cloud,” Saha says. “First, you deploy sensors in garbage cans. Each can provides a rough estimate of its fill level and communicates that to a BLE112 module.”


As pedestrians who have downloaded custom “garbage can” apps on their BLE-capable iPhone or Android devices pass by, continues Saha, the information is collected from the module and relayed to a Cloud-hosted service for action — garbage pick-up for brimming cans, in other words. The process will also allow planners to optimize trash can placement, redeploying receptacles from areas where need is minimal to more garbage-rich environs.

“It should also allow greater efficiency in determining pick-up schedules,” said Saha. “For example, in some areas regular scheduled pick-ups may be best. But managers may find it’s also a good idea to put some trash collectors on a roving basis to service cans when they’re full. That could work well for areas where there’s a great deal of traffic and cans fill up quickly but unpredictably — and conversely, in low-traffic areas, where regular pick-up isn’t necessary. Both situations would benefit from rapid-response flexibility.”
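A rough sketch of the flow Saha describes might look like the following: a phone-side piece forwards a can’s reported fill level to a Cloud-hosted service, and a service-side piece decides whether the can goes onto the pick-up list. The endpoint URL, field names, and 80% threshold are assumptions for illustration, not details from Saha’s proposal.

```python
# A rough sketch of the crowdsourced trash pick-up flow: a passing phone reads
# a can's fill level over BLE and posts it to a cloud service, which flags
# full cans for collection. URL, fields, and threshold are assumptions.
import json
from urllib import request

CLOUD_ENDPOINT = "https://example.com/api/trash-reports"  # hypothetical
PICKUP_THRESHOLD = 0.8   # dispatch when a can reports itself 80% full

def relay_reading(can_id: str, fill_level: float, lat: float, lon: float):
    """Phone-side: forward one sensor reading to the cloud service."""
    payload = json.dumps({
        "can_id": can_id,
        "fill_level": fill_level,   # 0.0 (empty) to 1.0 (brimming)
        "lat": lat,
        "lon": lon,
    }).encode("utf-8")
    req = request.Request(CLOUD_ENDPOINT, data=payload,
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)

def needs_pickup(fill_level: float) -> bool:
    """Cloud-side: a brimming can goes onto the roving collectors' route."""
    return fill_level >= PICKUP_THRESHOLD
```

The same reports, accumulated over time, are what would let planners redeploy receptacles and tune the mix of scheduled versus roving pick-ups that Saha mentions.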

Garbage can connectivity has larger implications than just, well, garbage. Brett Goldstein, the former Chief Data and Information Officer for the City of Chicago and a current lecturer at the University of Chicago, says city officials found clear patterns between damaged or missing garbage cans and rat problems.

“We found areas that showed an abnormal increase in missing or broken receptacles started getting rat outbreaks around seven days later,” Goldstein said. “That’s very valuable information. If you have sensors on enough garbage cans, you could get a temporal leading edge, allowing a response before there’s a problem. In urban planning, you want to emphasize prevention, not reaction.”
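Goldstein’s “temporal leading edge” amounts to watching for abnormal jumps in can reports and acting before the follow-on problem arrives. The sketch below flags any area whose latest weekly count of missing or broken cans is well above its recent baseline; the two-times-baseline rule is an assumption for illustration, not Chicago’s actual model.

```python
# A sketch of using garbage-can reports as a leading indicator: flag areas
# whose latest weekly count of missing/broken cans jumps well above their
# recent baseline. The 2x-baseline rule is an illustrative assumption.
from statistics import mean

def areas_to_inspect(weekly_reports: dict[str, list[int]], factor: float = 2.0):
    """weekly_reports maps an area to its weekly report counts, oldest first."""
    flagged = []
    for area, counts in weekly_reports.items():
        baseline, latest = mean(counts[:-1]), counts[-1]
        if latest > factor * baseline:
            flagged.append(area)    # send crews before the rats show up
    return flagged

history = {
    "ward-12": [3, 4, 2, 3, 9],    # sudden spike -> likely rat trouble soon
    "ward-27": [5, 6, 5, 4, 5],    # steady -> no action needed
}
print(areas_to_inspect(history))   # -> ['ward-12']
```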

Such Cloud-based app-centric systems aren’t suited only for trash receptacles, of course. Companies such as Johnson Controls are now marketing apps for smart buildings — the base component for smart cities. (Johnson’s Metasys management system, for example, feeds data to its app-based Panoptix platform to maximize energy efficiency in buildings.) In short, instrumented cities already are emerging. Smart nodes — including augmented buildings, utilities and public service systems — are establishing connections with one another, like axon-linked neurons.

But Goldstein, who was best known in Chicago for putting tremendous quantities of the city’s data online for public access, emphasizes instrumented cities are still in their infancy, and that their successful development will depend on how well we “parent” them.

“I hesitate to refer to ‘Big Data,’ because I think it’s a terribly overused term,” Goldstein said. “But the fact remains that we can now capture huge amounts of urban data. So, to me, the biggest challenge is transitioning the fields — merging public policy with computer science into functional networks.”

There are other obstacles to the development of the intelligent city, of course. Among them: how do you incentivize enough people to download the apps needed to achieve a functional connected system? Indeed, the human element could prove the biggest fly in the ointment. We may resist quantifying ourselves to such a degree, even for the sake of our cities.

On the other hand, the connected city exists to serve people, not the other way around, observes Drew Conway, senior advisor to the New York City Mayor’s Office of Data Analytics and founder of the data community support group DataGotham. People ultimately act in their self-interest, and if the connected city brings boons, people will accept and support it. But attention must be paid to unintended consequences, emphasizes Conway.

“I never forget that humanity is behind all those bits of data I consume,” says Conway. “Who does the data serve, after all? Human beings decided why and where to put out those sensors, so data is inherently biased — and I always keep the human element in mind. And ultimately, we have to look at the true impacts of employing that data.”

As an example, continues Conway, “say the data tells you that an illegal conversion has created a fire hazard in a low-income residential building. You move the residents out, thus avoiding potential loss of life. But now you have poor people out on the street with no place to go. There has to be follow-through. When we talk of connections, we must ensure that some of those connections are between city services and social services.”

Like many technocrats, Conway also is concerned about possible threats to individual rights posed by data collected in the name of the commonwealth.

“One of New York’s most popular programs is expanding free public WiFi,” he says. “It’s a great initiative, and it has a lot of support. But what if an agency decided it wanted access to weblog data from high-crime areas? What are the implications for people not involved in any criminal activity? We haven’t done a good job of articulating where the lines should be, and we need to have that debate. Connected cities are the future, but I’d welcome informed skepticism on their development. I don’t think the real issue is the technical limitations — it’s the quid pro quo involved in getting the data and applying it to services. It’s about the trade-offs.”


For more on the convergence of software and hardware, check out our Solid Conference.

