
February 27 2014

Four short links: 27 February 2014

  1. Our Comrade, The Electron (Maciej Ceglowski) — a walk through the life of the inventor of the Theremin, with a pointed rant about how we came to build the surveillance state for the state. One of the best conference talks ever, and I was in the audience for it!
  2. go.cd — continuous deployment and delivery automation tool from Thoughtworks, nothing to do with the Go programming language. The name is difficult to search for, so naturally we needed the added confusion of two projects sharing the name. Continuous deployment is an important part of devops (“the job of our programmers is not to write code, it is to deploy working code into production”—who said this? I’ve lost the reference already).
  3. Apple iBeacon Developer Programme — info locked up behind registration. Sign an NDA to get the specs, free to use the name. Interesting because iBeacon and other Bluetooth LE implementations are promising steps to building a network of things. (via Beekn)
  4. ShareLaTeX — massively multiplayer online LaTeX. Open sourced.

February 26 2014

Hurdles to the Internet of Things prove more social than technical

Last Saturday’s IoT Festival at MIT became a meeting ground for people working to connect the physical world to the Internet. Embedded systems developers, security experts, data scientists, and artists all joined in this event. Although it was called a festival, it had a typical conference format with speakers, slides, and question periods. Hallway discussions were intense.

However you define the Internet of Things (O’Reilly has its own take on it, on our Solid blog and at the Solid conference), a lot stands in the way of its promise. And these hurdles are more social than technical.


Participants eye O’Reilly books (Credit: IoT Festival)

Some of the social discussions we have to have before we get the Internet of Things rolling are:

  • What effects will all this data collection, and the injection of intelligence into devices, have on privacy and personal autonomy? And how much of these precious values are we willing to risk to reap the IoT’s potential for improving life, saving resources, and lowering costs?

  • Another set of trade-offs involves competing technical goals. For instance, power consumption can constrain features, and security measures can interfere with latency and speed. What is the priority for each situation in which we are deploying devices? How do we make choices based on our ultimate goals for these things?

  • How do we persuade manufacturers to build standard communication protocols into everyday objects? For those manufacturers using proprietary protocols and keeping the data generated by the objects on private servers, how can we offer them business models that make sharing data more appealing than hoarding it?

  • What data do we really want? Some industries are already drowning in data. (Others, such as health care, have huge amounts of potential data locked up in inaccessible silos.) We have to decide what we need from data and collect just what’s useful. Collection and analysis will probably be iterative: we’ll collect some data to see what we need, then go back and instrument things to collect different data.

  • How much can we trust the IoT? We all fly on planes that depend heavily on sensors, feedback, and automated controls. Will we trust similar controls to keep self-driving vehicles from colliding on the highway at 65 miles per hour? How much can we take humans out of the loop?

  • Similarly, because hubs in the IoT are collecting data and influencing outcomes at the edges, they require a trust relationship among the edges, and between the edges and the entity that is collecting and analyzing the data.

  • What do we need from government? Scads of data central to the IoT (people always cite satellite GIS information as an example) come from government sources. How much do we ask the government for help, how much do we ask it to get out of the way of private innovators, and how much can private innovators feed back to government to aid its own efforts to make our lives better?

  • It’s certain that IoT will displace workers, but likely that it will also create more employment opportunities. How can we retrain workers and move them to the new opportunities without too much disruption to their lives? Will the machine serve the man, or the other way around?

We already have a lot of the technology we need, but it has to be pulled together. It must be commercially feasible at a mass production level, and robust in real-life environments, with all their unpredictability. Many problems need to be solved at what Jim Gettys (famous for his work on the X Window System, OLPC, and bufferbloat) called “layer 8 of the Internet”: politics.


Large conference hall. I am in striped olive green shirt at right side of photo near rear (Credit: IoT Festival)

The conference’s audience was well balanced in terms of age and included a good number of women, though they were still outnumbered by men. Attendance was about 175, and most of the conference was simulcast. Videos should be posted soon at the agenda site. Most impressive to me was the relatively small attrition that occurred during the long day.

Isis3D, a sponsor, set up a booth where their impressive 3D printer ratcheted out large and very finely detailed plastic artifacts. Texas Instruments’ University Program organized a number of demonstrations and presentations, including a large-scale giveaway of LaunchPads with their partner, Anaren.


Isis3D demos their 3D printer (Credit: IoT Festival)

Does the IoT hold water?

One application of the IoT we could all applaud is reducing the use of water, pesticide, and fertilizer in agriculture. This application also illustrates some of the challenges the IoT faces. AgSmarts installs sensors that can check temperature, water saturation, and other factors to report which fields need more or less of a given resource. The service could be extended in two ways:

  • Pull in more environmental data from available sources besides the AgSmarts devices. For instance, Dr. Shawana Johnson of Global Marketing Insight suggested that satellite photos (we’re back to that oft-cited GIS data) could show which fields are dry.

  • Close the loop by running all the data through software that adjusts the delivery of water or fertilizer with little or no human intervention. This is a big leap in sophistication, reviving my earlier question about trusting the IoT. Closing the loop would require real-time analysis of data from hundreds of locations. It’s worth noting that AgSmarts offers a page about Analytics, but all it says right now is “Coming Soon.” A minimal sketch of what such a closed loop might look like in software appears below.
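
Purely as an illustration (AgSmarts has not published such code; the field names, readings, threshold, and functions here are invented), the core of a closed loop like this is a simple rule applied repeatedly to incoming sensor data:

    MOISTURE_TARGET = 0.30          # illustrative soil-moisture set point

    # Pretend "latest readings" from sensors in three hypothetical fields.
    latest_readings = {"north": 0.22, "east": 0.35, "river": 0.28}

    def set_irrigation(field, on):
        """Stand-in for commanding the valve or pump serving one field."""
        print(f"{field}: irrigation {'ON' if on else 'OFF'}")

    # One pass of the loop: water only the fields that are below target.
    for field, moisture in latest_readings.items():
        set_irrigation(field, on=(moisture < MOISTURE_TARGET))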

Lights, camera, action

A couple of other interesting aspects of the IoT are responsive luminaires (light fixtures) and unmanned aerial vehicles, which can track things on the ground through cameras and other sensors.

Consumer products to control indoor lights are already available. Lighting expert John Luciani reported that outdoor lights are usually coordinated through the Zigbee wireless protocol. Some applications for control over outdoor lighting include:

  • Allowing police in their cruisers to brighten the lights during an emergency.

  • Increasing power to LEDs gradually over time, to compensate for their natural dimming.

  • Identifying power supplies that are nearing the end of their life, because LEDs can last much longer than power supplies.

Lights don’t suffer from several of the major constraints that developers struggle with when using many IoT devices. Because luminaires need a lot of power to cast their light, they always have a power source that can easily handle their radio communications and other computing needs. Second, there is plenty of room for a radio antenna. Finally, recent ANSI standards have allowed a light fixture to contain more control wires.

Thomas Almholt of Texas Instruments reported that Texas electric utilities have been installing smart meters. Theoretically, these could help customers and utilities save money and reduce usage, because rates change every 15 minutes and most customers have no window on these changes. But the biggest benefit they discovered from the smart meters was quite unexpected: teams came out to houses when the meters started to act in bizarre ways, and prevented a number of fires from being started by the rogue meters.

A presentation on UAVs and other drones (not all are airborne) highlighted uses for journalism, law enforcement, and environmental tracking, along with the risks of putting too much power in the hands of those inside or outside of government who possess drones.

MIT’s SENSEable City Lab, as an example of a positive use, measures water quality in the nearby Charles River through drones, producing data at a much more fine-grained level than what could be measured before, spatially and temporally.

Kade Crockford of the Massachusetts ACLU laid out (through a video simulation as scary as it was amusing) the ways UAVs could invade city life and engage in creepy tracking. Drones can go more places than other surveillance technologies (such as helicopters) and can detect our movements without us detecting the drones. The chilling effects of having these robots virtually breathe down our necks may outweigh the actual abuses perpetrated by governments or other actors. This is an illustration of my earlier point about trust between the center and the edges. (Crockford led a discussion on privacy in the conference, but I can’t report on it because I felt I needed to attend a different concurrent session.)

Protocols such as the 802.15 family and Zigbee permit communication among devices, but few of these networks have the processing power to analyze the data they produce and take action on their own. To store their data, base useful decisions on it, and allow humans to view it, we need to bring their data onto the larger Internet.

One tempting solution is to use the cellular network in which companies have invested so much. But there are several challenges to doing so.

  • Cell phone charges can build up quickly.

  • Because of the shortage of phone numbers, an embedded device must use a special mobile subscriber ISDN number (MSISDN), which comes with limitations.

  • To cross the distance to cell towers, radios on the devices need to use much more power than they need to communicate over local wireless options such as WiFi and Bluetooth.

A better option, if the environment permits, is to place a wireless router near a cluster of devices. The devices can use a low-power local area network to communicate with the router, which is connected to the Internet by cable or fiber.

When a hub needs to contact a device to send it a command, several options allow the device to avoid wasting power keeping its radio on. The device can wake up regularly to poll the router and get the command in the router’s response. Alternatively, the router can wake the device when necessary.
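
A minimal sketch of the polling pattern described above, in Python; the poll function and command names are hypothetical stand-ins for whatever radio exchange a real device and router would use:

    import random
    import time

    POLL_INTERVAL_S = 30                      # radio stays powered down between polls

    def poll_router_for_command():
        """Hypothetical stand-in for a short radio exchange with the router."""
        # A real device would wake its radio, send a small poll frame, and
        # read back any queued command; here we just simulate the result.
        return random.choice([None, None, None, "relay_on"])

    while True:
        command = poll_router_for_command()   # radio on only for this exchange
        if command is not None:
            print("executing queued command:", command)
        time.sleep(POLL_INTERVAL_S)           # sleep again with the radio off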

Government and the IoT platform

One theme coming out of State of the Union addresses and other White House communications is the role of innovation in raising living standards. As I’ve mentioned, there was a bit of discussion at the conference about jobs and what IoT might do to them. Job creation is one of the four goals of the SmartAmerica Challenge run by the Presidential Innovation Fellows program. The other three are saving lives, creating new business opportunities, and improving the economy.

I was quite disappointed that climate change and other environmental protections were left off the SmartAmerica list. Although I applaud the four goals, I noticed that each pleases a powerful lobbying constituency (the health industry, chambers of commerce, businesses, and unions). The environment is the world’s most urgent challenge, and one that millions of people care about, but big money isn’t behind the cause.

Two people presenting the SmartAmerica Challenge suggested that most of the technologies enabling IoT have been developed already. What we need are open and easy-to-use architectures, and open standards that turn each layer of the system into a platform on which others are free to innovate. The call for open standards–which can also create open markets–also came from Bill Curtis of ARM.

Another leading developer in the computer field, Bob Frankston, advised us to “look for resources, not solutions.” I take this to mean we can create flexible hardware and software infrastructure that many innovators can use as platforms for small, nimble solutions. Frankston’s philosophy, as he put it to me later, is “removing the barriers to rapid evolution and experimentation, thus removing the need for elaborate scenarios.”


Bob Frankston speaking (Credit: IoT Festival)

Dr. Johnson claimed that security concerns keep government agencies from sharing many of the applications they make of their data, and predicted that each time a tier of government data is published, it will be greeted by an explosion of innovation from the private sector.

However, government can play another useful role. One speaker pointed out that it can guide the industry to use standards by requiring those standards in its own procurements.

Standards we should all learn

Both Almholt and Curtis laid out protocol stacks for the IoT, with a lot of overlap. Both agreed that many Internet protocols in widespread use were inappropriate for the IoT. For instance, connections on the IoT tend to be too noisy and unreliable for TCP to work well. Another complication is that an 802.15.4 packet has a maximum size of 127 bytes, and an IPv6 header (IPv6 being the addressing system of choice for mobile devices) takes up at least 40 of them. Clever reuse of data between packets can reduce this overhead.
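
A back-of-the-envelope version of that overhead, using the figures above; the link-layer overhead and the compressed-header size are illustrative assumptions, since both vary with addressing and security options:

    FRAME_MAX = 127        # 802.15.4 maximum frame size, in bytes
    LINK_OVERHEAD = 25     # assumed MAC header and footer (varies in practice)
    IPV6_HEADER = 40       # uncompressed IPv6 header
    UDP_HEADER = 8

    plain = FRAME_MAX - LINK_OVERHEAD - IPV6_HEADER - UDP_HEADER
    print("payload without compression:", plain, "bytes")      # 54 bytes

    # 6LoWPAN-style header compression can shrink the IPv6 and UDP headers to
    # a few bytes when addresses can be derived from link-layer context.
    COMPRESSED_HEADERS = 6   # illustrative best case
    compressed = FRAME_MAX - LINK_OVERHEAD - COMPRESSED_HEADERS
    print("payload with compression:", compressed, "bytes")    # 96 bytes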

Curtis said that, because most Internet standards are too complex for the constrained devices in the IoT, these devices tend to run proprietary protocols, creating data silos. He also called for devices that use less than 10 milliwatts of power. Almholt said that devices will start harvesting electricity from light, vibration, and thermal sources, thus doing without batteries altogether and running for long periods without intervention.

Some of the protocols that show promise for the IoT include:

  • 6LoWPAN or successors for network-layer transmission.

  • The Constrained Application Protocol (CoAP) for RESTful message exchange. By running over UDP instead of TCP and using binary headers instead of ASCII, this protocol achieves most of the benefits of HTTP but is more appropriate for small devices.

  • The Time Synchronized Mesh Protocol (TSMP) for coordinating data transmissions in an environment where devices contend for wireless bandwidth, thus letting devices shut down their radios and save power when they are not transmitting.

  • RPL for defining and changing routes among devices. This protocol sacrifices speed for robustness.

  • eDTLS, a replacement for the TLS security standard that runs over UDP.

Adoption of these standards would lead to a communication chasm between the protocol stack used within the LAN and the one used with the larger outside world. Therefore, edge devices would be devoted to translating between the two networks. One speaker suggested that many devices will not be provided with IP addresses, for security reasons.
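
To make the CoAP item above concrete, here is a minimal client sketch using the aiocoap Python library (assuming its documented client API); the device hostname and resource path are invented for illustration:

    import asyncio
    from aiocoap import Context, Message, GET

    async def read_sensor():
        # CoAP runs over UDP, so the client context wraps a datagram socket.
        protocol = await Context.create_client_context()
        # Hypothetical device address and resource path.
        request = Message(code=GET, uri="coap://sensor.example.local/temperature")
        response = await protocol.request(request).response
        print("response code:", response.code)
        print("payload:", response.payload.decode())

    asyncio.run(read_sensor())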

Challenges that remain

Research with impacts on the IoT is flourishing. I’ll finish this article with a potpourri of problems and how people are addressing them:

  • Gettys reported on the outrageous state of device security. In Brazil, an enormous botnet has been found running just on the consumer hubs that people install in their homes and businesses for Internet access. He recommended that manufacturers adopt OpenWRT firmware to protect devices, and that technically knowledgeable individuals install it themselves where possible.

  • Although talking over the Internet is a big advance for devices, talking to each other over local networks at the radio layer (using for instance the 802.15.4 standards) is more efficient and enables more rapid responses to the events they monitor. Unfortunately, according to Almholt, interoperability is a decade away.

  • Mobility can cause us to lose context. A recent day-to-day example involves the 911 emergency call system, which is supposed to report the caller’s location to the responder. When people gave up their landlines and called 911 from cell phones, it became much harder to pinpoint where they were. The same thing happens in a factory or hospital when a staffer logs in from a mobile device.

Whether we want the IoT to turn on devices while we sit on the couch (not necessarily an indulgence for the lazy, but a way to let the elderly and disabled live more independently) or help solve the world’s most pressing problems in energy and water management, we need to take the societal and technological steps to solve these problems. Then let the fun begin.


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter — and to learn more about the Solid Conference coming to San Francisco in May, visit the Solid website.

Four short links: 26 February 2014

  1. Librarybox 2.0 — a fork of PirateBox for the TP-Link MR 3020, customized for educational, library, and other needs. Wifi hotspot with free and anonymous file sharing. v2 adds mesh networking and more. (via BoingBoing)
  2. Chicago PD’s Using Big Data to Justify Racial Profiling (Cory Doctorow) — The CPD refuses to share the names of the people on its secret watchlist, nor will it disclose the algorithm that put it there. [...] Asserting that you’re doing science but you can’t explain how you’re doing it is a nonsense on its face. Spot on.
  3. Cloudwash (BERG) — very good mockup of how and why your washing machine might be connected to the net and bound to your mobile phone. No face on it, though. They’re losing their touch.
  4. What’s Left of Nokia to Bet on Internet of Things (MIT Technology Review) — With the devices division gone, the Advanced Technologies business will cut licensing deals and perform advanced R&D with partners, with around 600 people around the globe, mainly in Silicon Valley and Finland. Hopefully will not devolve into being a patent troll. [...] “We are now talking about the idea of a programmable world. [...] If you believe in such a vision, as I do, then a lot of our technological assets will help in the future evolution of this world: global connectivity, our expertise in radio connectivity, materials, imaging and sensing technologies.”

February 25 2014

The technology and jobs debate raises complex questions

Editor’s note: Doug Hill and I recently had a conversation here on Radar about the impact of automation on jobs. In one of our exchanges, Doug mentioned a piece by James Bessen. James reached out to me and was kind enough to provide a response. What follows is their exchange.


JAMES BESSEN: I agree, Doug, that we cannot dismiss the concerns that technology might cause massive unemployment just because technology did not do this in the past. However, in the past, people also predicted that machines would cause mass unemployment. It might be helpful to understand why this didn’t happen in the past and to ask if anything today is fundamentally different.

Many people make a simple argument: 1) they observe that machines can perform job tasks and 2) they conclude that therefore humans will lose their jobs. I argued that this logic is too simple. During the 19th century, machines took more than 98% of the labor needed to weave a yard of cloth. But the number of weavers actually grew because the lower price of cloth increased demand. This is why, contrary to Marx, machines did not create mass unemployment.
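
The arithmetic behind that claim is worth making explicit. With purely illustrative numbers (not Bessen’s data): if machines remove 98% of the labor per yard, total weaving work still grows whenever demand expands more than fifty-fold in response to the lower price.

    # Illustrative numbers only, not historical data.
    labor_per_yard_hand = 1.00       # worker-hours per yard before automation
    labor_per_yard_machine = 0.02    # 98% of the labor removed

    yards_demanded_before = 1_000_000
    yards_demanded_after = 80_000_000    # assumed response to much cheaper cloth

    hours_before = labor_per_yard_hand * yards_demanded_before     # 1.0M hours
    hours_after = labor_per_yard_machine * yards_demanded_after    # 1.6M hours
    print(hours_after > hours_before)    # True: more weaving work, not less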

Is something like that happening today? Yes, in some areas of white collar automation. The ATMs did not eliminate bank tellers, word processing software did not eliminate secretaries, etc. As I explain, ATMs increased the demand for bank tellers. In other cases, the work shifts to new occupations: there are fewer switchboard operators, but more receptionists; there are fewer typographers, but more graphic designers. As a whole, the number of jobs in office and sales occupations using computers has grown, not shrunk.

But that is not the whole story. First, new jobs require new skills. This was true in the past, too. I argued that the textile mills also hired workers who were adept at learning new skills and that they invested significantly in worker-learning on the job. You cite Dublin, but my research has corrected Dublin on this point (see Bessen, James (2003), “Technology and Learning by Factory Workers: The Stretch-Out at Lowell, 1842,” Journal of Economic History, 63, pp. 33-64, and also “Was Mechanization De-skilling?”). Moreover, my view is that the mill owners sought educated workers because it was in their business interests to do so, not for philanthropic reasons. And the mill girls saw the educational amenities of Lowell as an important attraction. For example, Lucy Larcom, who later taught college, compared the early years at Lowell to women’s colleges later in the century. I believe skills are also important for the new jobs being created today, and the difficulty of developing new skills and new labor markets is one reason wages are stagnant.

Second, the effect of technology in office occupations is very different from what is happening today in manufacturing occupations. As I noted, technology worked to reduce textile jobs during the 20th century, and for the last decade the number of manufacturing jobs has been shrinking. More than technology is at work here, but technology is not creating jobs on net in manufacturing. So, yes, there are two sides to technology’s effect on jobs, but it is far from obvious that the effect of technology is to eliminate jobs overall.

DOUG HILL: Thank you, James, for your response to the conversation Jim Stogdill and I have had over the last few weeks on the relationship between automation and jobs. I’ve read the two papers you referred to in your email, and they were quite helpful in fleshing out points you made in your article in the Washington Post. My conviction that your arguments overlook some crucial questions, however, hasn’t changed.

Two points and a quibble to mention:

  1. The quibble: Contrary to what you suggest in your email, I never claimed it’s “obvious” that the effect of technology is to eliminate jobs overall. To the contrary, I specifically said that technology can be counted on to cut both ways, and that we don’t know what will happen as automation evolves.
  2. All skills aren’t created equal. The gist of your argument is that new technologies require new skills and that having those new skills creates new opportunities for employment. Your example is the growth of textile manufacturing in Massachusetts over the course of the 19th century, an industry in which women workers played a substantial role. You say that the owners of the Massachusetts mills made significant investments in “human capital,” that is, they trained the women they hired so they acquired new skills needed in the mills. You never fully explain, though, exactly what those new skills were. Rather, you use increases in employee productivity (and some fancy equations) as prima facie evidence that new skills were acquired. This is looking at the situation from management’s perspective; it tells us nothing about what life was like for the women actually doing the work. That’s an important omission, I think, both in this specific case and in the general discussion regarding automation and jobs. By far, the bulk of that discussion has concerned the jobs automation might eliminate. What isn’t talked about nearly as much is the quality of jobs automation might leave behind.

    The fundamental method of industrial production is the breaking down of a given process into a series of discrete, repetitive tasks. This doesn’t necessarily mean they’re easy tasks, but by definition, they’re not complex. It’s an efficient way to make a commodity but not an especially satisfying way to make a living. I think of the worker Jim Stogdill met at the Ford electronics plant, who stood at his work station and shoved a capacitor into a circuit board all day. He seemed satisfied with the job, but the vast literature on the degradation workers experience on assembly lines (not to mention the suicides at Foxconn) shows that many, many workers feel otherwise, even when they’re well paid. See, for example, Ely Chinoy’s Automobile Workers and the American Dream.

    Your analysis shows what “skill” often means where automation is concerned. The gains in productivity you celebrate came mainly from management’s demands that mill workers steadily increase their work load. Where they once tended two looms, “stretch-outs” required them to tend three, then four. “Speed-ups” periodically accelerated the speed of the machines as well. Experienced workers learned to handle the pace. At what cost, you don’t say.

    In one of his earlier emails, Stogdill cited the famous I Love Lucy episode where Lucy finds herself struggling to keep up with the conveyor belt in a chocolate factory. As funny as that bit is, it’s also evocative of the sorts of pressures faced by workers the world over who struggle to keep pace with machines. And, as I mentioned in my earlier email, getting more from less is a philosophy that isn’t limited to factories. A wide variety of automation techniques are widely and aggressively used to boost productivity in the knowledge and service industries as well.

  3. If it all depends on demand, we’re screwed. Increasing productivity in the Massachusetts mills meant it took fewer employees to produce the same amount of cloth, or more. Yet the number of jobs the mills provided steadily increased, as did the pay earned by the workers, at least for a while. Why? Because greater efficiencies of production and increased competition lowered the price of cloth, and demand for cheap cloth products grew. “This is why,” you say, “contrary to Marx, machines did not create mass unemployment.” The problem with this, as also mentioned in my earlier email, is that we’ve reached a point where we can no longer afford to base our national and global economies on an ever-increasing demand for an ever-increasing supply of consumables. That model is well on its way to killing the planet. Another model must be found, and quickly. I’m hoping some of those seeking to exploit the potential of automation might be taking that into consideration.

BESSEN: You raise some important issues about work satisfaction/alienation and environmental sustainability. I fail to see how the ATM machines have “degraded” bank tellers, subjected them to unwarranted “speed up” and are killing the planet. Nevertheless, my points are simply limited to the topic at hand, namely, whether technology creates more jobs than it destroys.

HILL: Here is what the Bureau of Labor Statistics currently says about bank tellers:

Job Outlook:

Employment of tellers is projected to show little or no change from 2012 to 2022.

Past job growth for tellers was driven by a rapid expansion of bank branches, where most tellers work. However, the growth of bank branches is expected to slow because of both changes to the industry and the abundance of banks in certain areas.

In addition, online and mobile banking allows customers to handle many transactions traditionally handled by tellers. As more people use online banking, fewer bank customers will visit the teller window. This will result in decreased demand for tellers. Some banks also are developing systems that allow customers to interact with tellers through webcams at ATMs. This technology will allow tellers to service a greater number of customers from one location, reducing the number of tellers needed for each bank.

The Bureau adds that tellers make an average of $11.99 an hour. A high school diploma and “short-term, on-the-job training” are required. Job prospects are expected to be excellent “because many workers leave this occupation.”

Number of bank tellers employed nationally, 2010: 556,310

Number of bank tellers employed nationally, 2012: 541,770

Jim Stogdill wrap-up

I want to thank James and Doug for engaging in this question and giving us the opportunity to publish it. However, if you read this carefully you’ll see that James and Doug might not have been having the same argument. Last week at Strata in Santa Clara, I moderated an Oxford Style debate on the proposition that automation creates more jobs than it destroys, and the same thing happened there. Our two teams of debaters squared off and promptly began arguing different questions. I hope to post some material from that discussion soon. It was super thought-provoking.

It’s not hard to see why this happens. It’s a complicated topic, really complicated in fact. Economists are the most unfortunate of scientists in that they are usually trying to answer some of the most complex questions we can pose with only the most rudimentary mathematical approximations to support their task.

When we ask the question, “does automation create more jobs than it destroys?” we might as well ask, “Are you an optimist or a pessimist?” Given the nearly unanswerable complexity of the question, it becomes a stand-in for a question of optimism.

But there’s another more fundamental problem, and this is where we start talking past each other. The question as posed is based on an assumption of scarcity. But if you think in terms of growing and/or future abundance, the question becomes either less important or moot. This is how the nano-techy singularists see it. “A lack of jobs isn’t the problem — figuring out what to do with your free time when everything is provided by your fabber is.”

We certainly haven’t answered any of these questions here, nor did we expect to. That was never my point in asking them either here or in the debate at Strata. I hope we have gotten a few people talking about it, though.

I’ve been dismayed by the “religion of disruptionism” in our industry. Creative destruction is how it works, but it seems like sometimes we are attracted to the destruction part of it for destruction’s sake. There is a whiff of schadenfreude in the valley’s approach to innovation that I find a bit sad.

There was a telling moment in our debate at Santa Clara when the “con” team looked out at the audience and said something like, “None of you need to worry about losing your jobs to automation; we’ve cleanly distinguished between the populations that represent the creative and destructive sides of Schumpeterian economics.”

In the end, we don’t know if history repeats itself forever and the creatively displaced can be retrained and find new jobs (or if it repeats itself and they can’t), but we do know that the pace of change matters: an economy and a society can only absorb change so quickly, and that pace can be affected by policy.

“Let them eat cake” isn’t policy.


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter — and to learn more about the Solid Conference coming to San Francisco in May, visit the Solid website.

Four short links: 25 February 2014

  1. Bitcoin Markets Down — the value of bitcoins plunges amid market uncertainty after the largest bitcoin exchange goes insolvent, having lost over 750k bitcoins because it didn’t update its software after a flaw was discovered in the signing of transactions.
  2. Flappy Bird for the Commodore 64 — the 1980s games platform meets the 2014 game. cf the machine learning hack where the flappy bird learns to play the game successfully.
  3. Air Hockey Robot — awesome hack.
  4. Run 30 Lab Tests on Only One Drop of Blood — automated lab processing to remove the human error in centrifuging, timing, etc. that added to variability of results.

February 24 2014

Four short links: 24 February 2014

  1. Understanding Understanding Source Code with Functional Magnetic Resonance Imaging (PDF) — we observed 17 participants inside an fMRI scanner while they were comprehending short source-code snippets, which we contrasted with locating syntax error. We found a clear, distinct activation pattern of five brain regions, which are related to working memory, attention, and language processing. I’m wary of fMRI studies but welcome more studies that try to identify what we do when we code. (Or, in this case, identify syntax errors—if they wanted to observe real programming, they’d watch subjects creating syntax errors) (via Slashdot)
  2. Oobleck Security (O’Reilly Radar) — if you missed or skimmed this, go back and reread it. The future will be defined by the objects that turn on us. 50s scifi was so close but instead of human-shaped positronic robots, it’ll be our cars, HVAC systems, light bulbs, and TVs. Reminds me of the excellent Old Paint by Megan Lindholm.
  3. Google Readying Android Watch — just as Samsung moves away from Android for smart watches and I buy me and my wife a Pebble watch each for our anniversary. Watches are in the same space as Goggles and other wearables: solutions hunting for a problem, a use case, a killer tap. “OK Google, show me offers from brands I love near me” isn’t it (and is a low-lying operating system function anyway, not a userland command).
  4. Most Winning A/B Test Results are Illusory (PDF) — Statisticians have known for almost a hundred years how to ensure that experimenters don’t get misled by their experiments [...] I’ll show how these methods ensure equally robust results when applied to A/B testing.

February 22 2014

The Industrial IoT isn’t the same as the consumer IoT

Reading Kipp Bradford’s recent article, The Industrial Internet of Things: The opportunity no one’s talking about, got me thinking about commonly held misconceptions about what the Industrial Internet of Things (IIoT) is — as well as what it’s not.

Misconception 1: The IIoT is the same as the consumer Internet of Things (IoT), except it’s located on a factory floor somewhere.

This misconception is easy to understand, given that both the IIoT and the consumer IoT have that “Internet of Things” term in common. Yes, the IIoT includes devices located in industrial settings: maybe a factory floor, or perhaps as part of a high-speed train system, or inside a hotel or restaurant, or a municipal lighting system, or within the energy grid itself.

But the industrial IoT has far more stringent requirements than the consumer IoT, including the need for no-compromise control, rock-solid security, unfailing reliability even in harsh (extremely hot or cold, dusty, humid, noisy, inconvenient) environments, and the ability to operate with little or no human intervention. And unlike more recently designed consumer-level devices, many of the billion or so industrial devices already operating on existing networks were put in place to withstand the test of time, often measured in decades.

The differences between the IIoT and IoT are not just a matter of slight degree or semantics. If your Fitbit or Nest device fails, it might be inconvenient. But if a train braking system fails, it could be a matter of life and death.

Misconception 2: The IIoT is always about, as Bradford describes it, devices that “push and pull status and command information from the networked world.”

The consumer IoT is synonymous with functions that affect human-perceived comfort, security, and efficiency. In contrast, many industrial networks have basic operating roles and needs that do not require, and in fact are not helped by, human intervention. Think of operations that must happen too quickly — or too reliably, or too frequently, or from too harsh or remote an environment — to make it practical to “push and pull status and command information” to or from any kind of centralized anything, be it an Internet server or the cloud.

A major goal for the IIoT, then, must be to help autonomous communities of devices to operate more effectively, peer to peer, without relying on exchanging data beyond their communities. One challenge now is that various communications protocols have been established for specific types of industrial devices (e.g., for building automation, or lighting, or transportation) that are incompatible with one another. Internet Protocol (IP) could serve as a unifying communications pathway within industrial communities of devices, improving peer-to-peer communications.

Misconception 3: The problem is that industrial device owners aren’t interested in, or actively resist, connecting their smart devices together.

If IP were extended all the way to industrial devices, another advantage is that they could also participate, as appropriate, beyond peer-to-peer communities. Individually, industrial devices generate the “small data” that, in the aggregate, combines to become the “big data” used for IoT analytics and intelligent control. IIoT devices that are IP-enabled could retain their ability to operate without human intervention, yet still receive input or provide small-data output via the IoT.

But unlike most consumer IoT scenarios, which involve digital devices that already have IP support built in or that can be IP enabled easily, typical IIoT scenarios involve pre-IP legacy devices. And unfortunately, IP enablement isn’t free. Industrial device owners need a direct economic benefit to justify IP enabling their non-IP devices. Alternatively, they need a way to gain the benefits of IP without giving up their investments in their existing industrial devices — that is, without stranding these valuable industrial assets.

Rather than seeing industrial device owners as barriers to progress, we should be looking for ways to help industrial devices become as connected as appropriate — for example, for improved peer-to-peer operation and to contribute their important small data to the larger big-data picture of the IoT.

So, what is the real IIoT opportunity no one’s talking about?

The real opportunity of the IIoT is not to pretend that it’s the same as the IoT, but rather to provide industrial device networks with an affordable and easy migration path to IP.

This approach is not the “excuse for trying to patch up, or hide behind, outdated technology” that Bradford talks about. Rather, it’s a chance to offer multi-protocol, multimedia solutions that recognize and embrace the special considerations and, yes, constraints of the industrial world — which include myriad existing protocols, devices installed for their reliability and longevity, the need for both wired and wireless connections in some environments, and so on. This approach will build bridges to the IIoT, so that any given community of devices can achieve its full potential — the IzoT platform I’ve been working on at Echelon aims to accomplish this.


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter — and to learn more about the Solid Conference coming to San Francisco in May, visit the Solid website.

Slo-mo for the masses


Edgertronic version evolution, from initial prototype A through production-ready prototype C.

The connectivity of everything isn’t just about objects talking to each other via the Internet. It’s also about the accelerating democratization of formerly elite technology. Yes, it’s about putting powerful devices in touch with each other — but it’s also about putting powerful devices within the grasp of anyone who wants them. Case in point: the Edgertronic camera.

All kids cultivate particular interests growing up. For Michael Matter, Edgertronic’s co-inventor, it was high-speed flash photography. He got hooked when his father brought home a book of high-speed images taken by Harold Eugene “Doc” Edgerton, the legendary MIT professor of electrical engineering who pioneered stroboscopic photography.

“It was amazing stuff,” Matter recalls, “and I wanted to figure out how he did it. No, more than that I wanted to do it. I wanted to make those images. But then I looked into the cost of equipment, and it was thousands of dollars. I was 13 or 14, this was the 1970s, and my budget was pretty well restricted to my allowance.”

But Matter had a highly technical bent, and he was undaunted.

“I bought an off-the-shelf camera flash and put in a smaller value capacitor,” he recalls. “That allowed me to reduce the flash period from a millisecond to 20 or 30 microseconds. I had lower light output, of course, but I knew I could work with that. I got those short bursts I needed.”

Matter’s dad was an avid golfer, and he asked his son to take some shots of him teeing off — or rather, shots of the face of his trusty driver at the precise moment of ball impact. Matter took a spring from an old tape measure and jury-rigged a trigger for the flash.

“I’d put it ever-so-slightly in front of the ball,” he says, “and it worked pretty well. It’d stand up to four or five hits. We got some good images of the ball distorting, morphing into a kind of egg shape, on impact.”

Matter later matriculated at MIT, and he sent in images of those golf balls to Edgerton, along with a fan letter. He ended up taking a class from Edgerton, and his admiration for the idiosyncratic engineer only grew.

“He was the quintessential maker,” Matter says of his mentor. “Almost everything in his lab was slightly radioactive from all his field work photographing atom bomb and hydrogen bomb tests. He’d look at a problem and come up with a solution. Sometimes his solutions were crude and ugly, but they worked. That was part of his philosophy — better to have something crude and ugly that works than something elegant and expensive that doesn’t work as well.”

Edgerton had a camera capable of taking sub-microsecond exposures, but it was extremely bulky, and he let Matter tinker with it.

“It had huge vacuum tubes, and it had a notorious reputation around the lab for unreliability,” Matter said. ”I took it apart and was able to make some improvements on it. It worked better.”

And did Edgerton praise his protégé? Not at all.

“That wasn’t his style,” Matter said, “and I admired him for it. If you were in his lab, a certain level of competence was assumed.  That was better than praise.”

Matter spent the next 30 years designing electronic devices, including servers, computers, and consumer products. And he was good at what he did; he and business partner Juan Pineda were principal designers for the Apple Powerbook 500 series. But he never lost interest in high-speed photography, and he remained frustrated by the stratospherically high price of the equipment. Top-end high-speed cameras can cost $100,000 to $500,000 — even “cheap” models register in the mid five figures. But his conversations with Pineda inspired him to think about the technology in a highly disruptive way.

“Early in my work with Juan, he made an observation to me,” Matter recalls. “We were working on a mid-performance, low-price project at Apollo Computers. Another group we were talking with was focused on high-performance, high-end systems. Juan said, ‘You know, anybody can do a no-holds-barred design when money is no object. Sooner or later, you’ll get what you want. But the challenge is when you’re limited by expense. That’s when skill comes in.’”

That idea — relatively high performance at a low price point — stuck with Matter, and fused with his predilections for  deep tech tinkering. Ultimately, he was driven back to his childhood hobby. With Pineda, he spent two years skunk-working on high-speed videography. The Edgertronic was the result of all their brainstorming. At about $5,500, it is well within the means of anyone seriously interested in photography; professional-grade digital SLR cameras, after all, can sell for $10,000. And the fact that the Edgertronic runs Linux and has an Ethernet port makes it an effective liaison between the physical and virtual worlds.

Like all digital video cameras, the Edgertronic operates in a pretty straightforward fashion: light travels through a lens to a sensor, and from there is transmitted to a processor where images are translated as digital information, then stored in a memory component.

That said, the Edgertronic isn’t just another video camera. At the relatively low pixel resolution of 192×96, it can operate in excess of 17,000 frames per second (fps). At the maximum resolution of 1280×1024, the camera processes 500 fps — still impressive, given that standard cinema speed is 24 fps. In an evaluation of the Edgertronic on tested.com, TV’s Mythbusters website, Norman Chan opined that the Edgertronic’s “sweet spot” was 640×360 resolution at about 2,000 fps. In his review, Chan posted a clip taken by an Edgertronic of a hammer smashing a light bulb at 5,000 fps. The video is crisp, and the glass shards fly about in a slow, stately, and almost balletic manner.
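
A quick back-of-the-envelope check (mine, not the review’s) shows why frame rate and resolution trade off the way they do: every setting reads pixels off the sensor at roughly the same order of magnitude, consistent with a fixed readout and processing budget.

    # Figures taken from the settings quoted above.
    settings = [
        ("minimum resolution", 192, 96, 17_000),
        ("sweet spot", 640, 360, 2_000),
        ("maximum resolution", 1280, 1024, 500),
    ]
    for label, width, height, fps in settings:
        throughput = width * height * fps          # pixels per second
        print(f"{label}: {width}x{height} @ {fps} fps = {throughput:,} px/s")
    # Roughly 0.31, 0.46, and 0.66 billion pixels per second respectively.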

In other words, the Edgertronic delivers on its promise: good quality high-speed video on a working person’s budget. How did they do it? Matter isn’t evasive, exactly, but he is loath to discuss specifics. He emphasized that the Edgertronic did not involve the “bleeding edge” of engineering — the ne plus ultra of videographic technology. Rather, he said, he and Pineda used existing components in new ways.

“We’ve done it before,” he observes. “For example, we were once working on a graphics processing project, and we needed a chip that could do several multiplies per second. Well, there wasn’t a chip like that out there made specifically for graphics processing. But Texas Instruments did have a $40 speech recognition chip that could do five multiplies a second. Some people were using it for music synthesizers or mixing studios. Texas Instruments never envisioned it for work stations or graphics processing. But we did. And it worked.”

In his review, Chan reveals that Edgertronic uses an off-the-shelf 18x14mm CMOS sensor. While not dirt cheap — it accounts for about half the price of the camera — it costs exponentially less than any custom-designed sensor.

In any event, cheap sensors alone do not a functional $5,000 high-speed video camera make. As Matter intimates, the proprietary leavening in this particular technological loaf is engineering. But what about applications? Sure, you can record bullet flights in your home basement shooting range. But not many of us combine doomsday prep philosophy with an interest in high-speed photography.

“Our camera isn’t going to be used for slo-mo replays in the World Series or the Superbowl,” says Matter. “That’s not our intent. Obviously, it has application in the arts and filmmaking, and will appeal to amateurs and hobbyists for their personal projects. But we think industry and research will account for most of our orders.”

One example: production lines. Say somewhere in the bowels of your line, a widget isn’t being stuck to a doo-hickey in the most felicitous way possible. Some high-speed footage of the assembly process could tell you what’s wrong. But standard high-speed cameras are rather bulky, and most are not amenable to sticking into tight places. Plus, once again, they’re extremely pricey. At several hundred thousand dollars a pop, it could well make more sense to eschew the camera and troubleshoot the line by tearing it apart. Enter the Edgertronic.

“Unlike other high-speed cameras, ours are compact, no larger than an SLR camera,” says Matter, “so you can put them almost anywhere. Also, price won’t be an issue for even small companies. The same with research facilities. Say a small lab is fine-tuning a new industrial process. A high-speed camera could really help with that, but you couldn’t justify a $500,000 outlay. We think the Edgertronic will change that dynamic.”

Matter and Pineda thus promote their camera as a basic research and investigative tool. Edgertronics already have been purchased by researchers working on fluid dynamics, particle systems, and animal locomotion; by high schools and colleges for various subjects; and by labs conducting product torture testing.

“We think of it as a microscope for time,” says Matter. “We just want to help people see how things really work. When you slow things down, you see they often don’t function anything like you supposed. Technically-minded people get that. Our first batch of 60 cameras sold out quickly — one guy is on his third camera. We’re doubling up for our next production run. We’ve identified a need, and now we’re filling it.”


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter — and to learn more about the Solid Conference coming to San Francisco in May, visit the Solid website.

February 21 2014

Architecting the connected world

In the first two posts in this series, I examined the connection between information architecture and user interface design, and then looked closely at the opportunities and constraints inherent in information architecture as we’ve learned to practice it on the web. Here, I outline strategies and tactics that may help us think through these constraints and that may in turn help us lay the groundwork for an information architecture practice tailored and responsive to our increasingly connected physical environments.

The challenge at hand

NYU Media, Culture, and Communication professor Alexander Galloway examines the cultural and political impact of the Internet by tracing its history back to its postwar origins. Built on a distributed information model, “the Internet can survive [nuclear] attacks not because it is stronger than the opposition, but precisely because it is weaker. The Internet has a different diagram than a nuclear attack does; it is in a different shape.” For Galloway, the global distributed network trumped the most powerful weapon built to date not by being more powerful, but by usurping its assumptions about where power lies.

This “differently shaped” foundation is what drives many of the design challenges I’ve raised concerning the Internet of Things, the Industrial Internet and our otherwise connected physical environments. It is also what creates such depth of possibility. In order to accommodate this shape, we will need to take a wider look at the emergent shape of the world, and the way we interact with it. This means reaching out into other disciplines and branches of knowledge (e.g. in the humanities and physical sciences) for inspiration and to seek out new ways to create and communicate meaning.

The discipline I’ve leaned on most heavily in these last few posts is linguistics. From these explorations, a few tactics for information architecture for connected environments emerge: leveraging multiple modalities, practicing soft assembly, and embracing ambiguity.

Leverage multiple modalities

In my previous post, I examined the nature of writing as it relates to information architecture for the web. As a mode of communication – and of meaning making – writing is incredibly powerful, but that power does come with limitations. Language’s linearity is one of these limitations; its symbolic modality is another.

In linguistics, a sign’s “modality” refers to the nature of the relationship between a signifier and a signified, between the name of a thing and the thing itself. In Ferdinand de Saussure’s classic example (from the Course in General Linguistics), the thing “tree” is associated with the word “tree” only by arbitrary convention.

This symbolic modality, though by far the most common modality in language, isn’t the only way we make sense of the world. The philosopher and semiotician Charles Sanders Peirce identified two additional modes that offer alternatives to the arbitrary sign of spoken language: the iconic and the indexical modes.

Indexical modes link signifier and signified by concept association. Smoke signifies fire, thunder signifies a storm, footprints signify a past presence. The signifier “points” to the signified (yes, the reference is related to our index finger). Iconic modes, on the other hand, link signifier and signified by resemblance or imitation of salient features. A circle with two dots and lines in the right places signifies the human face to humans – our perceptual apparatus is tuned to recognize this pattern as a face (a pre-programmed disposition which easily becomes a source of amusement when comparing the power outlets of various regions).

Before you write off this scrutiny of modes as esoteric navel-gazing – or juvenile visual punning – let’s take an example many of us come into contact with daily: the Apple MacBook trackpad. Prior to the Mountain Lion operating system, moving your finger “down” the trackpad (i.e. from the space bar toward the edge of the computer body) caused a page on screen to scroll down – which is to say, the elements you perceived on the screen moved up (so you could see what had been concealed “below” the bottom edge of the screen).

The root of this spatial metaphor is enlightening: if you think back a few years to the scroll wheel mouse (or perhaps you’re still using one?), you can see where the metaphor originates. When you scroll the wheel toward you, it is as if the little wheel is in contact with a sheet of paper sitting underneath the mouse. You move the wheel toward you, the wheel moves the paper up. It is an easy mechanical model that we can understand based on our embodied relationship to physical things.

This model was translated as-is to the first trackpads — until another kinetic model replaced it: the touchscreen. On a touchscreen, you’re no longer mediating movement with a little wheel; you’re touching it and moving it around directly. Once this model became sufficiently widespread, it became readily accessible to users of other devices – such as the trackpad – and, eventually, became the default trackpad setting for Apple devices.


Left: Danish electrical plugs, on Wikimedia Commons. Right: US electrical plugs, photo by Andy Fitzgerald.

Here we can see different modes at play. The trackpad isn’t strictly symbolic, nor is it iconic. Its relationship to the action it accomplishes is inferred by our embodied understanding of the physical world. This is signification in the indexical mode.

This thought exercise isn’t purely academic: by basing these kinds of interactions on mechanical models that we understand in a physical and embodied way (we know in our bodies what it is like to touch and move a thing with our fingers), we ground them in an experience that is more deeply rooted in our understanding of the physical world. In the case of iconic signification, for example in recognizing faces in power outlets (and even dispositions – compare, for instance, the plug shapes of North America and Denmark), our meaning making is even more deeply rooted in our innate perceptual apparatus.

Practice soft assembly

Attention to modalities is important because the more deeply a pattern or behavior is rooted in embodied experience, the more durable it is – and the more reliably it can be leveraged in design. Stewart Brand’s concept of “pace layers” helps to explain why this is the case. Brand argues in How Buildings Learn that the structures that make up our experienced environment don’t all change at the same rate. Nature is very slow to change; culture changes more quickly. Commerce changes more quickly than the layers below it, and technology changes faster than all of these. Fashion, relative to these other layers, changes most quickly of all.


Illustration: Pace layers, by Preston Grubbs, illustrator at Deloitte Digital.

When we discuss meaning making, our pace layer model doesn’t describe rate of change; it describes depth of embodiment. Remembering the “ctrl-z to undo” function is a learned symbolic (hence arbitrary) association. Touching an icon on a touchscreen (or, indexically, on a trackpad) and sliding it up is an associative embodied action. If we get these actions right, we’ve tapped into a level of meaning making that is rooted in one of the slowest of the physical pace layers: the fundamental way in which we perceive our natural world.

This has obvious advantages for usability. Intuitive interfaces are instantly usable because they capitalize on the operational knowledge we already have. When an interface is completely new, in order to be intuitive it must “borrow” knowledge from another sphere of experience (what Donald Norman in The Psychology of Everyday Things refers to as “knowledge in the world”). The touchscreen interface popularized by the iPhone is a ready example of this. More recently, Nest Protect’s “wave to hush” interaction provides an example that builds on learned physical interactions (waving smoke away from a smoke detector to try to shut it up) in an entirely new but instantly comprehensible way.

An additional and often overlooked advantage of tapping into deep layers of meaning making is that by leveraging more embodied associations, we’re able to design for systems that fit together loosely and in a natural way. By “natural” here, I mean associations that don’t need to be overwritten by an arbitrary, symbolic association in order to signify; associations that are rooted in our experience of the world and in our innate perceptual abilities. Our models become more stable and, ironically, more fluid at once: remapping one’s use of the trackpad is as simple as swapping out one easily accessible mental model (the wheel metaphor) for another (the touchscreen metaphor).

This loose coupling allows for structural alternatives to rigid (and often brittle) complex top-down organizational approaches. In Beyond the Brain, University of Lethbridge Psychology professor Louise Barrett uses this concept of “soft assembly” to explain how in both animals and robots “a whole variety of local control factors effectively exploit specific local (often temporary) conditions, along with the intrinsic dynamics of an animal’s body, to come up with effective behavior ‘on the fly.’” Barrett describes how soft assembly accounts for complex behavior in simple organisms (her examples include ants and pre-microprocessor robots), then extends those examples to show how complex instances of human and animal perception can likewise be explained by taking into account the fundamental constitutive elements of perception.

For those of us tasked with designing the architectures and interaction models of networked physical spaces, the work is to search hard for the most fundamental level at which an association is understood (in whatever mode it is most basically communicated), and then to articulate that association in a way that exploits the intrinsic dynamics of its environment. Doing so allows us to build physical and information structures that don’t have to be held together by force, convention, or rote learning, but which fit together by the nature of their core structures.

Embrace ambiguity

Alexander Galloway claims that the Internet is a different shape than war. I have been making the argument that the Internet of Things is a different shape than textuality. The systems that comprise our increasingly connected physical environments are, for the moment, often broader than we can understand fluidly. Embracing ambiguity – embracing the possibility of not understanding exactly how the pieces fit together, but trusting them to come together because they are rooted in deep layers of perception and understanding – opens up the possibility of designing systems that surpass our expectations of them. “Soft assembling” those systems based on cognitively and perceptually deep patterns and insight ensures that our design decisions aren’t just guesses or luck. In a word, we set the stage for emergence.

Embracing ambiguity is hard. Especially if you’re doing work for someone else – and even more so if that someone else is not a fan of ambiguity (and this includes just about everyone on the client side of the design relationship). By all means, be specific whenever possible. But when you’re designing for a range of contexts, users, and uses – and especially when those design solutions are in an emerging space – specificity can artificially limit a design.

In practical terms, embracing ambiguity means searching for solutions beyond those we can easily identify with language. It means digging for the modes of meaning making that are buried in our physical interactions with the built and the natural environment. This allows us the possibility of designing for systems which fit together with themselves (even though we can’t see all the parts) and with us (even though there’s much about our own behavior we still don’t understand), potentially reshaping the way we perceive and act in our world as a result.

Architecting connected physical environments requires that we think differently about how people think and about how they understand the world we live in. Textuality is a tool – and a powerful tool – but it represents only one way of making sense of information environments. We can add additional approaches to designing for these environments by thinking beyond text and beyond language. We accomplish this by leveraging our embodied experience (and embodied understanding of the world) in our design process: a combination of considering modalities, practicing soft assembly, and embracing ambiguity will help us move beyond the limits of language and think through ways to architect physical environments that make this new context enriching and empowering.

As we’re quickly discovering, the nature of connected physical environments imbricates the digital and the analog in ways that were unimaginable a decade ago. We’re at a revolutionary information crossroads, one where our symbolic and physical worlds are coming together in an unprecedented way. Our temptation thus far has been to drive ahead with technology and to try to fit all the pieces together with the tried and true methods of literacy and engineering. Accepting that the shape of this new world is not the same as what we have known up until now does not mean we have to give up attempts to shape it to our common good. It does, however, mean that we will have to rethink our approach to “control” in light of the deep structures that form the basis of embodied human experience.


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter.

Oobleck security

I’ve been thinking (and writing) a lot lately about the intersection of hardware and software, and how standing at that crossroads does not fit neatly into our mental models of how to approach the world. Previously, there was hardware and there was software, and the two didn’t really mix. When I tried to describe my thinking to a colleague at work, the best description I could come up with was that the world is becoming “oobleck,” the mixture of cornstarch and water that isn’t quite a solid but isn’t quite a liquid, named after the Dr. Seuss book Bartholomew and the Oobleck. (If you want to know how to make it, check out this video.)

One of the reasons I liked the metaphor of oobleck for the melding of hardware and software is that it can rapidly change on you when you’re not looking. It pours like a liquid, but it can “snap” like a solid. If you place the material under stress, as might happen if you set it on a speaker cone, it changes state and acts much more like a weird solid.

This “phase change” effect may also occur as we connect highly specialized embedded computers (really, “sensors”) to the Internet. As Bruce Schneier recently observed, patching embedded systems is hard. As if on cue, a security software company published a report that thousands of TVs and refrigerators may have been compromised to send spam. Embedded systems are easy to overlook from a security perspective because our mental model of security was formed at the dawn of the Internet, when the problem was comparatively simple: install a firewall to sit between your network and the Internet, and carefully monitor all connections in and out of the network. Security was spliced into the protocol stack at the network layer. One of my earliest projects in understanding the network layer was working with early versions of Linux to configure a firewall with the then-new technology of address translation. As a result, my early career took place in and around network firewalls.

Most firewalls at the time worked at the network and transport layers of the protocol stack, and would operate on IP addresses and ports. A single firewall could shield an entire network of unpatched machines from attack, and in some ways, made it safe to use Windows on Internet-connected machines. Firewalls worked well in their heyday of the 1990s. Technology innovation took place inside the firewall, so the strict perimeter controls they imposed did not interfere with the use of technology.
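To make that classic model concrete, here is a minimal sketch, in Python and purely for illustration, of the kind of decision a 1990s-style packet filter makes: it reasons only about addresses and ports, and knows nothing about what the traffic means. The addresses and rules below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int
    inbound: bool

# Hypothetical rule set: permit inbound SSH from a single management host,
# and outbound web traffic only. Everything else is dropped.
ALLOWED_INBOUND = {("192.0.2.10", 22)}
ALLOWED_OUTBOUND_PORTS = {80, 443}

def permit(pkt: Packet) -> bool:
    """Decide purely on addresses and ports; the filter knows nothing about content."""
    if pkt.inbound:
        return (pkt.src_ip, pkt.dst_port) in ALLOWED_INBOUND
    return pkt.dst_port in ALLOWED_OUTBOUND_PORTS

print(permit(Packet("203.0.113.5", "10.0.0.2", 443, inbound=True)))    # blocked
print(permit(Packet("10.0.0.2", "198.51.100.7", 443, inbound=False)))  # allowed
```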

By the mid-2000s, though, services like Skype and Google apps contended with firewalls as an obstacle to be traversed. (2001 even saw the publication of RFC 3093, a firewall “enhancement” protocol to allow routed IP packets through the firewall. Read it, but note the date before you dive into the details.) Staff inside the firewall needed to access services outside the firewall, and these services now included critical applications like Salesforce.com. Applications became firewall aware and started working around them. Had applications not evolved, it is likely the firewall would have throttled much of the innovation of the last decade.

To bring this back to embedded systems: what is the security model for a world with loads of sensors? (After all, remember that Bartholomew almost drowned in the Oobleck, and we’d like to avoid that fate.) As Kipp Bradford recently pointed out, when every HVAC compressor is connected to the Internet, that’s a lot of devices. By definition, real-time automation models assume continuous connectivity to gather data, and more importantly, send commands down to the infrastructure.

Many of these systems have proven to have poor security; for a recent example, consider this report from last fall about vulnerabilities in electrical and water systems. How do we add security to a sensor network, especially a network that is built around me as a person? Do I need to wear a firewall device to keep my sensors secure? Is the firewall an app that runs on my phone? (Oops, maybe I don’t want it running on my phone, given the architecture of most phone software…) Operating just at the network transport layer probably isn’t enough. Real-time data is typically transmitted using UDP, so many of the unwritten rules we have for TCP-based networking won’t apply. Besides, the firewall probably needs to operate at the API level to guard against any action I don’t want taken. What kind of security needs to be put around the data set? A strong firewall and cryptography that protects my data in flight are no good if it’s easy to tap into the data store they all report to.
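What an API-level guard might look like is still an open question. As a thought experiment only, here is a rough sketch of a policy layer that checks each command sent to a device against an allowlist and a value range; every device class, command, and limit in it is invented for illustration.

```python
# Hypothetical per-device-class policy: which commands are accepted at all,
# and what value ranges they may carry. Names and limits are invented.
POLICY = {
    "thermostat": {"set_temperature": (10, 30)},   # degrees C
    "door_lock": {"lock": None},                   # remote "unlock" deliberately absent
}

def authorize(device_type, command, value=None):
    """Allow a command only if the policy lists it and the value is in range."""
    commands = POLICY.get(device_type, {})
    if command not in commands:
        return False
    limits = commands[command]
    if limits is None:               # command takes no numeric argument
        return True
    low, high = limits
    return value is not None and low <= value <= high

print(authorize("thermostat", "set_temperature", 22))   # True
print(authorize("thermostat", "set_temperature", 95))   # False: out of range
print(authorize("door_lock", "unlock"))                 # False: never allowed remotely
```

The point of a layer like this is that the decision is about what the command means for the device, not about which address or port it arrived on.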

Finally, there’s the matter of user interface. Simply saying that users will have to figure it out won’t cut it. Default passwords — or even passwords as a concept — are insufficient. Does authentication have to become biometric? If so, what does that mean for registering devices the first time? As is probably apparent, I don’t have many answers yet.

Considering security with these types of deployments is like trying to deal with oobleck. It’s almost concrete, but just when you think it is solid enough to grasp, it turns into a liquid and runs through your fingers. With apologies to Shakespeare, this brave new world that has such machines in it needs a new security model. We need to defend against the classic network-based attacks of the past as well as new data-driven attacks or subversions of a distributed command-and-control system.


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter.

Four short links: 21 February 2014

  1. Mapping Twitter Topic Networks (Pew Internet) — Conversations on Twitter create networks with identifiable contours as people reply to and mention one another in their tweets. These conversational structures differ, depending on the subject and the people driving the conversation. Six structures are regularly observed: divided, unified, fragmented, clustered, and inward and outward hub and spoke structures. These are created as individuals choose whom to reply to or mention in their Twitter messages and the structures tell a story about the nature of the conversation. (via Washington Post)
  2. yasp — a fully functional web-based assembler development environment, including a real assembler, emulator and debugger. The assembler dialect is a custom one, held very simple so as to keep the learning curve as shallow as possible.
  3. The 12-Factor App — twelve habits of highly successful web developers, essentially.
  4. Fast Approximation of Betweenness Centrality through Sampling (PDF) — Betweenness centrality is a fundamental measure in social network analysis, expressing the importance or influence of individual vertices in a network in terms of the fraction of shortest paths that pass through them. Exact computation in large networks is prohibitively expensive and fast approximation algorithms are required in these cases. We present two efficient randomized algorithms for betweenness estimation. (A rough sketch of the sampling idea follows below.)
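To get a feel for what sampling buys you in item 4, here is a small sketch using networkx, whose betweenness_centrality function takes a k parameter that estimates the measure from k sampled pivot nodes instead of all of them. This is networkx’s own estimator, not necessarily the algorithms from the paper.

```python
import networkx as nx

# A random graph standing in for a large social network.
G = nx.erdos_renyi_graph(n=2000, p=0.005, seed=42)

# Exact betweenness considers shortest paths between all pairs of vertices;
# passing k samples k pivot nodes instead, trading accuracy for speed.
exact = nx.betweenness_centrality(G, normalized=True)
approx = nx.betweenness_centrality(G, k=100, normalized=True, seed=42)

top = max(exact, key=exact.get)
print(top, round(exact[top], 4), round(approx[top], 4))
```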

February 20 2014

Four short links: 20 February 2014

  1. Practical Typography — informative and elegant.
  2. Nokia Treasure Tag — Bluetooth-chatty locators for keyrings, wallets, etc.
  3. Stanford Guidelines for Web Credibility — signals for those looking to identify dodgy content, as well as hygiene factors for those looking to provide it.
  4. App Trains You to See Farther (Popular Mechanics) — UltimEyes exercises the visual cortex, the part of our brain that controls vision. Brain researchers have discovered that the visual cortex breaks down the incoming information from our eyes into fuzzy patterns called Gabor stimuli. The theory behind UltimEyes is that by directly confronting the eyes with Gabor stimuli, you can train your brain to process them more efficiently—which, over time, improves your brain’s ability to create clear vision at farther distances. The app shows you ever fuzzier and fainter Gabor stimuli. (A small sketch of what a Gabor patch looks like follows below.)
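If you are curious what a Gabor stimulus actually is, here is a small numpy sketch that renders one: a sinusoidal grating windowed by a Gaussian envelope. The parameters are arbitrary, and this illustrates only the stimulus itself, not the app’s training procedure; lowering the contrast and widening the envelope gives the “fuzzier and fainter” patches described above.

```python
import numpy as np

def gabor_patch(size=128, wavelength=16.0, sigma=20.0, theta=0.0, contrast=1.0):
    """Return a 2D Gabor patch: a sinusoidal grating under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    x_rot = x * np.cos(theta) + y * np.sin(theta)      # orientation of the grating
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    grating = np.cos(2 * np.pi * x_rot / wavelength)
    return contrast * envelope * grating               # values in [-1, 1]

patch = gabor_patch(contrast=0.3, sigma=30.0)          # fainter and fuzzier
print(patch.shape, patch.min(), patch.max())
```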

February 19 2014

Bitcoin security model: trust by computation

Bitcoin is a distributed consensus network that maintains a secure and trusted distributed ledger through a process called “proof-of-work.”

Bitcoin fundamentally inverts the trust mechanism of a distributed system. Traditionally, as we see in payment and banking systems, trust is achieved through access control, by carefully vetting participants and excluding bad actors. This method of trust requires encryption, firewalls, and strong authentication. The network must invest trust in everyone it grants access.

The result is that such systems tend to be closed and small networks by necessity. By contrast, bitcoin implements a trust model of trust by computation. Trust in the network is ensured by requiring participants to demonstrate proof-of-work, by solving a computationally difficult problem. The cumulative computing power of thousands of participants, accumulated over time in a chain of increasing-difficulty proofs, ensures that no actor or even collection of actors can cheat, as they lack the computation to override the trust. As proof-of-work accumulates on the chain of highest difficulty (the blockchain), it becomes harder and harder to dispute. In bitcoin, a new proof-of-work is added every 10 minutes, with each subsequent proof making it exponentially more difficult to invalidate the previous results.
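To make proof-of-work concrete, here is a toy sketch in Python: search for a nonce whose SHA-256 hash of the block data begins with a required number of zero digits. Bitcoin’s real scheme double-hashes an 80-byte block header and compares the result against a numeric difficulty target, so treat this strictly as an illustration of the asymmetry that matters here: the proof is expensive to produce and cheap for anyone to verify.

```python
import hashlib

def mine(block_data, difficulty):
    """Search for a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

def verify(block_data, nonce, difficulty):
    """Verification is a single hash: cheap for any participant to check."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce, digest = mine("block 1: Alice pays Bob 1 BTC", difficulty=4)
print(nonce, digest, verify("block 1: Alice pays Bob 1 BTC", nonce, 4))
```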

Here’s the most important effect of this new trust model of trust-by-computation: no one actor is trusted, and no one needs to be trusted. There is no central authority or trusted third party in a distributed consensus network. That fact opens up a completely new network model, as the network no longer needs to be closed, access-controlled or encrypted. Trust does not depend on excluding bad actors, as they cannot “fake” trust. They cannot pretend to be the trusted party, as there is none. They cannot steal the central keys as there are none. They cannot pull the levers of control at the core of the system, as there is no core and no levers of control.

As a result, the network can be open to all; the transactions can be broadcast on any medium, unencrypted; and applications can be added at the edge without vetting or approval. In other words, bitcoin is not just money for the Internet, it is the Internet of money — an open, de-centralized, standards-based network where innovation can occur at the edge without permission and where the network itself is simply a neutral and open transport layer.

Like the Internet and other open networks, blockchain-based crypto-currency networks are susceptible to denial-of-service and other nuisance attacks: attacks that cannot violate the trust of the distributed asset ledger, but that can clog the pipes and attempt to confuse the participants. When such attacks occur, they can cause deep concern among those who have a predilection for the security model of access control. If a bad actor gains access to a closed financial network, the results are catastrophic. Open access and trust are fundamentally at odds in a closed centralized network based on access control. Therefore, within that context, a denial of service attack or any bad actors on the network have dire consequences and signify a compromise of security and a failure of the trust model.

On bitcoin and other open crypto-currency networks, however, bad actors on the network are inconsequential because the trust model does not depend on excluding them. The bad actors are not trusted any more than any other user of the network and their access does not grant them any special rights. The trust model depends on computation and the demonstration of computation through proof-of-work. As long as good actors form the majority of the computation used for forming consensus, the bad actors cannot change the trusted ledger.

It will take time for the idea of decentralized trust through computation to become a part of mainstream consciousness, and until then, the idea creates cognitive dissonance for those accustomed to centralized trust systems. With thousands of years of practical use, centralized systems of trust are accepted unconditionally and without much thought as the only model of trust.

Until recently, decentralized trust at scale was not possible. Now that it is, it conflicts with most people’s understanding of the world. That’s why when you explain crypto-currencies to people, they immediately search for the central actor or authority that establishes the trust, establishes the value or has the control: “Yes, I see it is decentralized, but who runs it? Who controls it? Can’t someone take over?” These questions reveal the context of trust centralization, which is deeply embedded in our culture and our thinking. We’ve been taught to fear the bad actor and look for self-interested “trusted” individuals; we no longer have to do that.

Gradually, decentralized trust will be accepted as a new and effective trust model. We have seen this evolution of understanding before — on the Internet. The Internet led to the decentralization of authority-of-opinion, by making it possible for anyone to be a publisher without a multi-story building-sized printing press. At first, this challenged our expectations and forced us to reconsider the source of authority. If anyone could have an opinion and publish it, how can we tell which opinions are important? We had used the centralization of printing presses and distribution and the purchasing of ink by the barrel as a proxy metric of authority, to help us filter our news and opinions. Suddenly, we were thrust into a new world in which these anchors of authority were swept away and each opinion had to be judged by its merits, not the size of the publisher’s press.

Now, we must rethink the source of trust in networks and the source of monetary value of currencies, disconnected from the issuer, without a central authority and without the need for access control. The trust model has already changed, but it will take a while for society to accept that a new model is possible.

The home automation paradox

Editor’s note: This post originally appeared on Scott Jenson’s blog, Exploring the World Beyond Mobile. This lightly edited version is republished here with permission.

The level of hype around the “Internet of Things” (IoT) is getting a bit out of control. It may be the technology that crashes into Gartner’s trough of disillusionment faster than any other. But that doesn’t mean we can’t figure things out. Quite the contrary — as the trade press collectively loses its mind over the IoT, I’m spurred on further to try and figure this out. In my mind, the biggest barrier we have to making the IoT work comes from us. We are being naive: our overly simplistic understanding of how we will control the IoT is likely to fail and to generate a huge consumer backlash.

But let’s back up just a bit. The Internet of Things is a vast, sprawling concept. Most people refer to just the consumer side of things: smart devices for your home and office. This is more precisely referred to as “home automation,” but to most folks, that sounds just a bit boring. Nevertheless, when some writer trots out that tired old chestnut: “My alarm clock turns on my coffee machine!”, that is home automation.

But of course, it’s much more than just coffee machines. Door locks are turning on music, moisture sensors are turning on yard sprinklers, and motion sensors are turning on lights. The entire house will flower into responsive activities, making our lives easier, more secure and even more fun.

The luddites will always crawl out and claim this is creepy or scary in the same way that answering machines were dehumanizing. Which, of course, is entirely missing the point. Just because something is more mechanical doesn’t mean it won’t have an enormous and yet human impact. My answering machine, as robotic as it may be, allows me to get critical messages from my son’s school. That is definitely not creepy.

However, I am deeply concerned that these home automation scenarios are too simplistic. As a UX designer, I know how quixotic and downright goofy humans can be. The simple rule-based “if this, then that” style scenarios trotted out are doomed to fail. Well, maybe fail is too strong of a word. They won’t fail as in a “face plant into burning lava” fail. In fact, I’ll admit that they might even work 90% of the time. To many people that may seem fine, but just try using a voice recognition system with a 10% failure rate. It’s the small mistakes that will drive you crazy.

I’m reminded of one of the key learnings of the artificial intelligence (AI) community. It was called Moravec’s Paradox:

It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.

Moravec’s paradox created two types of AI problems: HardEasy and EasyHard.

HardEasy problems were assumed to be very hard to accomplish, such as playing chess. The assumption was that you’d have to replicate human cunning and experience in order to play chess well. It turns out this was completely wrong, as a simple brute force approach was able to do quite well. This was a hard problem that turned out to be (relatively) easy.

The EasyHard problem is exactly the opposite: a problem that everyone expects to be simple but turns out to be quite hard indeed. The classic example here is language translation. The engineers at the time expected the hardest problem was just finding a big enough dictionary. All you had to do was look up the words and plop them down in perfect order. Obviously, that problem is something we’re still working on today. An EasyHard problem is one that seems simple but never…quite…works…the…way…you…want.

I claim that home automation is an EasyHard problem. The engineer in all of us assumes it is going to be simple: walk into a room, turn on the lights. What’s the big deal? Now, I’ll admit, this rule does indeed work most of the time, but here is a series of exceptions where it breaks down:

Problem: I walk into the room and my wife is sleeping; turning on the lights wakes her up.
Solution: More sensors — detect someone on the bed.

Problem: I walk into the room and my dog is sleeping on the bed; my room lights don’t turn on.
Solution: Better sensors — detect human vs pets.

Problem: I walk into the room, my wife is watching TV on the bed. She wants me to hand her a book, but as the room is dark, I can’t see it.
Solution: Read my mind.

Don’t misunderstand my intentions here. I’m not a luddite! I do strongly feel that we are going to eventually get to home automation. My point is that home automation is an EasyHard problem, and we don’t treat it with the respect it deserves. Just because we can automate our homes doesn’t mean we’ll automate them correctly. The real work with home automation isn’t with the IoT connectivity; it’s the control system that will make it do the right thing at the right time.

Let’s take a look at my three scenarios above and discuss how they will impact our eventual solutions to home automation.

More Sensors

Almost every scenario today is built on a very fault-intolerant structure. A single sensor controls the lights. A single door knob alerts the house I’m coming in. This has the obvious error condition that if that sensor fails, the entire action breaks down. But the second, more likely, case is that it simply draws the wrong conclusion. A single motion sensor in my room assumes that I am the only thing that matters; my sleeping wife is a comfort casualty. I can guarantee you that as smart homes roll out, saying “sorry dear, that shouldn’t have happened” is going to wear very thin.

The solution, of course, is to have more sensors that can reason and know how many people are in a room. This isn’t exactly that hard, but it will take a lot more work as you need to build up a model of the house, populate it with proxies — creating, in effect, a simulation of your home. This will surely come, but it will just take a little time for it to become robust and tolerant of our oh-so-human capability to act in unexpected ways.
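As a thought experiment, the smallest possible version of that simulation might look something like the following sketch: a fused room state (who is present, who is asleep, how bright the room already is) driving a lighting decision. Every sensor, threshold, and action here is invented; the point is only that the rule operates on a model of the room rather than on a single motion event.

```python
from dataclasses import dataclass

@dataclass
class RoomState:
    # Hypothetical fused sensor readings for one room.
    humans_awake: int
    humans_asleep: int
    pets: int
    ambient_lux: float

def lighting_action(state: RoomState) -> str:
    if state.humans_awake == 0:
        return "off"              # only the dog, or nobody: don't bother
    if state.humans_asleep > 0:
        return "baseboard_low"    # someone is sleeping: be conservative
    if state.ambient_lux < 50:
        return "full_on"
    return "no_change"

print(lighting_action(RoomState(humans_awake=1, humans_asleep=1, pets=0, ambient_lux=5)))
# -> "baseboard_low"
```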

Better Sensors

This, too, should be coming soon. There are already sensors that can tell the difference between humans and pets; they just aren’t widely used yet. This will feed into the software simulation of my house, knowing where people, pets and things are throughout the space. This is starting to sound a bit like an AI system, modeling my life and making decisions based on what it thinks is needed at the time. Again, not exactly impossible, but tricky stuff that will, over time, get better and better.

Read my mind

But at some point we reach a limit. When do you turn on the lights so I can find the book and when do I just muddle through because I don’t want to bother my wife? This is where the software has to have the “humility” to stop and just ask. I discussed this a bit in my UX grid of IoT post: background swarms of smart devices will do as much of the “easy stuff” as they can but will eventually need me to signal intent so they can cascade a complex set of actions that fulfill my goal.

Take the book example again. I walk into the room; the AI detects my wife on the bed. It could even detect that the TV is on, and so know that she is not sleeping. But because it’s not clearly reasonable to turn on the lights to full brightness, it just turns on the low baseboard lighting so I can navigate. So far so good — the automatic system is being helpful but conservative. When I walk up to my wife and she asks for the book, I just have to say “lights” and the system turns on the lights, which could be a complex set of commands turning on five different lights at different intensities.
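A sketch of that last step might look like the following: a single spoken intent fans out into a scene, a coordinated set of device commands at different intensities. The device names and levels are made up; what matters is that the human supplies only the intent and the system handles the cascade.

```python
# Hypothetical scene table: one intent expands to many device commands.
SCENES = {
    "lights": [
        ("ceiling_main", "set_brightness", 70),
        ("bedside_left", "set_brightness", 40),
        ("reading_lamp", "set_brightness", 100),
        ("baseboard", "set_brightness", 10),
        ("tv_backlight", "set_brightness", 0),
    ],
}

def handle_intent(intent, send_command):
    """Expand a single human signal into the full set of device commands."""
    for device, command, level in SCENES.get(intent, []):
        send_command(device, command, level)

handle_intent("lights", lambda d, c, v: print(f"{d}: {c}({v})"))
```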

At some point, a button, an utterance, a gesture will be needed because human interaction is too subtle to be fully encapsulated by an AI. Humans can’t always do it; what makes us think computers can?

Summary

My point here is to emphatically support the idea of home automation. However, the UX designer in me is far too painfully aware that humans are messy, illogical beasts, and simplistic if/then rules are going to create a backlash against this technology. It isn’t until we take the coordinated control of these IoT devices seriously that we’ll start building more nuanced and error-tolerant systems. They will certainly be simplistic at first and still fail, but at least we’ll be on the right path. We must create systems that expect us to be human, not punish us for when we are.

Four short links: 19 February 2014

  1. 1746 Slippy Map of London — very nice use of Google Maps to recontextualise historic maps. (via USvTh3m)
  2. TPP Comic — the comic explaining TPP that you’ve been waiting for. (via BoingBoing)
  3. Synthetic Biology Investor’s Lament — some hypotheses about why synbio is so slow to fire.
  4. vizcities — open source 3D (OpenGL) city and data visualisation platform, using open data.

February 18 2014

I, Cyborg

There is an existential unease lying at the root of the Internet of Things — a sense that we may emerge not less than human, certainly, but other than human.

Kelsey Breseman

Kelsey Breseman, engineer at Technical Machine.

Well, not to worry. As Kelsey Breseman, engineer at Technical Machine, points out, we don’t need to fret about becoming cyborgs. We’re already cyborgs: biological matrices augmented by wirelessly connected silicon arrays of various configurations. The problem is that we’re pretty clunky as cyborgs go. We rely on screens and mobile devices to extend our powers beyond the biological. That leads to everything from atrophying social skills as face-to-face interactions decline to fatal encounters with garbage trucks as we wander, texting and oblivious, into traffic.

So, if we’re going to be cyborgs, argues Breseman, let’s be competent, sophisticated cyborgs. For one thing, it’s now in our ability to upgrade beyond the screen. For another, being better cyborgs may make us — paradoxically — more human.

“I’m really concerned about how we integrate human beings into the growing web of technology,” says Breseman, who will speak at O’Reilly’s upcoming Solid conference in San Francisco in May. “It’s easy to get caught up in the ‘cool new thing’ mentality, but you can end up with a situation where the point for the technology is the technology, not the human being using it. It becomes closed rather than inclusive — an ‘app developers developing apps for app developers to develop apps’ kind of thing.”

Those concerns have led Breseman and her colleagues at Technical Machine to the development of the Tessel: an open-source Arduino-style microcontroller that runs JavaScript and allows hardware project prototyping. And not, Breseman emphasizes, the mere prototyping of ‘cool new things’ — rather, the prototyping of things that will connect people to the emerging Internet of Things in ways that have nothing to do with screens or smart phones.

“I’m not talking about smart watches or smart clothing,” explains Breseman. “In a way, they’re already passé. The product line hasn’t caught up with the technology. Think about epidermal circuits — you apply them to your skin in the same way you apply a temporary tattoo. They’ve been around for a couple of years. Something like that has so many potential applications — take the Quantified Self movement, for example. Smart micro devices attached right to the skin would make everything now in use for Quantified Self seem antiquated, trivial.”

Breseman looks to a visionary of the past to extrapolate the future: “In the late 1980s, Mark Weiser coined the term ‘ubiquitous computing‘ to describe a society where computers were so common, so omnipresent, that people would ultimately stop interfacing with them,” Breseman says. “In other words, computers would be everywhere, embedded in the environment. You wouldn’t rely on a specific device for information. The data would be available to you on an ongoing basis, through a variety of non-intrusive — even invisible — sources.”

Weiser described such an era as “… the age of calm technology, when technology recedes into the background of our lives…” That trope — calm technology — is extremely appealing, says Breseman.

“We could stop interacting with our devices, stop staring at screens, and start looking at each other, start talking to each other again,” she says. “I’d find that tremendously exciting.”

Breseman is concerned that the Internet of Things is seen only as a new and shiny buzz phrase. “We should be looking at it as a way to address our needs as human beings,” she says, “to connect people to the Internet more elegantly, not just as a source for more toys. Yes, we are now dependent on information technology. It has expanded our lives, and we don’t want to give it up. But we’re not applying it very well. We could do it so much better.”

Part of the problem has been the bifurcation of engineering into software and hardware camps, she says. Software engineers type into screens, and hardware engineers design physical things, and there have been few — if any — places that the twain have met. The two disciplines are poised to merge in the Internet of Things — but it won’t be an easy melding, Breseman allows. Each field carves different neural pathways, inculcates different values.

“Because of that, it has been really hard to figure out things that let people engage with the Internet in a physical sense,” Breseman says. “When we were designing Tessel, we discovered how hugely difficult it is to make an interactive Internet device.”

Still, Tessel and devices like it ultimately will become the machine tools of the Internet of Everything: the forges and lathes where the new infrastructure is built. That’s what Breseman hopes, anyway.

“What we would like,” she muses, “is for people to figure out their needs first and then order Tessels rather than the other way around. By that I mean you should first determine why and how connecting to the Internet physically would augment your life, make it better. Then get a Tessel to help you with your prototypes. We’ll see more and better products that way, and it keeps the emphasis where it belongs — on human beings, not the devices.”


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter.

Four short links: 18 February 2014

  1. Offensive Computer Security — 2014 class notes, lectures, etc. from FSU. All CC-licensed.
  2. Twitter I Love You But You’re Bringing Me Down (Quinn Norton) — The net doesn’t make social problems. It amplifies them until they can’t be ignored. And many other words of wisdom. When you eruditely stop using a service, that’s called sage-quitting.
  3. Inside Google’s Mysterious Ethics Board (Forbes) — nails the three risks to Google’s AI ethics board: (a) compliance-focus, (b) internally-staffed, and (c) only for show.
  4. 10 Things We Forgot to Monitor — devops war stories explaining ten things that bitly now monitors.

February 17 2014

Four short links: 17 February 2014

  1. imsg — use iMessage from the commandline.
  2. Facebook Data Science Team Posts About Love — I tell people, “this is what you look like to SkyNet.”
  3. A System for Detecting Software Plagiarism — the research behind the undergraduate bete noir.
  4. 3D GIFs — this is awesome because brain.

February 14 2014

Four short links: 14 February 2014

  1. Bitcoin: Understanding and Assessing Potential Opportunities (Slideshare) — VC deck on Bitcoin market and opportunities, long-term and short-term. Interesting lens on the development and gaps.
  2. Queensland Police Map Crime Scenes with 3D Scanner (ComputerWorld) — can’t wait for the 3D printed merchandise from famous trials.
  3. Atheer Labs — An immersive 3D display, over a million apps, sub-mm 3D hand interaction, all in 75 grams.
  4. libcloud — Python library for interacting with many of the popular cloud service providers using a unified API.

February 13 2014

Four short links: 13 February 2014

  1. The Common Crawl WWW Ranking — open data, open methodology, behind an open ranking of the top sites on the web. Preprint paper available. (via Slashdot)
  2. Felton’s Sensors (Quartz) — inside the gadgets Nicholas Felton uses to quantify himself.
  3. Myo Armband (IEEE Spectrum) — armband input device with eight EMG (electromyography) muscle activity sensors along with a nine-axis inertial measurement unit (that’s three axes each for accelerometer, gyro, and magnetometer), meaning that you get forearm gesture sensing along with relative motion sensing (as opposed to absolute position). The EMG sensors pick up on the electrical potential generated by muscle cells, and with the Myo on your forearm, the sensors can read all of the muscles that control your fingers, letting them spy on finger position as well as grip strength.
  4. Bitcoin Exchanges Under Massive and Concerted Attack — he who lives by the network dies by the network. A DDoS attack is taking Bitcoin’s transaction malleability problem and applying it to many transactions in the network, simultaneously. “So as transactions are being created, malformed/parallel transactions are also being created so as to create a fog of confusion over the entire network, which then affects almost every single implementation out there,” he added. Antonopoulos went on to say that Blockchain.info’s implementation is not affected, but some exchanges have been affected – their internal accounting systems are gradually going out of sync with the network.