December 25 2013

Four short links: 25 December 2013

  1. Inside Netflix’s HR (HBR) — Which idea in the culture deck was the hardest sell with employees? “Adequate performance gets a generous severance package.” It’s a pretty blunt statement of our hunger for excellence. They talk about how those conversations play out in practice.
  2. Top Science Longreads for 2013 (Ed Yong) — for your Christmas reading.
  3. CocoaSPDY — open source library for SPDY (fast HTTP replacement, supported in Chrome) for iOS and OS X.
  4. The Internet of Things Will Replace the Web — invisible buttons loaded with anticipatory actions keyed from mined sensor data. And we’ll complain it’s slow and doesn’t know that I don’t like The Beatles before my coffee and who wrote this crap anyway?

December 18 2013

Four short links: 18 December 2013

  1. Cyberpunk 2013 — a roleplaying game shows a Gibsonian view of 2013 from 1988. (via Ben Hammersley)
  2. The Future Computer Utility — 1967 prediction of the current state. There are several reasons why some form of regulation may be required. Consider one of the more dramatic ones, that of privacy and freedom from tampering. Highly sensitive personal and important business information will be stored in many of the contemplated systems. Information will be exchanged over easy-to-tap telephone lines. At present, nothing more than trust—or, at best, a lack of technical sophistication—stands in the way of a would-be eavesdropper. All data flow over the lines of the commercial telephone system. Hanky-panky by an imaginative computer designer, operator, technician, communications worker, or programmer could have disastrous consequences. As time-shared computers come into wider use, and hold more sensitive information, these problems can only increase. Today we lack the mechanisms to insure adequate safeguards. Because of the difficulty in rebuilding complex systems to incorporate safeguards at a later date, it appears desirable to anticipate these problems. (via New Yorker)
  3. Lantronix XPort Pro Lx6 — a secure embedded device server supporting IPv6 that is barely larger than an RJ45 connector. The device runs Linux or the company’s Evolution OS, and is destined to be used in wired industrial IoT / M2M applications.
  4. Pond — interesting post-NSA experiment in forward secure, asynchronous messaging for the discerning. Pond messages are asynchronous, but are not a record; they expire automatically a week after they are received. Pond seeks to prevent leaking traffic information against everyone except a global passive attacker. (via Morgan Mayhem)

December 09 2013

Who will upgrade the telecom foundation of the Internet?

Although readers of this blog know quite well the role that the Internet can play in our lives, we may forget that its most promising contributions — telemedicine, the smart electrical grid, distance education, etc. — depend on a rock-solid and speedy telecommunications network, and therefore that relatively few people can actually take advantage of the shining future the Internet offers.

Worries over sputtering advances in bandwidth in the US, as well as an actual drop in reliability, spurred the FCC to create the Technology Transitions Policy Task Force, and to drive discussion of what they like to call the “IP transition”.

Last week, I attended a conference on the IP transition in Boston, one of a series being held around the country. While we tussled with the problems of reliability and competition, one urgent question loomed over the conference: who will actually make advances happen?

What’s at stake and why bids are coming in so low

It’s not hard to tally up the promise of fast, reliable Internet connections. Popular futures include:

  • Delivering TV and movie content on demand
  • Checking on your lights, refrigerator, thermostat, etc., and adjusting them remotely
  • Hooking up rural patients with health care experts in major health centers for diagnosis and consultation
  • Urgent information updates during a disaster, to aid both victims and responders

I could go on and on, but already one can see the outline of the problem: how do we get there? Who is going to actually create a telecom structure that enables everyone (not just a few privileged affluent residents of big cities) to do these things?

Costs are high, but the payoff is worthwhile. Ultimately, the applications I listed will lower the costs of the services they replace or improve life enough to justify an investment many times over. Rural areas — where investment is currently hardest to get — could probably benefit the most from the services because the Internet would give them access to resources that more centrally located people can walk or drive to.

The problem is that none of the likely players can seize the initiative. Let’s look at each one:

Telecom and cable companies
The upgrading of facilities is mostly in their hands right now, but they can’t see beyond the first item in the previous list. Distributing TV and movies is a familiar business, but they don’t know how to extract value from any of the other applications. In fact, most of the benefits of the other services go to people at the endpoints, not to the owners of the network. This has been a sore point with the telecom companies ever since the Internet took off, and spurs them on to constant attempts to hold Internet users hostage and shake them down for more cash.

Given the limitations of the telecom and cable business models, it’s no surprise they’ve rolled out fiber in the areas they want and are actually de-investing in many other geographic areas. Hurricane Sandy brought this to public consciousness, but the problem has actually been mounting in rural areas for some time.

Angela Kronenberg of COMPTEL, an industry association of competitive communications companies, pointed out that it’s hard to make a business case for broadband in many parts of the United States. We have a funny demographic: we’re not as densely populated as the Netherlands or South Korea (both famous for blazingly fast Internet service), nor as concentrated as Canada and Australia, where it’s feasible to spend a lot of money getting service to the few remote users outside major population centers. There’s no easy way to reach everybody in the US.

Governments
Although governments subsidize network construction in many ways — half a dozen subsidies were reeled off by keynote speaker Cameron Kerry, former Acting Secretary of the Department of Commerce — such stimuli can only nudge the upgrade process along, not control it completely. Government funding has certainly enabled plenty of big projects (Internet access is often compared to the highway system, for instance), but it tends to go toward familiar technologies that the government finds safe, and therefore misses opportunities for radical disruption. It’s no coincidence that these safe, familiar technologies are provided by established companies with lobbyists all over DC.

As an example of how help can come from unusual sources, Sharon Gillett mentioned on her panel the use of unlicensed spectrum by small, rural ISPs to deliver Internet to areas that otherwise had only dial-up access. The FCC ruling that opened up “white space” spectrum in the TV band to such use has greatly empowered these mavericks.

Individual consumers
Although we are the ultimate beneficiaries of new technology (and will ultimately pay for it somehow, through fees or taxes), hardly anyone can plunk down the cash for it in advance: the vision is too murky and the reward too far down the road. John Burke, Commissioner of the Vermont Public Service Board, flatly said that consumers choose phone service almost entirely on the basis of price and don’t really find out about its reliability and features until later.

Basically, consumers can’t bet that all the pieces of the IP transition will fall in place during their lifetimes, and rolling out services one consumer at a time is incredibly inefficient.

Internet companies
Google Fiber came up once or twice at the conference, but their initiatives are just a proof of concept. Even if Google became the lynchpin it wants to be in our lives, it would not have enough funds to wire the world.

What’s the way forward, then? I find it in community efforts, which I’ll explore at the end of this article.

Practiced dance steps

Few of the insights in this article came up directly in the Boston conference. The panelists were old hands who had crossed each other’s paths repeatedly, gliding between companies, regulatory agencies, and academia for decades. At the conference, they pulled their punches and hid their agendas under platitudes. The few controversies I saw on stage seemed to be launched for entertainment purposes, distracting from the real issues.

From what I could see, the audience of about 75 people came almost entirely from the telecom industry. I saw just one representative of what you might call the new Internet industries (Microsoft strategist Sharon Gillett, who went to that company after an august regulatory career) and two people who represent the public interest outside of regulatory agencies (speaker Harold Feld of Public Knowledge and Fred Goldstein of Interisle Consulting Group).

Can I get through to you?

Everyone knows that Internet technologies, such as voice over IP, are less reliable than plain old telephone service, but few realize how soon reliability of any sort will be a thing of the past. When a telecom company signs you up for a fancy new fiber connection, you are no longer connected to a power source at the telephone company’s central office. Instead, you get a battery that can last eight hours in case of a power failure. A local power failure may let you stay in contact with outsiders if the nearby mobile phone towers stay up, but a larger failure will take out everything.

These issues have a big impact on public safety, a concern raised at the beginning of the conference by Gregory Bialecki in his role as a Massachusetts official, and repeated by many others during the day.

There are ways around the new unreliability through redundant networks, as Feld pointed out during his panel. But the public and regulators must take a stand for reliability, as the post-Sandy victims have done. The issue in that case was whether a community could be served by wireless connections. At this point, they just don’t deliver either the reliability or the bandwidth that modern consumers need.

Mark Reilly of Comcast claimed at the conference that 94% of American consumers now have access to at least one broadband provider. I’m suspicious of this statistic because the telecom and cable companies have a very weak definition of “broadband” and may be including mobile phones in the count. Meanwhile, we face the possibility of a whole new digital divide consisting of people relegated to wireless service, on top of the old digital divide involving dial-up access.

We’ll take that market if you’re not interested

In a healthy market, at least three companies would be racing to roll out new services at affordable prices, but every new product or service must provide a migration path from the old ones it hopes to replace. Nowhere is this more true than in networks because their whole purpose is to let you reach other people. Competition in telecom has been a battle cry since the first work on the law that became the 1996 Telecom Act (and which many speakers at the conference say needs an upgrade).

Most of the 20th century accustomed people to thinking of telecom as a boring, predictable utility business, the kind that “little old ladies” bought stock in. The Telecom Act was supposed to knock the Bell companies out of that model and turn them into fierce innovators with a bunch of other competitors. Some people actually want to reverse the process and essentially nationalize the telecom infrastructure, but that would put innovation at risk.

The Telecom Act, especially as interpreted later by the FCC, fumbled the chance to enforce competition. According to Goldstein, the FCC decided that a duopoly (baby Bells and cable companies) was enough competition.

The nail in the coffin may have been the FCC ruling that any new fiber providing IP service was exempt from the requirements for interconnection. The sleight of hand that the FCC used to make this switch was a redefinition of the Internet: they conflated the use of IP on the carrier layer with the bits traveling around above, which most people think of as “the Internet.” But the industry and the FCC had a bevy of arguments (including the looser regulation of cable companies, now full-fledged competitors of the incumbent telecom companies), so the ruling stands. The issue then got mixed in with a number of other controversies involving competition and control on the Internet, often muddled together under the term “network neutrality.”

Ironically, one of the selling points that keeps a competitive company such as Granite Telecom in business is reselling existing copper. Many small businesses find that the advantages of fiber are outweighed by the costs, which may include expensive quality-of-service upgrades (such as MPLS), new handsets to handle VoIP, and rewiring the whole office. Thus, Senior Vice President Sam Kline announced at the conference that Granite Telecom is adding a thousand new copper POTS lines every day.

This reinforces the point I made earlier about depending on consumers to drive change. The calculus that leads small businesses to stick with copper may be dangerous in the long run. Besides lost opportunities, it means sticking with a technology that is aging and decaying by the year. Most of the staff (known familiarly as Bellheads) who designed, built, and maintain the old POTS network are retiring, and the phone companies don’t want to bear the increasing costs of maintenance, so reliability is likely to decline. Kline said he would like to find a way to make fiber more attractive, but the benefits are still vaporware.

At this point, the major companies and the smaller competing ones are both cherry picking in different ways. The big guys are upgrading very selectively and even giving up on some areas, whereas the small companies look for niches, as Granite Telecom has. If universal service is to become a reality, a whole different actor must step up to the podium.

A beautiful day in the neighborhood

One hope for change is through municipal and regional government bodies, linked to local citizen groups who know where the need for service is. Freenets, which go back to 1984, drew on local volunteers to provide free Internet access to everyone with a dial-up line, and mesh networks have powered similar efforts in Catalonia and elsewhere. In the 1990s, a number of towns in the US started creating their own networks, usually because they had been left off the list of areas that telecom companies wanted to upgrade.

Despite legal initiatives by the telecom companies to squelch municipal networks, they are gradually catching on. The logistics involve quite a bit of compromise (often, a commercial vendor builds and runs the network, contracting with the city to do so), but many town managers swear that advantages in public safety and staff communications make the investment worthwhile.

The limited regulatory power that cities have over cable companies (power that is sometimes taken away) is a crude instrument, like a potter trying to manipulate clay with tongs. To craft a beautiful work, you need to get your hands right on the material. Ideally, citizens would design their own future. The creation of networks should involve companies and local governments, but also the direct input of citizens.

National governments and international bodies still have roles to play. Burke pointed out that public safety issues, such as 911 service, can’t be fixed by the market, and developing nations have very little fiber infrastructure. So, we need large-scale projects to achieve universal access.

Several speakers also lauded state regulators as the most effective centers to handle customer complaints, but I think the IP transition will be increasingly a group effort at the local level.

Back to school

Education emerged at the conference as one of the key responsibilities that companies and governments share. The transition to digital TV was accompanied by a massive education budget, but in my home town, there are still people confused by it. And it’s a minuscule issue compared to the task of going to fiber, wireless, and IP services.

I had my own chance to join the educational effort on the evening following the conference. Friends from Western Massachusetts phoned me because they were holding a service for an elderly man who had died. They lacked the traditional 10 Jews (the minyan) required by Jewish law to say the prayer for the deceased, and asked me to Skype in. I told them that remote participation would not satisfy the law, but they seemed to feel better if I did it. So I said, “If Skype will satisfy you, why can’t I just participate by phone? It’s the same network.” See, FCC? I’m doing my part.

November 20 2013

IoT meets agriculture, Intellistreets, immersive opera, and high-tech pencils

The Radar team does a lot of sharing in the backchannel. Here’s a look at a selection of stories and innovation highlights from around the web that have caught our recent attention. Have an interesting tidbit to contribute to the conversation? Join the discussion in the comments section, send me an email or ping me on Twitter.

  • Pencil — FiftyThree’s new Pencil tool is a super cool gadget that exhibits interesting technology from a software-meets-hardware perspective. (Via Mike Loukides)
  • littleBits Synth Kit — littleBits and Korg have teamed up to bring DIY to music with a modular synthesizer kit. From John Paul Titlow on Fast Company: “Hobbyists have been doing this for many years, either through prepackaged kits or off-the-shelf components. The difference here is that no soldering or wiring is required. Each circuit piece’s magnetized and color-coded end makes them effortlessly easy to snap together and pull apart … Notably, the components of the kit are compatible with other littleBits modules, so it’s possible to build a synthesizer that integrates with other types of sensors, lights, and whatever else littleBits cooks up down the line.” (Via Jim Stogdill)
  • Why Curated Experiences Are The New Future Of Marketing — From immersive opera to extreme kidnapping to a bungee-jumping car, companies are learning that the key to their customers’ attention (and wallets) lies in the experience. From Krisztina “Z” Holly on Forbes: “There is something about a moment in time that can’t be replicated, an experience that is your very own, an adventure with others that is deeply personal and memorable. It is something that can’t be achieved by a high-budget celebrity endorsement or a large ad buy.” (Via Sara Winge)
  • The Internet of things, stalk by stalk — IoT meets agriculture with Air Tractor planes that can change fertilizer mixtures in-flight and the Rosphere farm robot that monitors crops. From Paul Rako on GigaOm: “Sensors on the robot could monitor each and every stalk of corn. Those robots can communicate with each other over a mesh network. A mesh network is like a chat room for gadgets. They identify themselves and their capabilities, and are then a shared resource.” (Via Jenn Webb)
  • What happens in Vegas DOESN’T stay in Vegas — Vegas’ new wireless street light system, Intellistreets, can record video and audio. From the Daily Mail: “The wireless, LED lighting, computer-operated lights are not only capable of illuminating streets, they can also play music, interact with pedestrians and are equipped with video screens, which can display police alerts, weather alerts and traffic information. The high tech lights can also stream live video of activity in the surrounding area. … These new street lights, being rolled out with the aid of government funding, are also capable of recording video and audio.” (Via Tim O’Reilly)

Additionally, at the recent Techonomy conference, Tim O’Reilly and Max Levchin discussed “Innovation and the Coming Shape of Social Transformation” — have a look:

Four short links: 20 November 2013

  1. Innovation and the Coming Shape of Social Transformation (Techonomy) — great interview with Tim O’Reilly and Max Levchin: “In electronics and in our devices, we’re getting more and more a sense of how to fix things, where they break. And yet as a culture, what we have chosen to do is to make those devices more disposable, not last forever. And why do you think it will be different with people? To me one of the real risks is, yes, we get this technology of life extension, and it’s reserved for a very few, very rich people, and everybody else becomes more disposable.”
  2. Attending a Conference via a Telepresence Robot (IEEE) — interesting idea, and I look forward to giving it a try. The mark of success for the idea, alas, is two bots facing each other having a conversation.
  3. Drone Imagery for OpenStreetMap — 100 acres of 4cm/pixel imagery, in less than an hour.
  4. LG Smart TV Phones Home with Shows and Played Files — welcome to the Internet of Manufacturer Malware.

November 18 2013

Four short links: 18 November 2013

  1. The Virtuous Pipeline of Code (Public Resource) — Chicago partnering with Public Resource to open its legal codes for good. “This is great! What can we do to help?” Bravo Chicago, and everyone else—take note!
  2. Smithsonian’s 3D Data — models of 21 objects, from a gunboat to the Wright Brothers’ plane, to a wooly mammoth skeleton, to Lincoln’s life masks. I wasn’t able to find a rights statement on the site which explicitly governed the 3D models. (via Smithsonian Magazine)
  3. Anki’s Robot Cars (Xconomy) — The common characteristics of these future products, in Sofman’s mind: “Relatively simple and elegant hardware; incredibly complicated software; and Web and wireless connectivity to be able to continually expand the experience over time.” (via Slashdot)
  4. An Empirical Evaluation of TCP Performance in Online Games — We show that because TCP was originally designed for unidirectional and network-limited bulk data transfers, it cannot adapt well to MMORPG traffic. In particular, the window-based congestion control and the fast retransmit algorithm for loss recovery are ineffective. Furthermore, TCP is overkill, as not every game packet needs to be transmitted in a reliable and orderly manner. We also show that the degraded network performance did impact users’ willingness to continue a game.
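The paper’s point about TCP being overkill is why many games send state over UDP instead: each datagram carries the full latest state plus a sequence number, so a lost packet is simply superseded by a newer one rather than retransmitted. A minimal sketch of that “newest wins” idea (the packet format here is a hypothetical illustration, not the paper’s):

```python
import struct

# Each datagram is self-contained: a sequence number plus the full player
# state (x, y). Loss needs no retransmission; a newer packet supersedes
# any missing older one. (Hypothetical format for illustration only.)
HEADER = struct.Struct("!I2f")  # seq (uint32), x, y (float32)

def encode(seq, x, y):
    return HEADER.pack(seq, x, y)

class StateReceiver:
    """Keeps only the newest state; stale or duplicate packets are dropped."""
    def __init__(self):
        self.seq = -1
        self.state = None

    def on_datagram(self, data):
        seq, x, y = HEADER.unpack(data)
        if seq > self.seq:  # newest wins: no head-of-line blocking
            self.seq, self.state = seq, (x, y)

# Packets 0, 1, 3, 4 arrive; packet 2 is lost and 3/4 arrive out of order.
rx = StateReceiver()
for pkt in [encode(0, 0.0, 0.0), encode(1, 1.0, 0.0),
            encode(4, 4.0, 0.0), encode(3, 3.0, 0.0)]:
    rx.on_datagram(pkt)
print(rx.seq)  # 4: play continues with the latest state, no stall
```

With TCP, the lost packet 2 would have stalled delivery of 3 and 4 until retransmission, which is exactly the head-of-line blocking the paper measures.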

November 04 2013

Software, hardware, everywhere

Real and virtual are crashing together. On one side is hardware that acts like software: IP-addressable, controllable with JavaScript APIs, able to be stitched into loosely-coupled systems—the mashups of a new era. On the other is software that’s newly capable of dealing with the complex subtleties of the physical world—ingesting huge amounts of data, learning from it, and making decisions in real time.

The result is an entirely new medium that’s just beginning to emerge. We can see it in Ars Electronica Futurelab’s Spaxels, which use drones to render a three-dimensional pixel field; in Baxter, which layers emotive software onto an industrial robot so that anyone can operate it safely and efficiently; in OpenXC, which gives even hobbyist-level programmers access to the software in their cars; in SmartThings, which ties Web services to light switches.

The new medium is something broader than terms like “Internet of Things,” “Industrial Internet,” or “connected devices” suggest. It’s an entirely new discipline that’s being built by software developers, roboticists, manufacturers, hardware engineers, artists, and designers.

Ten years ago, building something as simple as a networked thermometer required some understanding of electrical engineering. Now it’s a Saturday-afternoon project for a beginner. It’s a shift we’ve already seen in programming, where procedural languages have become more powerful and communities have arisen to offer free help with programming problems. As the blending of hardware and software continues, the physical world will become democratized: the ranks of people who can address physical challenges from lots of different backgrounds will swell.
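To make the thermometer example concrete: a hobbyist build today is often just a microcontroller serving its latest reading as JSON over HTTP, and polling it takes a few lines of standard-library Python. The device address and payload shape below are assumptions for illustration, not any real product’s API:

```python
import json
from urllib.request import urlopen

# Hypothetical networked thermometer that serves its latest reading as
# JSON, e.g. {"celsius": 21.5}. Address and payload shape are assumed.
SENSOR_URL = "http://192.168.1.50/temperature"

def parse_reading(payload: bytes) -> float:
    """Extract the temperature from the device's JSON response."""
    return json.loads(payload)["celsius"]

def read_celsius(url: str = SENSOR_URL) -> float:
    """Poll the sensor once and return the current temperature."""
    with urlopen(url, timeout=2) as resp:
        return parse_reading(resp.read())

print(parse_reading(b'{"celsius": 21.5}'))  # 21.5
```

No electrical engineering required on the reading side: the hardware presents itself as just another web resource.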

The outcome of all of this combining and broadening, I hope, will be a world that’s safer, cleaner, more efficient, and more accessible. It may also be a world that’s more intrusive, less private, and more vulnerable to ill-intentioned interference. That’s why it’s crucial that we develop a strong community from the new discipline.

Solid, which Joi Ito and I will present on May 21 and 22 next year, will bring members of the new discipline together to discuss this new medium at the blurred line between real and virtual. We’ll talk about design beyond the computer screen; software that understands and controls the physical world; new hardware tools that will become the building blocks of the connected world; frameworks for prototyping and manufacturing that make it possible for anyone to create physical devices; and anything else that touches both the concrete and abstract worlds.

Solid’s call for proposals is open to the public, as is the call for applications to the Solid Fellowships—a new program that comes with a stipend, a free pass to Solid, and help with travel expenses for students and independent innovators.

The business implications of the new discipline are just beginning to play out. Software companies are eyeing hardware as a way to extend their offerings into the physical world—think, for instance, of Google’s acquisition of Motorola and its work on a driverless car—and companies that build physical machines see software as a crucial component of their products. The physical world as a service, a business model that’s something like software as a service, promises to upend the way we buy and use machines, with huge implications for accessibility and efficiency. These types of service frameworks, along with new prototyping tools and open-source models, are making hardware design and manufacturing vastly easier.

A few interrelated concepts that I’ve been thinking about as we’ve sketched out the idea for Solid:

  • APIs for the physical world. Abstraction, modularity, and loosely-coupled services—the characteristics that make the Web accessible and robust—are coming to the physical world. Open-source libraries for sensors and microcontrollers are bringing easy-to-use and easy-to-integrate software interfaces to everything from weather stations to cars. Networked machines are defining a new physical graph, much like the Web’s information graph. These models are starting to completely reorder our physical environment. It’s becoming easier to trade off functionalities between hardware and software; expect the proportion of intelligence residing in software to increase over time.
  • Manufacturing made frictionless. Amazon’s EC2 made it possible to start writing and selling software with practically no capital investment. New manufacturing-as-a-service frameworks bring the same approach to building things, making factory work fast and capital-light. Development costs are plunging, and it’s becoming easier to serve niches with specialized hardware that’s designed for a single purpose. The pace of innovation in hardware is increasing as the field becomes easier for entrepreneurs to work in and financing becomes available through new platforms like Kickstarter. Companies are emerging now that will become the Amazon Web Services of manufacturing.
  • Software intelligence in the physical world. Machine learning and data-driven optimization have revolutionized the way that companies work with the Web, but the kind of sophisticated knowledge that Amazon and Netflix have accumulated has been elusive in the offline world. Hardware lets software reach beyond the computer screen to bring those kinds of intelligence to the concrete world, gathering data through networked sensors and exerting real-time control in order to optimize complicated systems. Many of the machines around us could become more efficient simply through intelligent control: a furnace can save oil when software, knowing that homeowners are away, turns down the thermostat; a car can save gas when Google Maps, polling its users’ smartphones, discovers a traffic jam and suggests an alternative route—the promise of software intelligence that works above the level of a single machine. The Internet stack now reaches all the way down to the phone in your pocket, the watch on your wrist, and the thermostat on your wall.
  • Every company is a software company. Software is becoming an essential component of big machines for both the builders and the users of those machines. Any company that owns big capital machines needs to get as much out of them as possible by optimizing their operation with software, and any company that builds machines must improve and extend them with layers of software in order to be competitive. As a result, a software startup with promising technology might just as easily be bought by a big industrial company as by a Silicon Valley software firm. This has important organizational, cultural, and competency impact.
  • Complex systems democratized. The physical world is becoming accessible to innovators at every level of expertise. Just as it’s possible to build a Web page with only a few hours’ learning, it’s becoming easier for anyone to build things, whether electronic or not. The result: realms like the urban environment that used to be under centralized control by governments and big companies are now open to innovation from anyone. New economic models and communities will emerge in the physical world just as they’ve emerged online in the last twenty years.
  • The physical world as a service. Anything from an Uber car to a railroad locomotive can be sold as a service, provided that it’s adequately instrumented and dispatched by intelligent software. Good data from the physical world brings about efficient markets, makes cheating difficult, and improves quality of service. And it will revolutionize business models in every industry as service guarantees replace straightforward equipment sales. Instead of just selling electricity, a utility could sell heating and cooling—promising to keep a homeowner’s house at 70 degrees year round. That sales model could improve efficiency and quality of life, bringing about incentive for the utility to invest in more efficient equipment and letting it take advantage of economies of scale.
  • Design after the screen. Our interaction with software no longer needs to be mediated through a keyboard and screen. In the connected world, computers gather data through multiple inputs outside of human awareness and intuit our preferences. The software interface is now a dispersed collection of conventional computers, mobile phones, and embedded sensors, and it acts back onto the world through networked microcontrollers. Computing happens everywhere, and it’s aware of physical-world context.
  • Software replaces physical complexity. A home security system is no longer a closed network of motion sensors and door alarms; it’s software connected to generic sensors that decides when something is amiss. In 2009, Alon Halevy, Peter Norvig, and Fernando Pereira wrote that having lots and lots of data can be more valuable than having the most elegant model. In the connected world, having lots and lots of sensors attached to some clever software will start to win out over single-purpose systems.
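The home-security case in the last bullet reduces to a few lines of software over generic sensor events. The sensor names and the rule itself are illustrative assumptions, not any product’s actual logic:

```python
from dataclasses import dataclass

# Generic sensors report events; a software rule, not a dedicated alarm
# circuit, decides when something is amiss. (Illustrative sketch only.)
@dataclass
class Event:
    sensor: str   # e.g. "front_door", "hall_motion"
    value: bool   # True = open / motion detected

def amiss(mode: str, events: list[Event]) -> bool:
    """Armed-away treats any activity as suspicious;
    armed-home worries only about doors opening."""
    if mode == "away":
        return any(e.value for e in events)
    if mode == "home":
        return any(e.value for e in events if "door" in e.sensor)
    return False

events = [Event("hall_motion", True), Event("front_door", False)]
print(amiss("away", events))  # True: motion while nobody should be home
print(amiss("home", events))  # False: motion is expected, door is shut
```

Changing the security policy is now a code edit, not a rewiring job, which is the sense in which software replaces physical complexity.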

These are some rough thoughts about an area that we’ll all spend the next few years trying to understand. This is an open discussion, and we welcome thoughts on it from anyone.

November 03 2013

Podcast: the Internet of Things should work like the Internet

At our OSCON conference this summer, Jon Bruner, Renee DiResta and I sat down with Alasdair Allan, a hardware hacker and O’Reilly author; Josh Marinacci, a researcher with Nokia; and Tony Santos, a user experience designer with Mozilla. Our discussion focused on the future of UI/UX design, from the perils of designing from the top down to declining diversity in washing machines to controlling your car from anywhere in the world.

Here are some highlights from our chat:

  • Alasdair’s Ignite talk on the bad design of UX in the Internet of Things: every widget, dial, and slider you add is a delayed design decision that you’re pushing onto the user. (1:55 mark)
  • Looking at startups working in the Internet of Things, design seems to be “pretty far down on the general level of importance.” Much of the innovation is happening on Kickstarter and is driven by hardware hackers, many of whom don’t have design experience — and products are often designed as an end to themselves, as opposed to parts of a connected ecosystem. “We’re not building an Internet of Things, we’re building a series of islands…we should be looking at systems.” (3:23)
  • Top-down approach in the Internet of Things isn’t going to work — large companies are developing proprietary systems of things designed to lock in users. (6:05)
  • Consumer inconvenience is becoming an issue — “We’re creating an experience where I have to buy seven things to track me…not to mention to track my house….at some point there’s going to be a tipping point where people say, ‘that’s enough!’” (8:30)
  • Anti-overload Internet of Things user experience — mini printers and bicycle barometers. “The Internet of Things has to work like the Internet.” (11:07)
  • Is the Internet of Things following suit with the declining diversity we’ve seen in the PC space? Apple has imposed a design quality onto other companies’ laptops…in the same vein, will we see a slowly declining diversity in washing machines, for instance? Or have we wrung out all the mechanical improvements that can possibly be made to a washing machine with current technology, opening the door for innovation through software enhancements? (17:30)
  • Tesla’s cars have a RESTful API; you can monitor and control the car from anywhere — “You can pop the sunroof while you’re doing 90 mph down the motorway, or worse than that, you can pop someone else’s sunroof.” Security holes are an increasing issue in the industrial Internet. (22:42)
  • “The whole point of the Internet of Things — and technology in general — should be to reduce the amount of friction in our lives.” (26:46)
  • Are we heading toward a world of zero devices, where the objects themselves are the interfaces? For the mainstream, it will depend on design and the speed of maturing technology. (30:53)
  • Sometimes a thing’s utility outweighs its poor user experience and it becomes widely adopted anyway — we’re looking at you, WiFi. (33:19)
  • The speed of technology turnover is vastly increasing — in 100 years, platforms and services like Kickstarter will be viewed as major drivers of the second industrial revolution. “We’re really going back to the core of innovation — if you offer up an idea, there will be enough people out there who want that idea to build it and make it, and that will cause someone else to have another idea.” (34:08)

Links related to our discussion:

Subscribe to the O’Reilly Radar Podcast through iTunes, SoundCloud, or directly through our podcast’s RSS feed.

October 29 2013

Four short links: 29 October 2013

  1. Mozilla Web Literacy Standard — things you should be able to do if you’re to be trusted to be on the web unsupervised. (via BoingBoing)
  2. Berg Cloud Platform — hardware (shield), local network, and cloud glue. Caution: magic ahead!
  3. Shark — a large-scale data warehouse system for Spark designed to be compatible with Apache Hive. It can execute Hive QL queries up to 100 times faster than Hive without any modification to the existing data or queries. Shark supports Hive’s query language, metastore, serialization formats, and user-defined functions, providing seamless integration with existing Hive deployments and a familiar, more powerful option for new ones. (via Strata)
  4. The Malware of Things — a technician opening up an iron included in a batch of Chinese imports to find a “spy chip” with what he called “a little microphone”. Its correspondent said the hidden devices were mostly being used to spread viruses, by connecting to any computer within a 200m (656ft) radius which was using unprotected Wi-Fi networks.

October 28 2013

Four short links: 28 October 2013

  1. A Cyber Attack Against Israel Shut Down a Road — The hackers targeted the Tunnels’ camera system which put the roadway into an immediate lockdown mode, shutting it down for twenty minutes. The next day the attackers managed to break in for even longer during the heavy morning rush hour, shutting the entire system for eight hours. Because all that is digital melts into code, and code is an unsolved problem.
  2. Random Decision Forests (PDF) — “Due to the nature of the algorithm, most Random Decision Forest implementations provide an extraordinary amount of information about the final state of the classifier and how it derived from the training data.” (via Greg Borenstein)
  3. BITalino — 149 Euro microcontroller board full of physiological sensors: muscles, skin conductivity, light, acceleration, and heartbeat. A platform for healthcare hardware hacking?
  4. How to Be a Programmer — a braindump from a guru.

October 07 2013

Four short links: 7 October 2013

  1. The Thing System — connects to Things in your home, whether those things are media players such as the Sonos or the Apple TV, your Nest thermostat, your INSTEON home control system, or your Philips Hue lightbulbs — whether your things are connected together via Wi-Fi, USB or Bluetooth Low Energy (BLE). The steward will find them and bring them together so they can talk to one another and perform magic.
  2. The Eye Tribe — $99 eye-tracker with SDK.
  3. Line Mode — CERN emulator for the original web client. I remember coding for this, and hacking new features into it. Roar says the dinosaur, in 80×24 pixelated glory.
  4. 2M Person Internet Filter — (BBC) China apparently employs 2 million people to read Weibo and other Internet content sites, to identify critical opinions. That’s 40% of my country’s population. Crikey.

September 24 2013

Four short links: 24 September 2013

  1. Measuring Heart Rate with a Smartphone Camera — not yet realtime, but promising sensor development.
  2. iBeacons — low-power, short-distance location monitoring beacons. Any iOS device that supports Bluetooth Low Energy can become an iBeacon, and can detect other iBeacons when they are nearby. Apps can be notified when iBeacons move in and out of range of the device, and can monitor the proximity of iBeacons as their proximity changes over time.
  3. Analysis: The Quantified Self (BBC) — radio show on QS. Good introduction for the novice.
  4. Tinke — heart rate, blood oxygen, respiration rate, and heart rate variability in a single small sensor that plugs into your iOS device.
    September 04 2013

    Four short links: 4 September 2013

    1. MegaPWN (GitHub) — Your MEGA master key is supposed to be a secret, but MEGA or anyone else with access to your computer can easily find it without you noticing. Browser crypto is only as secure as the browser and the code it runs.
    2. hammer.js (GitHub) — a Javascript library for multitouch gestures.
    3. When Smart Homes Get Hacked (Forbes) — Insteon’s flaw was worse in that it allowed access to anyone via the Internet. The researchers could see the exposed systems online but weren’t comfortable poking around further. I was — but I was definitely nervous about it and made sure I had Insteon users’ permission before flickering their lights.
    4. A Stick Figure Guide to Advanced Encryption Standard (AES) — exactly what it says.

    August 20 2013

    If This/Then That (IFTTT) and the Belkin WeMo

    Like most good technologists, I am lazy.  In practice, this sometimes means that I will work quite hard with a computer to automate a task that, for all intents and purposes, just isn’t that hard.  In fits and starts for the past 10 years, I have been automating my house in various ways.  It makes my life easier when I am at home, though it does mean that friends who watch my house when I’m gone need to be briefed on how to use it.  If you are expecting to come into my house and use light switches and the TV as you do every place else, well, that’s why you need a personalized orientation to the house.

    In this post, I’ll talk briefly about one of the most basic automation tasks I’ve carried out, which is about how the lights in my house are controlled.

    The humble light switch was invented in the late 19th century, with the “modern” toggle switch following in the early 20th century.  The toggle switch has not changed in about 100 years because it does exactly what is needed and is well understood.  The only disadvantage to the toggle switch is that you have to touch it to operate it, and that means getting off the couch.

    My current home is an older home that was previously a storefront.  All the plumbing is on one wall of the house, which I sometimes jokingly call “the wall of modernity.”  The telephone demarc is on the opposite side, though I’ve made some significant modifications to the telephone wiring from the days when I was using Asterisk more extensively.  As a large open space, there is not much built-in lighting, so most of the lights are freestanding floor lamps.

    The initial “itch” that I was scratching with my first home automation project was a consistent forgetfulness to turn off lights when I went to bed — I would settle into bed up in the loft and then realize that I needed to get up and turn off a light.  At that point, remote controlled lights seemed pretty darn attractive.

    My first lighting automation was done with Insteon, a superset of the venerable 1970s-era X10 protocol.  With Insteon controls, I could use an RF remote and toggle lights on and off from a few locations within the house.

    With the hardware in place, I turned to finding software.  An RF remote is a good start, but I wanted to eliminate even that.  What I found is that the home automation community doesn’t really have off-the-shelf software that “just works” and lets you start doing things out of the box.  I used Mister House for a while, but at the time I was using it, the Insteon support was pretty new.  I worked with a friend writing some automation code — well, to be honest, I was a tester for him — but we were solving problems unique to our homes, not writing something that was ultimately going to be the answer for our parents.

    Earlier this year, I was introduced to IFTTT, which stands for “If This, Then That.”  It is the closest thing I have seen to a generic software stack that can readily be used for home automation purposes.  IFTTT can be used for much more than just home automation, but I’m going to stick to that restricted use for the purpose of this post.

    There is much to like about IFTTT, so I’ll focus on just a few attributes:

    • Programming without knowing programming.  I’ve taken enough computer science courses to know how to write a program, but I’m not good enough to be a professional programmer.  IFTTT lets you write rules — essentially, programs that control objects in the physical world — without the high bar of learning specialized syntax or all the system administration work that goes along with running a program.
    • Extending control beyond my home.  Traditional home automation has been based on sensors inside the home.  In order to respond to something, there needs to be a sensor to gather that data.  To turn the lights on after dark, you need a sensor that tells your house it’s dark.  To do something when it’s raining, you need a sensor that gets wet, and so forth.  As you’ll see, IFTTT lets you choose a location and use information from Internet services to assemble the context.
    • One of the cleanest and easiest designs I’ve seen.  I know everybody brags about design, but IFTTT is so simple that a colleague of mine in sales uses IFTTT extensively.  (Insert your own joke about the technical competence of sales representatives here.)

    To act in the physical realm of my house, though, IFTTT requires devices that can take action.  The Belkin WeMo is a family of products, including a remote-controlled electrical outlet, light switch, and motion sensor.  The WeMo uses Wi-Fi to connect to my home network, so it can be controlled by any device on my network, or any service on the Internet.

    So, let’s start out with a simple idea: I have a light, and I’d like to turn the light on when it’s dark.  In traditional home automation, I can easily do that by picking a time to turn the light on and using the same time every day.  If I pick a time that is appropriate for the winter, though, the light will come on too early in June.  If I pick a time that is right for June, I’ll have to get up and turn the lights on in December when it gets dark.  Or, using traditional home automation, I could find a source of sunset times, pull them in manually, and hope the data feed I’m using never changes.

    That is, until IFTTT.  In this example, I’ll show you how to set up a WeMo to have a light that turns on at sunset.  IFTTT mini-programs are called “Recipes” and consist of a trigger and an action.  Building a recipe is a straightforward guided process that begins by picking the trigger:

    IFTTT mini-programs

    IFTTT’s interaction with the world is organized into “channels,” and more channels are being added on a routine basis.  The first step in creating a recipe is to choose the trigger:

    choose the trigger

    Triggers are organized alphabetically.  I cut the screen shot off after F because I wanted to get down to W for Weather and WeMo.  The first time you select the weather channel, you’ll be prompted to set a location; I chose my home in San Francisco:

    set a location

    The weather channel has a number of components that you can choose to use as a trigger.  Some are expected, such as the weather forecast.  However, the weather channel also allows you to take actions when conditions change.  The weather data used by IFTTT is also rich enough to have pollen count and UV index, which are not always readily available from many forecasts.  For the purpose of our example, we’ll be using the “sunset” option, though it’s easy to imagine creating triggers to control lights when the sky becomes cloudy or turning on air filtration when the pollen count rises:

    the sunset option

    Some triggers require additional configuration.  Sunset is straightforward because it’s known for the location that you chose:

    additional configuration

    Here at step three, the first part of the rule is done.  We have set up a rule that will fire every day at sunset.  IFTTT helpfully fills in the “this” part of our rule in a way that makes it obvious what we’re doing:

    first part of the rule is done

    The second part of writing a rule is to lay out the action (the “that”) to take when the trigger fires.  Once again, we choose a channel to take the action, choosing from the large number of channels that are available:

    lay out the action

    We’re trying to control lights plugged into a Belkin WeMo, so we’ll scroll down to “W” and pick the WeMo.  It has the actions that you might expect from what is essentially a power switch: turn off the outlet, turn on the outlet, blink the outlet, or toggle the outlet.  In our case, we want to turn on the outlet to turn on the light:

    turn the outlet to turn the light on

    From the menu of options, choose “turn on.”  IFTTT supports having multiple WeMo switches, each of which can be named.  A drop-down allows you to choose the switch being controlled by the rule so that many different devices can be controlled.  For example, a coffeepot might be turned on when an alarm goes off, different lights might be controlled by different rules, and so forth:

    drop-down allows you to choose the switch being controlled

    The last step in creating a rule is to finalize the action.  IFTTT helpfully displays the rule in full in a nice simple form.  Is there any doubt about what the rule we’ve just created will do?

    finalize the action

    In my personal setup, I use a rule that turns off the light at 10 p.m. as a reminder to go to bed.  There is not yet an easy way to say “turn off the light two hours after it turned on” because IFTTT doesn’t hold much state (yet).
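
Conceptually, a recipe is just a trigger (“this”) bound to an action (“that”). Here is a minimal sketch of that structure in Python; the names and the WeMo message are hypothetical, for illustration only, and bear no relation to how IFTTT is actually implemented:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recipe:
    """An IFTTT-style rule: when the trigger fires, run the action."""
    trigger: Callable[[dict], bool]  # "this": a predicate over current conditions
    action: Callable[[], str]        # "that": the side effect to perform

    def evaluate(self, conditions: dict):
        return self.action() if self.trigger(conditions) else None

# "If sunset, then turn on the WeMo outlet named 'living room lamp'."
lamp_on = Recipe(
    trigger=lambda c: c["event"] == "sunset",
    action=lambda: "WeMo outlet 'living room lamp' -> ON",
)

print(lamp_on.evaluate({"event": "sunset"}))  # fires the action
print(lamp_on.evaluate({"event": "noon"}))    # does nothing
```

IFTTT’s lack of state, mentioned above, falls out of this shape naturally: the trigger sees only current conditions, not the history of past firings.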

    One aspect of IFTTT that I recently appreciated was how easy it is to change the rule.  When going out of town, I decided to have the lights on all night so that pet sitters wouldn’t have to figure out all the ways in which lights could be turned on.  I simply deactivated the “turn off at 10 p.m.” rule, which looked like this:

    deactivated the “turn off at 10 pm” rule

    I then replaced it with a rule that turned off the lights at sunrise.  (The lights in question are a low-power strand of LED lights.)

    replaced it with a rule that turned off the lights at sunrise

    Could I have accomplished the same tasks in other home automation systems?  Absolutely, but it would have taken me much longer to get to my end goal, and I would have had to do significantly more testing to believe that my automation would behave as expected.  The combination of IFTTT and the WeMo makes setup much easier and more accessible.

    August 06 2013

    The end of integrated systems

    I always travel with a pair of binoculars and, to the puzzlement of my fellow airline passengers, spend part of every flight gazing through them at whatever happens to be below us: Midwestern towns, Pennsylvania strip mines, rural railroads that stretch across the Nevada desert. Over the last 175 years or so, industrialized America has been molded into a collection of human patterns — not just organic New England villages in haphazard layouts, but also the Colorado farm settlements surveyed in strict grids. A close look at the rectangular shapes below reveals minor variations, though. Every jog that a street takes is a testament to some compromise between mankind and our environment that results in a deviation from mathematical perfection. The world, and our interaction with it, can’t conform strictly to top-down ideals.

    We’re about to enter a new era in which computers will interact aggressively with our physical environment. Part of the challenge in building the programmable world will be finding ways to make it interact gracefully with the world as it exists today. In what you might call the virtual Internet, human information has been squeezed into the formulations of computer scientists: rigid relational databases, glyphs drawn from UTF-8, information addressed through the Domain Name System. To participate in it, we converse in the language and structures of computers. The physical Internet will need to understand much more flexibly the vagaries of human behavior and the conventions of the built environment.


    A few weeks ago, I found myself explaining the premise of the programmable world to a stranger. He caught on when the conversation turned to driverless cars and intelligent infrastructure. “Aha!” he exclaimed, “They have these smart traffic lights in Los Angeles now; they could talk to the cars!”

    The idea that centralized traffic computers will someday communicate wirelessly with vehicles, measuring traffic conditions and directing cars, was widely discussed a few years ago, but now it struck me as outdated. Computers can easily read traffic signals with machine vision the same way that humans do — by interpreting red, yellow, and green lights in the context of an intersection’s layout. On the other side, traffic signals often use cameras to analyze volumes and adjust to them. With these kinds of flexible-sensing technologies perfected, there will be no need for rigid, integrated systems to link cars to traffic lights. The example will repeat itself in other cases where software gathers data from sensors and controls physical devices.

    The world is becoming programmable, with physical devices drawing intelligence from layers of software that can connect to widespread sensor networks and optimize entire systems. Cars drive themselves, thermostats silently adjust to our apparent preferences, wind turbines gird themselves for stormy gusts based on data collected from other turbines. These services are increasingly enabled not by rigid software infrastructure that connects giant systems end-to-end, but by ambient data-gathering, Web-like connections, and machine-learning techniques that help computers interact with the same user interfaces that humans rely on.

    We’re headed for a re-hash of the API/Web scraping debate, and even more than on the Web, scraping is poised for an advantage since physical-world conventions have been under refinement for millennia. It’s remarkably easy to differentiate a men’s room door from a women’s room door, or to tell whether a door is locked by looking at the deadbolt, and software that can understand these distinctions is already available.

    There will, of course, still be a very big place for integrated systems in the physical Internet. Environments built from scratch; machines that operate in formalized, well-contained environments; and systems whose first priority is reliability and clarity will all make good use of traditional APIs. Trains will always communicate with dispatchers through formal integrated systems, just as Twitter’s API is likely the best foundation for a Twitter app regardless of how sophisticated your Twitter-scraping software might be.

    But building a dispatching system for a railroad is nothing like developing a car that could communicate, machine-to-machine, with any traffic light it might encounter. Governments would need to invest billions in retrofitting their signaling systems, cars would need to accommodate the multitude of standards that would emerge from the process, and in order for adoption to be practical, traffic-signal retrofits would need to be universal. It’s better to gently build systems layer by layer that can accept human convention as it already exists, and the technology to do that is emerging now.

    A couple of other observations about the superiority of loosely-coupled services in the physical world:

    • Google Maps, Waze, and Inrix all harvest traffic data ambiently from connected devices that constantly report location and speed. The alternative, described before networked navigation devices were common, was to gather this data through connected infrastructure. State departments of transportation, spurred by the SAFETEA-LU transportation bill of 2005 (surely the most tortured acronym in Washington), invested heavily in “smart transportation” infrastructure like roadway sensors and digital overhead signs, with the aim of, for instance, directing traffic around congestion. Compared to any of the Web services, these systems are slow to react, have poor coverage, are inflexible, and are incredibly expensive. The Web services make collateral use of infrastructure — gathering data quietly from devices that were principally installed for other purposes.
    • A few weeks ago I spent a storm-drenched six hours in one of O’Hare Airport’s lower-ring terminals. As we waited for the storm to clear and flights to resume, we watched the data screen over the gate — the Virgil in our inferno — which periodically showed a weather map and updated our predicted departure time. It was clear to anyone with a smartphone, though, that the information it displayed, at the end of a long chain of data transmission and reformatting that went from weather services and the airline’s management system through the airport’s display system, was out of date. The flight updates arrived on the airline’s Web site 10 to 15 minutes before they appeared on the gate screen; the weather map, which showed our storm crossing the Mississippi River even as it appeared over the airfield, was a consistent hour behind. By following updates on the Internet, customers were able to subvert an otherwise-rigid and hierarchical information flow, finding data where it was freshest and most accurate among loosely-coupled services online. Software that can automatically seek out the highest-quality information, flexibly switching between sources, will in turn provide the highest-quality intelligence.

    The API represents the world on the computers’ terms: human output constrained by programming conventions. Computers are on their way to participating in the world in human terms, accepting winding roads and handwritten signs. In other words, they’ll soon meet us halfway.

    July 23 2013

    Interactive map: bike movements in New York City and Washington, D.C.

    From midnight to 7:30 A.M., New York is uncharacteristically quiet, its Citibikes — the city’s new shared bicycles — largely stationary and clustered in residential neighborhoods. Then things begin to move: commuters check out the bikes en masse in residential areas across Manhattan and, over the next two hours, relocate them to Midtown, the Flatiron district, SoHo, and Wall Street. There they remain concentrated, mostly used for local trips, until they start to move back outward around 5 P.M.

    Washington, D.C.’s bike-share program exhibits a similar pattern, though, as you’d expect, the movement starts a little earlier in the morning. On my animated map, both cities look like they’re breathing — inhaling and then exhaling once over the course of 12 hours or so.

    The map below shows availability at bike stations in New York City and the Washington, D.C. area across the course of the day. Solid blue dots represent completely-full bike stations; white dots indicate empty bike stations. Click on any station to see a graph of average availability over time. I’ve written a few thoughts on what this means about the program below the graphic.

    We can see some interesting patterns in the bike share data here. First of all, use of bikes for commuting is evidently highest in the residential areas immediately adjacent to dense commercial areas. That makes sense; a bike commute from the East Village to Union Square is extremely easy, and that’s also the sort of trip that tends to be surprisingly difficult by subway. The more remote bike stations in Brooklyn and Arlington exhibit fairly flat availability profiles over the course of the day, suggesting that to the degree they’re used at all, it’s mostly for local trips.

    A bit about the map: I built this by scraping the data feeds that underlie the New York and Washington real-time availability maps every 10 minutes and storing them in a database. (Here is New York’s feed; here is Washington’s.) I averaged availability by station in 10-minute increments over seven weekdays of collected data. The map uses JavaScript (mostly jQuery) to manipulate an SVG image — changing the opacity of bike-share stations to represent availability and rendering a graph every time a station is clicked. I used Python and MySQL for the back-end work of collecting the data, aggregating it, and publishing it to a JSON file that the front-end code downloads and parses.
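
The aggregation step is simple enough to sketch. The code below (with invented field names, not the actual feed schema) buckets observations into 10-minute increments per station and averages them, which is essentially what the back end described above does:

```python
from collections import defaultdict

def average_availability(observations):
    """observations: (station_id, minutes_since_midnight, bikes_available) tuples.
    Returns {(station_id, bucket_start): mean availability} per 10-minute bucket."""
    sums = defaultdict(lambda: [0, 0])  # key -> [running total, count]
    for station, minute, bikes in observations:
        key = (station, minute // 10 * 10)  # floor the time to its 10-minute bucket
        sums[key][0] += bikes
        sums[key][1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

obs = [
    ("E 7 St & Ave A", 481, 10),  # 8:01 a.m.
    ("E 7 St & Ave A", 488, 6),   # 8:08 a.m., same bucket
    ("E 7 St & Ave A", 492, 2),   # 8:12 a.m., next bucket
]
print(average_availability(obs))
# {('E 7 St & Ave A', 480): 8.0, ('E 7 St & Ave A', 490): 2.0}
```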

    This map, by the way, is an extremely simple example of what’s possible when the physical world is instrumented and programmable. I’ve written about sensor-laden machinery in my research report on the industrial internet, and we plan to continue our work on the programmable world in the coming months.

    July 17 2013

    Four short links: 17 July 2013

    1. Hideout — augmented reality books. (via Hacker News)
    2. Patterns and Practices for Open Source Software Success (Stephen Walli) — Successful FOSS projects grow their communities outward to drive contribution to the core project. To build that community, a project needs to develop three onramps for software users, developers, and contributors, and ultimately commercial contributors.
    3. How to Act on LKML — Linus’s tantrums are called out by one of the kernel developers in a clear and positive way.
    4. Beyond the Coming Age of Networked Matter (BoingBoing) — Bruce Sterling’s speculative short story, written for the Institute For The Future. “Stephen Wolfram was right about everything. Wolfram is the greatest physicist since Isaac Newton. Since Plato, even. Our meager, blind physics is just a subset of Wolfram’s new-kind-of-science metaphysics. He deserves fifty Nobels.” “How many people have read that Wolfram book?” I asked him. “I hear that his book is, like, huge, cranky, occult, and it drives readers mad.” “I read the forbidden book,” said Crawferd.

    June 26 2013

    $20,000 and a trip to Shenzhen

    Manufacturing is rapidly becoming more accessible to people whose expertise lies elsewhere. The change is most apparent at the small scale, where it’s become easy to order prototypes made on high-quality 3D printers and electronics in small batches from domestic factories. High-volume Chinese manufacturing has been tougher to get into.

    A new incubator launching today, and led by our former O’Reilly colleague Brady Forrest, is aimed at lowering the barriers to getting physical goods manufactured fast and in high volumes. Highway1 will prepare nascent hardware companies to enter the accelerator pipeline of the Sino-Irish supply-chain giant PCH International. It offers portfolio companies up to $20,000 and a hardware crash-course that includes a trip to the factories of Shenzhen. Forrest says his curriculum will eventually be made public (minus the China junket, of course).

    The successful companies that progress to PCH’s accelerator will have PCH as both an investor and supply-chain manager, essentially drawing from the same network that supplies some of Silicon Valley’s bestsellers.

    Forrest put it to me this way: “There is no Amazon Web Services for hardware, but we’re the closest thing to it.”

    June 17 2013

    Four short links: 17 June 2013

    1. Weekend Reads on Deep Learning (Alex Dong) — an article and two videos unpacking “deep learning” such as multilayer neural networks.
    2. The Internet of Actual Things“I have 10 reliable activations remaining,” your bulb will report via some ridiculous light-bulbs app on your phone. “Now just nine. Remember me when I’m gone.” (via Andy Baio)
    3. Announcing the Mozilla Science Lab (Kaitlin Thaney) — We also want to find ways of supporting and innovating with the research community – building bridges between projects, running experiments of our own, and building community. We have an initial idea of where to start, but want to start an open dialogue to figure out together how to best do that, and where we can be of most value.
    4. NAND to TetrisThe site contains all the software tools and project materials necessary to build a general-purpose computer system from the ground up. We also provide a set of lectures designed to support a typical course on the subject. (via Hacker News)

    June 14 2013

    Networked Things?

    Well over a decade ago, Bill Joy was mocked for talking about a future that included network-enabled refrigerators. That was both unfair and unproductive, and since then, I’ve been interested in a related game: take the most unlikely household product you can and figure out what you could do if it were network-enabled. That might have been a futuristic exercise in 1998, but the future is here. Now. And there are few reasons we couldn’t have had that future back then, if we’d had the vision.

    So, what are some of the devices that could be Internet-enabled, and what would that mean? We’re already familiar with the Nest; who would have thought even five years ago that we’d have Internet-enabled thermostats?

    From the thermostat, it’s an easy jump to the furnace. A furnace may be the dullest appliance in the house. Except when it breaks. The first November after we bought our house, the furnace broke down. The furnace guy came and fixed it. And fixed it again a few days later. And fixed it again a few days later … seven times during the month of November, until he finally said, “I have a bad feeling about this,” and replaced the part he had replaced on the first service call, which turned out to be defective in exactly the same way as the original.

    Fortunately, we had a service contract and got most of a new furnace out of the deal. But what impressed me was that I didn’t really appreciate waking up at 2 a.m. wondering why it was cold, noticing that the furnace wasn’t running, calling the answering service, and waiting for the guy to get here. And I wondered why the furnace couldn’t have a little Internet-connected controller that would notice it wasn’t working and send the repair service a message saying “fix me” — bonus points for sending diagnostic information so the technician could make sure he had the right part. Then the tech could call me and say, “We’ll be at your house in 15 minutes; just unlock the door and go back to bed.” That’s a good reason for the phone to ring at 2 a.m.
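    The 2 a.m. scenario above amounts to a simple watchdog: sample the furnace’s state, and when the house is well below the setpoint but the burner is idle, build a diagnostic report for the repair service instead of waiting for someone to wake up cold. A minimal sketch in Python — the field names and the five-degree threshold are made up for illustration, and actually shipping the report to a service company’s API is left out:

    ```python
    import time

    def build_fault_report(setpoint_f, room_temp_f, burner_on, threshold_f=5.0):
        """Return a diagnostic report if the furnace should be heating but
        isn't, otherwise None. Field names here are hypothetical; a real
        controller would add model and error-code details so the technician
        arrives with the right part."""
        if room_temp_f < setpoint_f - threshold_f and not burner_on:
            return {
                "fault": "no_heat",
                "setpoint_f": setpoint_f,
                "room_temp_f": room_temp_f,
                "reported_at": time.time(),
            }
        return None
    ```

    Run periodically, `build_fault_report(68.0, 61.5, burner_on=False)` produces a report to send, while a house holding near its setpoint produces nothing.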

    So that’s one. My next Internet-enabled device fantasy is our water system. Like many residents of formerly rural suburbs, I have a private well. And one of the things I dread is groundwater pollution: every few years, you read about an area where the well water goes bad because someone dumped pollutants (dry cleaning fluid, used motor oil, you name it) that eventually made it to the water table. The polluter has frequently gone out of business, so there’s nothing you can do but try to persuade the water company to extend service to your area. Wouldn’t it be worthwhile if the state were flooded with water sensors (one on every well) that would allow you to trace these events and predict their spread? Wouldn’t it be great if the water company could start running the pipes before your tap water started smelling like tetrachloroethylene? That’s easily within modern sensor network technology. A daily update to a central database from each of a few hundred thousand homes isn’t a big deal. I bet the sensors themselves would be quite simple, and I’m sure the local water companies already have sensors (and networks) on their wells.
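    The central-database side of that sensor network is not much code either: collect one reading per well per day and flag anything over a contamination limit, so the water company can see a plume forming before the taps smell. A sketch, where the 5 ppb default is only a placeholder and not a real regulatory threshold:

    ```python
    def flag_contaminated_wells(readings, limit_ppb=5.0):
        """Given one day's readings as (well_id, contaminant_ppb) pairs,
        return the IDs of wells over the limit. With well locations added,
        the same data would let you map and predict a plume's spread."""
        return [well_id for well_id, ppb in readings if ppb > limit_ppb]
    ```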

    Let’s move downstream from the well to more familiar appliances. What about dishwashers? At Foo camp last year, Jason Huggins (@hugs), founder of Sauce Labs and BitBeam, said, “Robotics is always in the future. Every once in a while, a piece of it breaks off and becomes part of the present. Then we get used to having it around, and it stops being robotics. But someone who lived in the 40s would be amazed by a modern dishwasher.” So, how smart could a dishwasher really be? Loading and unloading sounds like a hard problem. But for those occasions when you’ve just left home and realized that you’ve forgotten to start the dishwasher, being able to start it remotely would be useful. Being able to sense what was in it and adjust the cycle accordingly would also be useful. An Internet-enabled dishwasher could also access a rate API from the utility company and run itself at times when power is least expensive.

    Bread makers: what if a bread maker or ice cream maker could download recipes from the web? (There is an XML-based microformat for recipes.) Then making bread would be as simple as loading up the machine with ingredients; it would take care of everything else by itself, based on a recipe you’ve selected from an online library. For that matter, what about your stove and oven? Why get up to turn off the oven when the timer goes off — why can’t it know what you’re cooking and turn itself off at the right time? There’s also a recipe standard for beer (BeerML); I can imagine fully automated brewing. I’ve seen a fully automated Internet-enabled still (at a properly licensed distillery).
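    Turning a downloaded recipe into a machine program is mostly parsing. A sketch using Python’s standard XML library — the recipe schema below is invented for illustration and is not the actual recipe microformat or BeerML:

    ```python
    import xml.etree.ElementTree as ET

    SAMPLE = """<recipe name="basic white">
      <ingredient amount="500" unit="g">bread flour</ingredient>
      <ingredient amount="7" unit="g">yeast</ingredient>
      <step minutes="10">knead</step>
      <step minutes="60">rise</step>
      <step minutes="35">bake</step>
    </recipe>"""

    def parse_recipe(xml_text):
        """Turn a recipe document into the machine's program: ingredients
        to check against what's loaded, and timed steps to execute."""
        root = ET.fromstring(xml_text)
        ingredients = [(i.text, float(i.get("amount")), i.get("unit"))
                       for i in root.findall("ingredient")]
        steps = [(s.text, int(s.get("minutes"))) for s in root.findall("step")]
        return root.get("name"), ingredients, steps
    ```

    Once parsed, the machine checks the ingredient list against its hoppers and runs the steps in order; everything else is the recipe library’s problem.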

    What makes these ideas interesting isn’t the intelligence, it’s the connectivity. Furnaces newer than mine have microprocessor-based controllers that probably display error codes. I’m sure that modern dishwashers have microprocessors to regulate temperature and water usage; and while bread makers went out of fashion a while ago, my rather elderly microwave has a “sensor” mode that will automatically turn itself off when it thinks the food is cooked. I suspect it would be hard to find any modern device that doesn’t have at least one microprocessor. The magic starts when you add the ability to communicate over a network. Given that these devices already have processors, adding a simple network connection is trivial.

    None of these examples are staggeringly inventive. A dishwasher that can be turned on from remote locations and optimize energy consumption is almost dull. So is the Nest thermostat, if you think about what it does rather than what it means. A thermostat is supposed to be dull, and the smarter a thermostat is, the more you can ignore it. The Nest is most certainly a part of the future that has broken off and now belongs to the present. So, here’s the challenge: rather than think about what can’t be done or how silly it would be to have a networked refrigerator, let’s take what we already have and think about what we might be able to do if our things were part of an Internet of Things. What “Things” would you Internet-enable?
