
December 18 2013

Democratizing technology and the road to empowerment

Advancements in technology are making what was once reserved for highly educated scientists, engineers and developers accessible to — and affordable for — the mainstream. This democratization of technology and the empowerment it affords was an underlying thread through many of the stories at this year’s Business Innovation Factory (BIF) summit. From allowing hobbyists and makers to innovate and develop on an advanced level to enabling individuals to take control of their personal health data to using space suits to help children with cerebral palsy, technological advancements are beginning to empower — and enrich — at scale.

With the rise of quantified self, for example, people have begun amassing personal data based on their activities and behaviors. Some argue that QS doesn’t go quite far enough and that a more complete story can be told by incorporating emotional data, our sense of experience. While it’s empowering in many ways to be able to collect and control all this personal big data, what to do with this onslaught of information and how to process it remains a question for many.

Alexander Tsiaras, who founded theVisualMD, argued in his talk at BIF9 that “story gives a soul to the data,” and that it’s time to change the paradigm, to start using technology to create ecosystems to empower people to understand what’s going on inside their bodies as a result of their behaviors.

Using visualization and interactive media, personal big data — medical records, test results, lab reports, diagnoses, and exercise and eating habits, for instance — are deconstructed, as Tsiaras explained, to “demystify” the data: “The beauty of visualization is that it speaks to everyone,” he said. From stories to explain test results to stories to help patients visualize the processes going on inside their bodies when they eat particular foods or when they exercise, people are able to turn their personal big data into stories, whether to better understand a chronic condition or to understand how their behaviors play into prevention. “This is the most important thing,” Tsiaras argued, “the moment you take control, that empowerment is huge.”

Arguably, one of the most democratizing and empowering of technological innovations is 3D printing — innovators can now manufacture the products they conceive, even at scale. BIF storyteller Ping Fu emphasized the potential of the technology through a powerful personal story of how her past experiences led her to computer science — a breakthrough, she explained, that changed her life personally and professionally, leading her to co-found 3D printing and design company Geomagic. Fu defined innovation as “imagination applied” and shared examples of innovations in 3D printing, including the Smithsonian’s plan to scan and print artifacts from its collection (which can now be achieved by individuals at home), custom prosthetics designed as mirror images of actual limbs, and the digital preservation of UNESCO World Heritage sites. Fu stressed that the technology should not be viewed as a platform for printing tchotchkes, that real-world, useful products are being produced.

This argument was further supported by storyteller Easton LaChapelle, a 17-year-old high school student who has used 3D printing technology in coordination with advancements in (and some creativity with) engineering materials to create a robotic hand that’s wirelessly controlled by a robotic glove — complete with haptic feedback — and a 3D-printed brain-wave-controlled robotic arm.

Affordable access to 3D printing, LaChapelle said, was key to his ability to move forward with his designs, and he noted that 3D printing is a driving force for innovation: “I can design something in my room and hit print, and within an hour, it’s in front of me; that alone is really fascinating, that you’re able to design something and have it physically in front of you; it’s remarkable in today’s world — it’s a whole evolving technology.”

Advancements in technology aren’t only empowering LaChapelle to create innovative designs in robotics; they’re empowering him to help humanity, and in turn, empowering humanity. He explained during his story:

“When I was at the science fair in Colorado, I had the first generation of the arm there for a public viewing, and a 7-year-old girl came up to me. She had a prosthetic limb from the elbow to the fingertip, with one motion — open-close — and one sensor. That alone was $80,000. That was really the moment that touched my heart, the “ah-ha” moment, that I could take what I’m already doing, transfer it directly to prosthetics, and potentially make people’s lives better.”

The final iteration of the arm is completely 3D printed, making it lightweight — from fingertip to shoulder, it’s on track to weigh less than five pounds — and inexpensive, costing about $400 to produce. “The third generation arm is the final [iteration of the arm] and the point where it can change people’s lives,” said LaChapelle. He’s currently working on developing exoskeleton legs for a friend who was paralyzed in a car accident, technology that he’ll make available to anyone with paralysis, MS or any other condition that impairs movement: “I want to solve this. My approach is to give them something they can afford and something they can use easily.”

You can watch LaChapelle’s inspiring talk in the following video:

In a similar vein, storyteller Dava Newman, professor of Aeronautics and Astronautics and Engineering Systems at MIT, explained how her team uses advances in materials technology to develop exoskeleton space suits that address particular issues astronauts experience in space, such as bone density loss, and that increase mobility and flexibility (see the gravity loading countermeasure suit, which is also used to help children with cerebral palsy perform daily activities).

Suits are also designed with high-precision eGaIn sensors to help provide muscular protection and to measure hot spots and pressure while astronauts are training, in hopes of preventing shoulder injuries, for instance. The ultimate goal is developing the BioSuit, which uses electrospun materials, dielectric elastomers and shape memory alloys to provide a skin-tight, pressurized but flexible suit environment for astronauts. Newman stressed that the most important part of our work as scientists, designers, researchers, artists, mathematicians, etc., is to take care of one another:

“…200 miles, 400 kilometers — Boston to New York — that’s where we’re living in space now; it’s low Earth orbit, and it’s fantastic and it’s great, but it’s been 40 years since we’ve been to another planetary body. I think, with all my great students and all the great dreamers in the world, we’ll get to the Moon, we’ll get to Mars, and we’re going for the search of life. It’ll be humans and rovers and robots all working together. The scientific benefit will be great, but the most important thing is that we learn about ourselves — it’s in the reflection and thinking about who we are and who humanity is.”

As technology advances and empowers more and more people to create, build and produce ever more innovative products and experiences, a discussion has begun about the responsibility we have as engineers, scientists, designers, etc., to consider the implications and ramifications of our work, to use our designs for the benefit of humanity, and to push social progress in a positive direction. It’s clear from these and other storytellers at BIF9 that doing so results in a more enriching, empowering experience for everyone.

Robots will remain forever in the future

(Note: this post first appeared on Forbes; this lightly edited version is re-posted here with permission.)

We’ve watched the rising interest in robotics for the past few years. It may have started with the birth of the FIRST Robotics competitions, continued with iRobot’s Roomba, and more recently with Google’s driverless cars. But in the last few weeks, there has been a big change. Suddenly, everybody’s talking about robots and robotics.

It might have been Jeff Bezos’ remark about using autonomous drones to deliver products by air. Amazon Prime Air isn’t Amazon’s first venture into robotics: a year and a half ago, they bought Kiva Systems, which builds robots that Amazon uses in their massive warehouses. (Personally, I think package delivery by drone is unlikely for many, many reasons, but that’s another story, and certainly no reason for Amazon not to play with delivery in their labs.)

But what really lit the fire was Google’s acquisition of Boston Dynamics, a DARPA contractor that makes some of the most impressive mobile robots anywhere. It’s hard to watch their videos without falling in love with what their robots can do. Or becoming very scared. Or both. And, of course, Boston Dynamics isn’t a one-time buy. It’s the most recent in a series of eight robotics acquisitions, and I’d bet that it’s not the last in the series.

Google is clearly doing something big, but what? Unlike Jeff Bezos, Larry Page and Sergey Brin haven’t started talking about delivering packages with drones or anything like that. Neither has Andy Rubin, who is running the new robotics division. The NSA probably knows, to Google’s chagrin, but we won’t know until they’re ready to tell us. Google has launched a number of insanely ambitious “moon shot” projects recently; I suspect this is another.

Whatever is coming from Google, we’ll certainly see even greater integration of robots into everyday life. Those robots will quickly become so much a part of our lives that we’ll cease to think of them as robots; they’ll just be the things we live with. At O’Reilly’s Foo Camp in 2012, Jason Huggins, creator of an Angry Birds-playing bot, remarked that robots are always part of the future. Little bits of that future break off and become part of the present, but when that happens, those bits cease to be “robots.” In 1945, a modern dishwasher would have been a miracle, as exotic as the space-age appliances in The Jetsons. But now, it’s just a dishwasher, and we’re trying to think of ways to make it more intelligent and network-enabled. Dishwashers, refrigerators, vacuums, stoves: is the mythical Internet-enabled refrigerator that orders milk when you’re running low a robot? What about a voice-controlled baking machine, where you walk up and tell it what kind of bread you want? Will we think of these as robots?

I doubt it. Much has been made of Google’s autonomous vehicles. Impressive as they are, autonomous robots are nowhere near as interesting as assistive robots, robots that assist humans in some difficult task. Driving around town autonomously is one thing, but BMW already has automatic parallel parking. Do we call those “robotic cars”? What about anti-lock brakes and other forms of computer-assisted driving that have been around for years? A modern airliner essentially flies itself from one airport to another, but do we see a Boeing 777 as a “robot”? We prefer not to, perhaps because we cherish the illusion that a human pilot is doing the flying. Robots are everywhere already; we’ve just trained ourselves not to see them.

We can get some more ideas about what the future holds by thinking about some of Google’s statements in other contexts. Last April, Slate reported that Google was obsessed with building the Star Trek computer. It’s a wonderful article that contains real insight into the way Google thinks about technology. Search is all about context: it’s not about the two or three words you type into the browser; it’s about understanding what you’re looking for, understanding your language rather than an arcane query language. The Star Trek computer does that; it anticipates what Kirk wants and answers his questions, even if they’re ill-formed or ambiguous. Let’s assume that Google can build that kind of search engine. Once you have the Star Trek computer doing your searches, the next step is obvious: don’t just do a search; get me the stuff I want. Find me my keys. Put the groceries away. Water the plants. I don’t think robotic helpers like these are as far off as they seem; most of the technologies we need to build them already exist. And while it may take a supercomputer to recognize that a carton of eggs is a carton of eggs, that supercomputer is only an Internet connection away.

But will we recognize these devices as robots once they’ve been around for a year or two? Or will they be “the finder,” “the unpacker,” “the gardener,” while robots remain implausibly futuristic? The latter, I think. Garden-variety search, whether it’s Android’s or Siri’s, is an amazing application of artificial intelligence, but these days, it’s just something that phones do.

I have no doubt that Google’s robotics team is working on something amazing and mind-blowing. Should they succeed, and should that success become a product, though, whatever they do will almost certainly fade into the woodwork and become part of normal, everyday reality. And robots will remain forever in the future. We might have found Rosie, the Jetsons’ robotic maid, impressive. But the Jetsons didn’t.

December 10 2013

Podcast: news that reaches beyond the screen

Reporters, editors and designers are looking for new ways to interact with readers and with the physical world, drawing data in through sensors and expressing it through new immersive formats.

In this episode of the Radar podcast, recorded at News Foo Camp in Phoenix on November 10, Jenn and I talk with three people who are working on new modes of interaction.


For more on the intersection of software and the physical world, be sure to check out Solid, O’Reilly’s new conference program about the collision of real and virtual.

Subscribe to the O’Reilly Radar Podcast through iTunes, SoundCloud, or directly through our podcast’s RSS feed.

December 04 2013

Wearable computing and automation

In a recent conversation, I described my phone as “everything that Compaq marketing promised the iPAQ was going to be.” It was the first device I really carried around and used as an extension of my normal computing activities. Of course, everything I did on the iPAQ can be done much more easily on a smartphone these days, so my iPAQ sits in a closet, hoping that one day I might notice and run Linux on it.

In the decade and a half since the iPAQ hit the market, battery capacity has improved and power consumption has gone down for many types of computing devices. In the Wi-Fi arena, we’ve turned phones into sensors to track motion throughout public spaces, and, in essence, “outsourced” the sensor to individual customers.

Phones, however, are relatively large devices, and the I/O capabilities of a phone aren’t needed in most sensor operations. A smartphone today can measure motion and acceleration, and even position through GPS. In many cases, though, a display isn’t needed on the sensor itself, and the data to be collected might call for another type of sensor entirely. Many inexpensive sensors are available today to measure temperature, humidity, or even air quality. By moving the I/O from the sensor itself onto a centralized device, the battery power can be devoted almost entirely to collecting data.
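
To make that architecture concrete, here’s a minimal sketch of a display-less sensor node that spends its battery on sampling rather than I/O and hands everything else to a central hub. The hub address and the read_temperature() helper are stand-ins invented for illustration, not any particular product’s API:

```python
# Sketch of a headless sensor node: no screen, no buttons; all I/O is
# delegated to a central device that aggregates and displays readings.
import json
import time
import urllib.request

HUB_URL = "http://192.168.1.10:8000/readings"  # hypothetical central hub

def read_temperature():
    """Stand-in for a real sensor driver (I2C, 1-Wire, etc.)."""
    return 21.5

while True:
    reading = {"sensor": "temp-1", "celsius": read_temperature(), "ts": time.time()}
    req = urllib.request.Request(
        HUB_URL,
        data=json.dumps(reading).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # the hub worries about storage and display
    time.sleep(60)  # wake rarely: radios and screens, not sampling, drain batteries
```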

Miniature low-power sensors can be purpose-built for a specific application. Because I’m interested in wearable computing, I recently started using a Jawbone UP band, one of the many fitness sensors available that also tracks sleep. Years ago, I thought about enrolling in a study at the Stanford Sleep Clinic because I’d read about the work in a book and wanted to learn more about my own sleep cycle. The UP band is worn on a wrist and translates motion it senses into pedometer data and sleep quality data. Will an UP band give me the same quality of information as a world-leading sleep clinic? I doubt it, but it’s also way more affordable (and I get to use my own bed).

The UP is based around accelerometers that monitor motion in all three axes. A small button on the side is used to switch between the waking mode, in which the band collects pedometer data, and the sleeping mode, in which the band monitors the depth of sleep. It’s not necessary to take the band out of sleep mode in the morning, but it will not automatically go into sleep mode at night. (I did wonder if an ambient light sensor might help switch modes, but I didn’t ask Jawbone if that was in development.)
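
Jawbone doesn’t publish how the band scores sleep, but the general technique, actigraphy, is straightforward to sketch: chop the accelerometer stream into epochs and treat low-motion epochs as deeper sleep. The thresholds below are invented for illustration and are certainly not Jawbone’s:

```python
# Illustrative actigraphy-style scoring; not Jawbone's actual algorithm.
from statistics import pvariance

def score_sleep(samples, epoch_size=60):
    """samples: per-second motion magnitudes from an accelerometer.
    Returns one coarse label per epoch based on how much the wearer moved."""
    labels = []
    for i in range(0, len(samples) - epoch_size + 1, epoch_size):
        movement = pvariance(samples[i:i + epoch_size])
        if movement < 0.01:        # nearly still: call it deep sleep
            labels.append("deep")
        elif movement < 0.05:      # some stirring: shallow sleep
            labels.append("shallow")
        else:                      # lots of motion: probably awake
            labels.append("awake")
    return labels
```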

I was interested in the UP because it also integrates with If This Then That (IFTTT), which readers might recognize from a post I wrote on lighting automation. Because the UP tracks sleep patterns, it is naturally suited to work with my home lighting, triggering the lights to turn on in the morning; in fact, that’s exactly what I did. (You can access the IFTTT recipe here.) Since I had already set up the lights to be controlled by IFTTT, all I needed to do was create another trigger. When new sleep is logged with the UP band — which means that I am awake and have taken action to log the sleep — the power to the lights is turned on.
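
IFTTT hides the plumbing, but the rule itself is just a trigger/action pairing. If you wired the same thing by hand, it would reduce to a loop like this, where fetch_new_sleep_entries and switch_lights are hypothetical stand-ins for the Jawbone and lighting APIs that IFTTT calls on your behalf:

```python
# Hand-rolled equivalent of the IFTTT recipe: new sleep logged -> lights on.
import time

def watch_for_sleep(fetch_new_sleep_entries, switch_lights, poll_seconds=300):
    """Poll the sleep log; when a new entry appears, power up the lights."""
    seen_ids = set()
    while True:
        for entry in fetch_new_sleep_entries():   # trigger: new sleep is logged
            if entry["id"] not in seen_ids:
                seen_ids.add(entry["id"])
                switch_lights(on=True)            # action: turn the power on
        time.sleep(poll_seconds)
```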

[Figure ifttt-up-fig1: the IFTTT recipe]

I have the non-wireless version of the UP band, which synchronizes by plugging directly into the phone. To get data out of the band, I plug it in, and an app on the phone loads the data into the Jawbone service. The entire process takes a couple of seconds and two taps on the screen. It does mean that the trigger won’t fire until I connect the band to my phone in the morning, but I’m now practiced enough to accomplish that maneuver in the dark. (Jawbone has since released an improved version of the band, the UP24, that synchronizes data wirelessly. With the new model, I would move the band into the awake mode and the lights would turn on without needing to involve my phone at all.)

The second reason I was interested in the UP is that it’s easy to log the data it collects in a spreadsheet. Whenever sleep gets logged by the band, the details are saved in a Google Drive spreadsheet. In addition to logging time, the band is able to tell the difference between deep and shallow sleep, as well as the number of times you wake up. I recently attended a Wi-Fi Alliance meeting in Europe, and the data on sleep quality was helpful in monitoring how well I was adjusting to the new time zone.

[Figure ifttt-up-fig2: sleep data logged to a Google Drive spreadsheet]

What I most liked about spending time with the UP is that it shows the promise of all kinds of wearable sensors. Just as many functions came together to create today’s smartphone — a web device, a music player, and a general-purpose computer — there is a wearable device out there with good battery life and the ability both to control remote devices and to monitor physical activity. The Pebble was only the beginning.

As I go through the data in the future, I may do some of my own number crunching to see what other insights can be gleaned. And just for laughs, my favorite IFTTT recipe for use with UP is to warn somebody by e-mail if their sleep time is low.
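
Because the log lands in an ordinary spreadsheet, that recipe doesn’t need anything more exotic than a CSV export. A minimal sketch, assuming an hours_slept column and a local mail relay (the column name and addresses are my inventions):

```python
# Sketch: nag a friend by e-mail when the latest logged sleep runs short.
import csv
import smtplib
from email.message import EmailMessage

def warn_if_sleep_low(csv_path, threshold_hours=6.0):
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    hours = float(rows[-1]["hours_slept"])      # assumed column name
    if hours < threshold_hours:
        msg = EmailMessage()
        msg["Subject"] = f"Only {hours:.1f} hours of sleep logged"
        msg["From"] = "up-band@example.com"
        msg["To"] = "sleepyhead@example.com"
        msg.set_content("Last night came in under the threshold. Go to bed earlier.")
        with smtplib.SMTP("localhost") as server:  # assumes a local mail relay
            server.send_message(msg)

warn_if_sleep_low("sleep_log.csv")
```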

November 09 2013

A connected world is a better world. Right?

We are more connected now than ever:

  • You can chat with your kids when you’re on the road, so can a pedophile.
  • You can access your bank account in your pajamas, so can the RBN (the Russian Business Network).
  • Your healthcare data is always readily at hand, but not just to your hands.
  • You can tell your political representatives what you think; they can whip the “base” into a frenzy with what they want them to think.
  • You can talk to people all over the world and share ideas with people with different points of view, but you probably don’t.
  • You can easily organize against an unjust government (not just yours!), and they can watch you do it.
  • You can stay in touch with friends no matter where they live, and companies can co-opt you into influencing your friends’ purchases.
  • You can find information on any possible disease you suspect you might have, and your network and search provider will cheer for your recovery with pharmaceutical ads.
  • You can live a rich, fulfilling, expanded life experience outside the constraints of physical space, without Fourth Amendment protections (or even locks on your doors).
  • Everything is democratized, except for, it seems, our democracy.

Recently I was in New York City for our Strata conference and had the opportunity to host an Oxford-style debate. We debated the proposition that “A connected world is a better world.” In our preparations leading up to the debate, I asked our debaters not to hold back. I told them they could really help crystallize an understanding of the arguments by leaving nuance and subtlety at the door. I encouraged them to dive in, elbows out. I needn’t have. They would have anyway. These were real, not dramatic, passions on display. Take a look:

A spirited debate in a conference meeting room is a lot of fun, but I think this is a question of deadly importance. As the moderator, it wasn’t my role to take a position, but I can’t help but think about this question a lot. And I’m not a natural optimist.

Maybe it’s just that the magic of a connected world quickly becomes the new normal, an unremarkable part of our everyday experience. While I’m unthinkingly enjoying the connection with family and the access to knowledge afforded by connection, the pitfalls of connection loom large in my mind. They are the scaffolding of a dystopian future yet to be realized but terrifying in its possibility. It could be that our risk/utility curve for connectivity is bent, just like it is for money.

Or maybe I just think too much about historical analogies. We live in a time of increasingly extremist politics, which maybe isn’t caused by the filter bubbles we cocoon ourselves in but is most definitely amplified by them.

The last time our politics changed at scale was about 150 years ago at the nexus of migratory patterns and the rise of mass media. The industrial revolution moved agrarian workers into factories at massive scale. For the first time in history, large numbers of people were densely packed together in cities, and our “social graph” changed overnight. This newly concentrated network supported a different kind of politics — the mass movement — and all of the “isms” we talk about today were born in those new networks.

The arrival of mass media right on the heels of that concentration further amplified the signals projected into those connected webs, while effectively connecting them together into something even bigger: a national people. This was demonstrated convincingly by Speer’s Roman imagery, masterfully presented by Riefenstahl while the Führer’s speeches radiated from radio towers in real time across a continent.

I’m not suggesting that any new technology automatically spawns evil; I’m arguing that by changing the reach of our connections and the dynamics of our interactions, new technologies change our social physics. Imagine if the relative strengths of the strong nuclear force and the electromagnetic force suddenly changed throughout the universe. Materials science would be turned on its head as matter rearranged itself in an instant.

Fundamental changes in how we communicate and connect do that to us. They rearrange the matter of society in fundamental and unpredictable ways. Things once held together fly apart, or new things clump into new forms. We can be sure only of change, and change of that sort is disruptive regardless of what form it takes.

Coming back to the question for a moment: our panel argued hard (technically the cons won the debate), but the answer is most likely “there is no answer.”

Because, of course, a connected world is a better world — and, of course, it’s also worse. And the ways it’s better and worse aren’t the same kind of thing. They can’t be easily summed into a neatly netted out answer. It’s better and worse at the same time, in different ways.

Thomas Jefferson struggled with this conundrum while considering America’s entry into the industrial age. He valued the independence and “virtue” of our previously agrarian existence, but he recognized the value of England’s mechanization to both quality of life and national economic positioning. (See Doug Hill’s new book for a deeper discussion of Jefferson and industrialization.)

And, anyway, are choices like this even choices? Or does game theory make technological determinism a certainty? That which can be done, will be?

Where the pessimist in me raises its ugly head is in considering the trade-off between benefit and harm. What if the benefits are incremental but the harm is ultimately existential? We face something like that now with industrialization, as it runs its course and anthropogenic climate change becomes harder and harder to ignore. Industrialization makes life better for a lot of people, right up until it doesn’t? And then it kills them?

As I write this, the strongest typhoon ever to make landfall is coming ashore in the Philippines, and it’s not likely to be the last of its size.

What is the “global warming” of connection?

The connected world is a world that is both more democratic and more concentrated, at the same time. Which of these forces “wins” — and under what circumstances? Perhaps Iran’s Green Revolution couldn’t have happened without the network, but in the end, the State used the network even more effectively than the revolution.

Like our universe, does the connected world keep expanding with forces of democratization? Or does it collapse into concentrated plutocracy under those who have privileged positions on the network? We don’t even know what dark matter to weigh to answer the question.

November 05 2013

Podcast: the democratization of manufacturing

Manufacturing is hard, but it’s getting easier. In every stage of the manufacturing process (prototyping, small runs, large runs, marketing, fulfillment), cheap tools and service models have become available, dramatically decreasing the amount of capital required to start building something and the expense of revising and improving a product once it’s in production.

In this episode of the Radar podcast, we speak with Chris Anderson, CEO and co-founder of 3D Robotics; Nick Pinkston, a manufacturing expert who’s working to make building things easy for anyone; and Jie Qi, a student at the MIT Media Lab whose recent research has focused on the factories of Shenzhen.

Along the way we talk about the differences between Tesla’s auto plant and its previous incarnation as the NUMMI plant; the differences between on-shoring, re-shoring and near-shoring; and how the innovative energy of Kickstarter and the Maker movement can be brought to underprivileged populations.

Many of these topics will come up at Solid, O’Reilly’s new conference about the intersection of software and the physical world. Solid’s call for proposals is open through December 9. We’re planning a series of Solid meet-ups, plant tours, and books about the collision of real and virtual; if you’ve got an idea for something the series should explore, please reach out!

Subscribe to the O’Reilly Radar Podcast through iTunes, SoundCloud, or directly through our podcast’s RSS feed.

November 04 2013

Software, hardware, everywhere

Real and virtual are crashing together. On one side is hardware that acts like software: IP-addressable, controllable with JavaScript APIs, able to be stitched into loosely-coupled systems—the mashups of a new era. On the other is software that’s newly capable of dealing with the complex subtleties of the physical world—ingesting huge amounts of data, learning from it, and making decisions in real time.

The result is an entirely new medium that’s just beginning to emerge. We can see it in Ars Electronica Futurelab’s Spaxels, which use drones to render a three-dimensional pixel field; in Baxter, which layers emotive software onto an industrial robot so that anyone can operate it safely and efficiently; in OpenXC, which gives even hobbyist-level programmers access to the software in their cars; in SmartThings, which ties Web services to light switches.

The new medium is something broader than terms like “Internet of Things,” “Industrial Internet,” or “connected devices” suggest. It’s an entirely new discipline that’s being built by software developers, roboticists, manufacturers, hardware engineers, artists, and designers.

Ten years ago, building something as simple as a networked thermometer required some understanding of electrical engineering. Now it’s a Saturday-afternoon project for a beginner. It’s a shift we’ve already seen in programming, where procedural languages have become more powerful and communities have arisen to offer free help with programming problems. As the blending of hardware and software continues, the physical world will become democratized: the ranks of people who can address physical challenges from lots of different backgrounds will swell.
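
To see how small that Saturday-afternoon project has become, here’s a sketch of a networked thermometer using nothing but Python’s standard library; the read_celsius() stub is where a real sensor driver would go:

```python
# A networked thermometer in a few lines: HTTP in, JSON out.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_celsius():
    """Stub standing in for an actual sensor driver."""
    return 21.5

class ThermometerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"celsius": read_celsius()}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Any device on the network can now poll http://<host>:8000/ for a reading.
    HTTPServer(("0.0.0.0", 8000), ThermometerHandler).serve_forever()
```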

The outcome of all of this combining and broadening, I hope, will be a world that’s safer, cleaner, more efficient, and more accessible. It may also be a world that’s more intrusive, less private, and more vulnerable to ill-intentioned interference. That’s why it’s crucial that we develop a strong community from the new discipline.

Solid, which Joi Ito and I will present on May 21 and 22 next year, will bring members of the new discipline together to discuss this new medium at the blurred line between real and virtual. We’ll talk about design beyond the computer screen; software that understands and controls the physical world; new hardware tools that will become the building blocks of the connected world; frameworks for prototyping and manufacturing that make it possible for anyone to create physical devices; and anything else that touches both the concrete and abstract worlds.

Solid’s call for proposals is open to the public, as is the call for applications to the Solid Fellowships—a new program that comes with a stipend, a free pass to Solid, and help with travel expenses for students and independent innovators.

The business implications of the new discipline are just beginning to play out. Software companies are eyeing hardware as a way to extend their offerings into the physical world—think, for instance, of Google’s acquisition of Motorola and its work on a driverless car—and companies that build physical machines see software as a crucial component of their products. The physical world as a service, a business model that’s something like software as a service, promises to upend the way we buy and use machines, with huge implications for accessibility and efficiency. These types of service frameworks, along with new prototyping tools and open-source models, are making hardware design and manufacturing vastly easier.

A few interrelated concepts that I’ve been thinking about as we’ve sketched out the idea for Solid:

  • APIs for the physical world. Abstraction, modularity, and loosely-coupled services—the characteristics that make the Web accessible and robust—are coming to the physical world. Open-source libraries for sensors and microcontrollers are bringing easy-to-use and easy-to-integrate software interfaces to everything from weather stations to cars. Networked machines are defining a new physical graph, much like the Web’s information graph. These models are starting to completely reorder our physical environment. It’s becoming easier to trade off functionalities between hardware and software; expect the proportion of intelligence residing in software to increase over time.
  • Manufacturing made frictionless. Amazon’s EC2 made it possible to start writing and selling software with practically no capital investment. New manufacturing-as-a-service frameworks bring the same approach to building things, making factory work fast and capital-light. Development costs are plunging, and it’s becoming easier to serve niches with specialized hardware that’s designed for a single purpose. The pace of innovation in hardware is increasing as the field becomes easier for entrepreneurs to work in and financing becomes available through new platforms like Kickstarter. Companies are emerging now that will become the Amazon Web Services of manufacturing.
  • Software intelligence in the physical world. Machine learning and data-driven optimization have revolutionized the way that companies work with the Web, but the kind of sophisticated knowledge that Amazon and Netflix have accumulated has been elusive in the offline world. Hardware lets software reach beyond the computer screen to bring those kinds of intelligence to the concrete world, gathering data through networked sensors and exerting real-time control in order to optimize complicated systems. Many of the machines around us could become more efficient simply through intelligent control: a furnace can save oil when software, knowing that homeowners are away, turns down the thermostat; a car can save gas when Google Maps, polling its users’ smartphones, discovers a traffic jam and suggests an alternative route—the promise of software intelligence that works above the level of a single machine. The Internet stack now reaches all the way down to the phone in your pocket, the watch on your wrist, and the thermostat on your wall. (A minimal sketch of that kind of control loop follows this list.)
  • Every company is a software company. Software is becoming an essential component of big machines for both the builders and the users of those machines. Any company that owns big capital machines needs to get as much out of them as possible by optimizing their operation with software, and any company that builds machines must improve and extend them with layers of software in order to be competitive. As a result, a software startup with promising technology might just as easily be bought by a big industrial company as by a Silicon Valley software firm. This has important organizational, cultural, and competency impact.
  • Complex systems democratized. The physical world is becoming accessible to innovators at every level of expertise. Just as it’s possible to build a Web page with only a few hours’ learning, it’s becoming easier for anyone to build things, whether electronic or not. The result: realms like the urban environment that used to be under centralized control by governments and big companies are now open to innovation from anyone. New economic models and communities will emerge in the physical world just as they’ve emerged online in the last twenty years.
  • The physical world as a service. Anything from an Uber car to a railroad locomotive can be sold as a service, provided that it’s adequately instrumented and dispatched by intelligent software. Good data from the physical world brings about efficient markets, makes cheating difficult, and improves quality of service. And it will revolutionize business models in every industry as service guarantees replace straightforward equipment sales. Instead of just selling electricity, a utility could sell heating and cooling—promising to keep a homeowner’s house at 70 degrees year round. That sales model could improve efficiency and quality of life, creating an incentive for the utility to invest in more efficient equipment and letting it take advantage of economies of scale.
  • Design after the screen. Our interaction with software no longer needs to be mediated through a keyboard and screen. In the connected world, computers gather data through multiple inputs outside of human awareness and intuit our preferences. The software interface is now a dispersed collection of conventional computers, mobile phones, and embedded sensors, and it acts back onto the world through networked microcontrollers. Computing happens everywhere, and it’s aware of physical-world context.
  • Software replaces physical complexity. A home security system is no longer a closed network of motion sensors and door alarms; it’s software connected to generic sensors that decides when something is amiss. In 2009, Alon Halevy, Peter Norvig, and Fernando Pereira wrote that having lots and lots of data can be more valuable than having the most elegant model. In the connected world, having lots and lots of sensors attached to some clever software will start to win out over single-purpose systems.
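
As promised above, here’s what the furnace example boils down to in code: a control loop that puts the intelligence above the machine. Both helpers, home_occupied() and set_thermostat(), are hypothetical stand-ins for a presence service and a thermostat interface:

```python
# Sketch: software-level intelligence wrapped around a dumb furnace.
import time

AWAY_TEMP_C = 15.0   # let the house coast while nobody is home
HOME_TEMP_C = 20.0   # comfortable setpoint when someone is around

def run_thermostat(home_occupied, set_thermostat, poll_seconds=600):
    """The furnace just follows a setpoint; the decision-making lives here."""
    while True:
        set_thermostat(HOME_TEMP_C if home_occupied() else AWAY_TEMP_C)
        time.sleep(poll_seconds)
```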

These are some rough thoughts about an area that we’ll all spend the next few years trying to understand. This is an open discussion, and we welcome thoughts on it from anyone.

November 03 2013

Podcast: the Internet of Things should work like the Internet

At our OSCON conference this summer, Jon Bruner, Renee DiResta and I sat down with Alasdair Allan, a hardware hacker and O’Reilly author; Josh Marinacci, a researcher with Nokia; and Tony Santos, a user experience designer with Mozilla. Our discussion focused on the future of UI/UX design, from the perils of designing from the top down to declining diversity in washing machines to controlling your car from anywhere in the world.

Here are some highlights from our chat:

  • Alasdair’s Ignite talk on the bad design of UX in the Internet of Things: every widget, dial, and slider you add on is a delayed design decision that you’re putting onto the user. (1:55 mark)
  • Looking at startups working in the Internet of Things, design seems to be “pretty far down on the general level of importance.” Much of the innovation is happening on Kickstarter and is driven by hardware hackers, many of whom don’t have design experience — and products are often designed as ends in themselves, as opposed to parts of a connected ecosystem. “We’re not building an Internet of Things, we’re building a series of islands…we should be looking at systems.” (3:23)
  • A top-down approach to the Internet of Things isn’t going to work — large companies are developing proprietary systems of things designed to lock in users. (6:05)
  • Consumer inconvenience is becoming an issue — “We’re creating an experience where I have to buy seven things to track me… not to mention to track my house… at some point there’s going to be a tipping point where people say, ‘that’s enough!’” (8:30)
  • Anti-overload Internet of Things user experience — mini printers and bicycle barometers. “The Internet of Things has to work like the Internet.” (11:07)
  • Is the Internet of Things following suit with the declining diversity we’ve seen in the PC space? Apple has imposed a design quality onto other companies’ laptops…in the same vein, will we see a slowly declining diversity in washing machines, for instance? Or have we wrung out all the mechanical improvements that can possibly be made to a washing machine with current technology, opening the door for innovation through software enhancements? (17:30)
  • Tesla’s cars have a RESTful API; you can monitor and control the car from anywhere — “You can pop the sunroof while you’re doing 90 mph down the motorway, or worse than that, you can pop someone else’s sunroof.” Security holes are an increasing issue in the industrial Internet. (22:42)
  • “The whole point of the Internet of Things — and technology in general — should be to reduce the amount of friction in our lives.” (26:46)
  • Are we heading toward a world of zero devices, where the objects themselves are the interfaces? For the mainstream, it will depend on design and the speed of maturing technology. (30:53)
  • Sometimes the utility of a thing supersedes the poor user experience to become widely adopted — we’re looking at your history, WiFi. (33:19)
  • The speed of technology turnover is vastly increasing — in 100 years, platforms and services like Kickstarter will be viewed as major drivers of the second industrial revolution. “We’re really going back to the core of innovation — if you offer up an idea, there will be enough people out there who want that idea to build it and make it, and that will cause someone else to have another idea.” (34:08)


Subscribe to the O’Reilly Radar Podcast through iTunes, SoundCloud, or directly through our podcast’s RSS feed.
