February 14 2014

Four short links: 14 February 2014

  1. Bitcoin: Understanding and Assessing Potential Opportunities (Slideshare) — VC deck on Bitcoin market and opportunities, long-term and short-term. Interesting lens on the development and gaps.
  2. Queensland Police Map Crime Scenes with 3D Scanner (ComputerWorld) — can’t wait for the 3D printed merchandise from famous trials.
  3. Atheer Labs — An immersive 3D display, over a million apps, sub-mm 3D hand interaction, all in 75 grams.
  4. libcloud — Python library for interacting with many of the popular cloud service providers using a unified API.

February 13 2014

Four short links: 13 February 2014

  1. The Common Crawl WWW Ranking — open data, open methodology, behind an open ranking of the top sites on the web. Preprint paper available. (via Slashdot)
  2. Felton’s Sensors (Quartz) — inside the gadgets Nicholas Felton uses to quantify himself.
  3. Myo Armband (IEEE Spectrum) — armband input device with eight EMG (electromyography) muscle activity sensors along with a nine-axis inertial measurement unit (that’s three axes each for accelerometer, gyro, and magnetometer), meaning that you get forearm gesture sensing along with relative motion sensing (as opposed to absolute position). The EMG sensors pick up on the electrical potential generated by muscle cells, and with the Myo on your forearm, the sensors can read all of the muscles that control your fingers, letting them spy on finger position as well as grip strength.
  4. Bitcoin Exchanges Under Massive and Concerted Attack — he who lives by the network dies by the network. A DDoS attack is taking Bitcoin’s transaction malleability problem and applying it to many transactions in the network, simultaneously. “So as transactions are being created, malformed/parallel transactions are also being created so as to create a fog of confusion over the entire network, which then affects almost every single implementation out there,” he added. Antonopoulos went on to say that Blockchain.info’s implementation is not affected, but some exchanges have been affected – their internal accounting systems are gradually going out of sync with the network.

January 29 2014

Four short links: 29 January 2014

  1. Bounce Explorer — throwable sensor (video, CO2, etc) for first responders.
  2. Sintering Patent Expires Today — key patent expires, though there are others in the field. Sintering is where the printer fuses powder with a laser, which produces smooth surfaces and works for ceramics and other materials beyond plastic. The hope is that sintering printers will see the same massive growth that FDM (current tech) printers saw after the FDM patent expired 5 years ago.
  3. Internet is the Greatest Legal Facilitator of Inequality in Human History (The Atlantic) — hyperbole aside, this piece does a good job of outlining “why they hate us” and what the systemic challenges are.
  4. First Carbon Fiber 3D Printer Announced — $5000 price tag. Nice!

January 17 2014

Four short links: 17 January 2014

  1. Making Remote Work — The reality of a remote workplace is that the connections are largely artificial constructs. People can be very, very isolated. A person’s default behavior when they go into a funk is to avoid seeking out interactions, which is effectively the same as actively withdrawing in a remote work environment. It takes a tremendous effort to get on video chats, use our text based communication tools, or even call someone during a dark time. Very good to see this addressed in a post about remote work.
  2. Google Big Picture Group — public output from the visualization research group at Google.
  3. Using CMOS Sensors in a Cellphone for Gamma Detection and Classification (Arxiv) — another sense in your pocket. The CMOS camera found in many cellphones is sensitive to ionized electrons. Gamma rays penetrate into the phone and produce ionized electrons that are then detected by the camera. Thermal noise and other noise needs to be removed on the phone, which requires an algorithm that has relatively low memory and computational requirements. The continuous high-delta algorithm described fits those requirements. (via Medium)
  4. Affordable Arduino-Compatible Centimeter-Level GPS Accuracy (IndieGogo) — for less than $20. (via DIY Drones)

January 09 2014

Four short links: 9 January 2014

  1. Artificial Labour and Ubiquitous Interactive Machine Learning (Greg Borenstein) — in which design fiction, actual machine learning, legal discovery, and comics meet. One of the major themes to emerge in the 2H2K project is something we’ve taken to calling “artificial labor”. While we’re skeptical of the claims of artificial intelligence, we do imagine ever-more sophisticated forms of automation transforming the landscape of work and economics. Or, as John puts it, robots are Marxist.
  2. Clear Flexible Circuit on a Contact Lens (Smithsonian) — ends up about 1/60th as thick as a human hair, and is as flexible.
  3. Confide (GigaOm) — Enterprise SnapChat. A Sarbanes-Oxley Litigation Printer. It’s the Internet of Undiscoverable Things. Looking forward to Enterprise Omegle.
  4. FLIR One — thermal imaging in phone form factor, another sensor for your panopticon. (via DIY Drones)

January 08 2014

The emergence of the connected city

Photo: Millertime83

If the modern city is a symbol for randomness — even chaos — the city of the near future is shaping up along opposite metaphorical lines. The urban environment is evolving rapidly, and a model is emerging that is more efficient, more functional, more — connected, in a word.

This will affect how we work, commute, and spend our leisure time. It may well influence how we relate to one another, and how we think about the world. Certainly, our lives will be augmented: better public transportation systems, quicker responses from police and fire services, more efficient energy consumption. But there could also be dystopian impacts: dwindling privacy and imperiled personal data. We could even lose some of the ferment that makes large cities such compelling places to live; chaos is stressful, but it can also be stimulating.

It will come as no surprise that converging digital technologies are driving cities toward connectedness. When conjoined, ISM band transmitters, sensors, and smart phone apps form networks that can make cities pretty darn smart — and maybe more hygienic. This latter possibility, at least, is proposed by Samrat Saha of the DCI Marketing Group in Milwaukee. Saha suggests “crowdsourcing” municipal trash pick-up via BLE modules, proximity sensors and custom mobile device apps.

“My idea is a bit tongue in cheek, but I think it shows how we can gain real efficiencies in urban settings by gathering information and relaying it via the Cloud,” Saha says. “First, you deploy sensors in garbage cans. Each can provides a rough estimate of its fill level and communicates that to a BLE 112 Module.”


As pedestrians who have downloaded custom “garbage can” apps on their BLE-capable iPhone or Android devices pass by, continues Saha, the information is collected from the module and relayed to a Cloud-hosted service for action — garbage pick-up for brimming cans, in other words. The process will also allow planners to optimize trash can placement, redeploying receptacles from areas where need is minimal to more garbage-rich environs.

“It should also allow greater efficiency in determining pick-up schedules,” said Saha. “For example, in some areas regular scheduled pick-ups may be best. But managers may find it’s also a good idea to put some trash collectors on a roving basis to service cans when they’re full. That could work well for areas where there’s a great deal of traffic and cans fill up quickly but unpredictably — and conversely, in low-traffic areas, where regular pick-up isn’t necessary. Both situations would benefit from rapid-response flexibility.”
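
Saha's scheme reduces to a simple decision rule once the fill reports reach the cloud. The sketch below is a minimal illustration of that back-end logic, not anything from DCI Marketing; the thresholds, can IDs, and area names are invented for the example.

```python
# Minimal sketch of the cloud-side logic Saha describes: phones relay
# (can_id, area, fill_percent) readings, and the service decides which
# cans need a pickup now and which areas deserve a roving collector.
# All thresholds and data here are illustrative assumptions.

PICKUP_THRESHOLD = 80      # send a truck when a can reports >= 80% full
ROVING_THRESHOLD = 0.5     # put an area on roving service if half its cans run hot

def plan_pickups(readings):
    """readings: list of dicts like {"can": "can-17", "area": "riverwalk", "fill": 92}"""
    urgent = [r["can"] for r in readings if r["fill"] >= PICKUP_THRESHOLD]

    by_area = {}
    for r in readings:
        by_area.setdefault(r["area"], []).append(r["fill"])

    roving_areas = [
        area for area, fills in by_area.items()
        if sum(f >= PICKUP_THRESHOLD for f in fills) / len(fills) >= ROVING_THRESHOLD
    ]
    return {"pickup_now": urgent, "roving_service": roving_areas}

if __name__ == "__main__":
    sample = [
        {"can": "can-1", "area": "riverwalk", "fill": 92},
        {"can": "can-2", "area": "riverwalk", "fill": 85},
        {"can": "can-3", "area": "suburb", "fill": 20},
    ]
    print(plan_pickups(sample))
```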

Garbage can connectivity has larger implications than just, well, garbage. Brett Goldstein, the former Chief Data and Information Officer for the City of Chicago and a current lecturer at the University of Chicago, says city officials found clear patterns between damaged or missing garbage cans and rat problems.

“We found areas that showed an abnormal increase in missing or broken receptacles started getting rat outbreaks around seven days later,” Goldstein said. “That’s very valuable information. If you have sensors on enough garbage cans, you could get a temporal leading edge, allowing a response before there’s a problem. In urban planning, you want to emphasize prevention, not reaction.”

Such Cloud-based app-centric systems aren’t suited only for trash receptacles, of course. Companies such as Johnson Controls are now marketing apps for smart buildings — the base component for smart cities. (Johnson’s Metasys management system, for example, feeds data to its app-based Panoptix platform to maximize energy efficiency in buildings.) In short, instrumented cities already are emerging. Smart nodes — including augmented buildings, utilities and public service systems — are establishing connections with one another, like axon-linked neurons.

But Goldstein, who was best known in Chicago for putting tremendous quantities of the city’s data online for public access, emphasizes instrumented cities are still in their infancy, and that their successful development will depend on how well we “parent” them.

“I hesitate to refer to ‘Big Data,’ because I think it’s a terribly overused term,” Goldstein said. “But the fact remains that we can now capture huge amounts of urban data. So, to me, the biggest challenge is transitioning the fields — merging public policy with computer science into functional networks.”

There are other obstacles to the development of the intelligent city, of course. Among them: how do you incentivize enough people to download apps sufficient to achieve a functional connected system? Indeed, the human element could prove the biggest fly in the ointment. We may resist quantifying ourselves to such a degree, even for the sake of our cities.

On the other hand, the connected city exists to serve people, not the other way around, observes Drew Conway, senior advisor to the New York City Mayor’s Office of Data Analytics and founder of the data community support group DataGotham. People ultimately act in their self-interest, and if the connected city brings boons, people will accept and support it. But attention must be paid to unintended consequences, emphasizes Conway.

“I never forget that humanity is behind all those bits of data I consume,” says Conway. “Who does the data serve, after all? Human beings decided why and where to put out those sensors, so data is inherently biased — and I always keep the human element in mind. And ultimately, we have to look at the true impacts of employing that data.”

As an example, continues Conway, “say the data tells you that an illegal conversion has created a fire hazard in a low-income residential building. You move the residents out, thus avoiding potential loss of life. But now you have poor people out on the street with no place to go. There has to be follow-through. When we talk of connections, we must ensure that some of those connections are between city services and social services.”

Like many technocrats, Conway also is concerned about possible threats to individual rights posed by data collected in the name of the commonwealth.

“One of New York’s most popular programs is expanding free public WiFi,” he says. “It’s a great initiative, and it has a lot of support. But what if an agency decided it wanted access to weblog data from high-crime areas? What are the implications for people not involved in any criminal activity? We haven’t done a good job of articulating where the lines should be, and we need to have that debate. Connected cities are the future, but I’d welcome informed skepticism on their development. I don’t think the real issue is the technical limitations — it’s the quid pro quo involved in getting the data and applying it to services. It’s about the trade-offs.”


For more on the convergence of software and hardware, check out our Solid Conference.


January 06 2014

Four short links: 6 January 2014

  1. 4043-byte 8086 Emulator — manages to implement most of the hardware in a 1980s-era IBM PC using a few hundred fewer bits than the total number of transistors used to implement the original 8086 CPU. An entry in the obfuscated C contest.
  2. Hacking the CES Scavenger Hunt — At which point—now you have your own iBeacon hardware—you can just go ahead and set the UUID, Major and Minor numbers of your beacon to each of the CES scavenger hunt beacon identities in turn, and then bring your beacon into range of your cell phone, which should be running the CES mobile app. Once you’ve shown the app all of the beacons, you’ll have “finished” the scavenger hunt and can claim your prize. Of course doing that isn’t legal. It’s called fraud and will probably land you in serious trouble. iBeacons have great possibilities, but with great possibilities come easy hacks when they’re misused. (A sketch of the beacon payload format follows this list.)
  3. Filtering: Seven Principles — JP Rangaswami laying down some basic principles on which filters should be built. 1. Filters should be built such that they are selectable by subscriber, not publisher. I think the most basic principle is: 0. Customers should be able to run their own filters across the information you’re showing them.
  4. Tremor-Correcting Steadicam — brilliant use of technology. Sensors + microcontrollers + actuators = a genuinely better life. Beats figuring out better algorithms to pimp eyeballs to Brands You Love. (via BoingBoing)
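
For the curious, the "set the UUID, Major and Minor numbers" step in link 2 amounts to rewriting a few bytes of a BLE advertisement. The snippet below builds that payload as a rough sketch of the iBeacon format (Apple's 0x004C company ID, a 16-byte proximity UUID, and two 16-bit fields); the UUID and field values are placeholders, not the actual CES beacon identities.

```python
# Rough sketch of an iBeacon advertisement payload. The UUID, major, and
# minor here are placeholders -- the point is only that these three values
# are plain bytes in the packet, which is why a beacon is trivial to clone.
import uuid
import struct

def ibeacon_adv(proximity_uuid, major, minor, tx_power=-59):
    flags = bytes([0x02, 0x01, 0x06])                  # BLE flags AD structure
    body = (
        bytes([0x4C, 0x00])                            # Apple company ID (little-endian)
        + bytes([0x02, 0x15])                          # iBeacon type and payload length
        + uuid.UUID(proximity_uuid).bytes              # 16-byte proximity UUID
        + struct.pack(">HHb", major, minor, tx_power)  # major, minor, calibrated TX power
    )
    return flags + bytes([len(body) + 1, 0xFF]) + body  # manufacturer-specific AD structure

if __name__ == "__main__":
    adv = ibeacon_adv("e2c56db5-dffb-48d2-b060-d0f5a71096e0", major=1, minor=7)
    print(adv.hex())
```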

January 01 2014

Four short links: 1 January 2014

  1. WiTrack — tracks the 3D motion of a user from the radio signals reflected off her body. It works even if the person is occluded from the WiTrack device or in a different room. WiTrack does not require the user to carry any wireless device, yet its accuracy exceeds current RF localization systems, which require the user to hold a transceiver. It transmits wireless signals whose power is 100 times smaller than Wi-Fi and 1000 times smaller than cellphone transmissions.
  2. A Linux Christmas — Linux drives pretty much all of Amazon’s top-selling consumer electronics.
  3. Techno Panic Timeline — chart from Exposing the War on Fun showing the fears of technology from 1493 to the modern day.
  4. Best Paper Awards in CS Since 1996 (Jeff Huang) — fantastic resource for your holiday reading.

December 20 2013

Four short links: 20 December 2013

  1. A History of the Future in 100 Objects — is out! It’s design fiction, describing the future of technology in faux Wired-like product writeups. Amazon already beating the timeline.
  2. Projects and Priorities Without Managers (Ryan Carson) — love what he’s doing with Treehouse. Very Googley. The more I read about these low-touch systems, the more obviously important self-reporting is. It is vital that everyone posts daily updates on what they’re working on or this whole idea will fall down.
  3. Intellectual Ventures Patent Collection — astonishing collection, ready to be sliced and diced in Cambia’s Lens tool. See the accompanying blog post for charts, graphs, and explanation of where the data came from.
  4. Smokio Electronic Cigarette — the quantified cigarette (not yet announced) for measuring your (electronic) cigarette consumption and uploading the data (natch) to your smartphone. Soon your cigarette will have an IPv6 address, a bluetooth connection, and firmware to be pwned.

December 04 2013

Wearable computing and automation

In a recent conversation, I described my phone as “everything that Compaq marketing promised the iPAQ was going to be.” It was the first device I really carried around and used as an extension of my normal computing activities. Of course, everything I did on the iPAQ can be done much more easily on a smartphone these days, so my iPAQ sits in a closet, hoping that one day I might notice and run Linux on it.

In the decade and a half since the iPAQ hit the market, battery capacity has improved and power consumption has gone down for many types of computing devices. In the Wi-Fi arena, we’ve turned phones into sensors to track motion throughout public spaces, and, in essence, “outsourced” the sensor to individual customers.

Phones, however, are relatively large devices, and the I/O capabilities of the phone aren’t needed in most sensor operations. A smartphone today can measure motion and acceleration, and even position through GPS. However, in many cases, display isn’t needed on the sensor itself, and the data to be collected might need another type of sensor. Many inexpensive sensors are available today to measure temperature, humidity, or even air quality. By moving the I/O from the sensor itself onto a centralized device, the battery power can be devoted almost entirely to collecting data.

Miniature low-power sensors can be purpose-built for a specific application. Because I’m interested in wearable computing, I recently started using a Jawbone UP band, one of the many fitness sensors available that also tracks sleep. Years ago, I thought about enrolling in a study at the Stanford Sleep Clinic because I’d read about the work in a book and wanted to learn more about my own sleep cycle. The UP band is worn on a wrist and translates motion it senses into pedometer data and sleep quality data. Will an UP band give me the same quality of information as a world-leading sleep clinic? I doubt it, but it’s also way more affordable (and I get to use my own bed).

The UP is based around accelerometers that monitor motion in all three axes. A small button on the side is used to switch between the waking mode, in which the band collects pedometer data, and the sleeping mode, in which the band monitors the depth of sleep. It’s not necessary to take the band out of sleep mode in the morning, but it will not automatically go into sleep mode at night. (I did wonder if an ambient light sensor might help switch modes, but I didn’t ask Jawbone if that was in development.)

I was interested in the UP because it also integrates with If This/Then That (IFTTT), which readers might recognize from a post I wrote on lighting automation. Because the UP tracks sleep patterns, it is naturally suited to work with my home lighting, triggering the lights to turn on in the morning; in fact, that’s exactly what I did. (You can access the IFTTT recipe here.) Since I’ve already set up the lights to be controlled by IFTTT, all I needed to do was create another trigger. When new sleep is logged with the UP band — which means that I am awake and have taken action to log the sleep — the power to the lights is turned on.


I have the non-wireless version of the UP band, which synchronizes by plugging directly into the phone. To get data out of the band, I plug it in and an app on the phone loads the data into the Jawbone service. The entire process takes a couple of seconds and two taps on the screen. It does mean that the trigger won’t fire until I connect the band to my phone in the morning, but I’m now practiced enough to accomplish that maneuver in the dark. (Jawbone has since released an improved version of the band, the UP24, that synchronizes data wirelessly. With the new model, I would move the band into the awake mode and the lights would turn on without needing to involve my phone at all.)

The second reason I was interested in the UP is that it’s easy to log the data it collects in a spreadsheet. Whenever sleep gets logged by the band, the details are saved in a Google Drive spreadsheet. In addition to logging time, the band is able to tell the difference between deep and shallow sleep, as well as the number of times you wake up. I recently attended a Wi-Fi Alliance meeting in Europe, and the data on sleep quality was helpful in monitoring how well I was adjusting to the new time zone.


What I most liked about spending the time with UP is that it shows the promise available in all kinds of wearable sensors. Just as many functions came together to create today’s smartphone — a web device, music player, and general-purpose computer — there is a wearable device out there with good battery life that can control remote devices and monitor physical activity. The Pebble was only the beginning.

As I go through the data in the future, I may do some of my own number crunching to see what other insights can be gleaned. And just for laughs, my favorite IFTTT recipe for use with UP is to warn somebody by e-mail if their sleep time is low.
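
As a hedged example of what that number crunching might look like: assuming the IFTTT-populated spreadsheet is exported to CSV with columns along the lines of date, deep-sleep hours, light-sleep hours, and times woken (the column names are my assumption, not Jawbone's schema), a few lines of pandas will chart how sleep quality shifts across a trip.

```python
# Toy analysis of an exported UP sleep log. Column names are assumptions
# about the IFTTT spreadsheet, not an official Jawbone schema.
import pandas as pd

log = pd.read_csv("up_sleep_log.csv", parse_dates=["date"])
log["total_sleep"] = log["deep_sleep_hours"] + log["light_sleep_hours"]
log["deep_fraction"] = log["deep_sleep_hours"] / log["total_sleep"]

# 3-day rolling averages smooth out one-off bad nights (e.g., a red-eye flight).
summary = log.set_index("date")[["total_sleep", "deep_fraction", "times_woken"]].rolling("3D").mean()
print(summary.tail())
```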

November 04 2013

Software, hardware, everywhere

Real and virtual are crashing together. On one side is hardware that acts like software: IP-addressable, controllable with JavaScript APIs, able to be stitched into loosely-coupled systems—the mashups of a new era. On the other is software that’s newly capable of dealing with the complex subtleties of the physical world—ingesting huge amounts of data, learning from it, and making decisions in real time.

The result is an entirely new medium that’s just beginning to emerge. We can see it in Ars Electronica Futurelab’s Spaxels, which use drones to render a three-dimensional pixel field; in Baxter, which layers emotive software onto an industrial robot so that anyone can operate it safely and efficiently; in OpenXC, which gives even hobbyist-level programmers access to the software in their cars; in SmartThings, which ties Web services to light switches.

The new medium is something broader than terms like “Internet of Things,” “Industrial Internet,” or “connected devices” suggest. It’s an entirely new discipline that’s being built by software developers, roboticists, manufacturers, hardware engineers, artists, and designers.

Ten years ago, building something as simple as a networked thermometer required some understanding of electrical engineering. Now it’s a Saturday-afternoon project for a beginner. It’s a shift we’ve already seen in programming, where procedural languages have become more powerful and communities have arisen to offer free help with programming problems. As the blending of hardware and software continues, the physical world will become democratized: the ranks of people who can address physical challenges from lots of different backgrounds will swell.

The outcome of all of this combining and broadening, I hope, will be a world that’s safer, cleaner, more efficient, and more accessible. It may also be a world that’s more intrusive, less private, and more vulnerable to ill-intentioned interference. That’s why it’s crucial that we develop a strong community from the new discipline.

Solid, which Joi Ito and I will present on May 21 and 22 next year, will bring members of the new discipline together to discuss this new medium at the blurred line between real and virtual. We’ll talk about design beyond the computer screen; software that understands and controls the physical world; new hardware tools that will become the building blocks of the connected world; frameworks for prototyping and manufacturing that make it possible for anyone to create physical devices; and anything else that touches both the concrete and abstract worlds.

Solid’s call for proposals is open to the public, as is the call for applications to the Solid Fellowships—a new program that comes with a stipend, a free pass to Solid, and help with travel expenses for students and independent innovators.

The business implications of the new discipline are just beginning to play out. Software companies are eyeing hardware as a way to extend their offerings into the physical world—think, for instance, of Google’s acquisition of Motorola and its work on a driverless car—and companies that build physical machines see software as a crucial component of their products. The physical world as a service, a business model that’s something like software as a service, promises to upend the way we buy and use machines, with huge implications for accessibility and efficiency. These types of service frameworks, along with new prototyping tools and open-source models, are making hardware design and manufacturing vastly easier.

A few interrelated concepts that I’ve been thinking about as we’ve sketched out the idea for Solid:

  • APIs for the physical world. Abstraction, modularity, and loosely-coupled services—the characteristics that make the Web accessible and robust—are coming to the physical world. Open-source libraries for sensors and microcontrollers are bringing easy-to-use and easy-to-integrate software interfaces to everything from weather stations to cars. Networked machines are defining a new physical graph, much like the Web’s information graph. These models are starting to completely reorder our physical environment. It’s becoming easier to trade off functionalities between hardware and software; expect the proportion of intelligence residing in software to increase over time. (See the sketch after this list.)
  • Manufacturing made frictionless. Amazon’s EC2 made it possible to start writing and selling software with practically no capital investment. New manufacturing-as-a-service frameworks bring the same approach to building things, making factory work fast and capital-light. Development costs are plunging, and it’s becoming easier to serve niches with specialized hardware that’s designed for a single purpose. The pace of innovation in hardware is increasing as the field becomes easier for entrepreneurs to work in and financing becomes available through new platforms like Kickstarter. Companies are emerging now that will become the Amazon Web Services of manufacturing.
  • Software intelligence in the physical world. Machine learning and data-driven optimization have revolutionized the way that companies work with the Web, but the kind of sophisticated knowledge that Amazon and Netflix have accumulated has been elusive in the offline world. Hardware lets software reach beyond the computer screen to bring those kinds of intelligence to the concrete world, gathering data through networked sensors and exerting real-time control in order to optimize complicated systems. Many of the machines around us could become more efficient simply through intelligent control: a furnace can save oil when software, knowing that homeowners are away, turns down the thermostat; a car can save gas when Google Maps, polling its users’ smartphones, discovers a traffic jam and suggests an alternative route—the promise of software intelligence that works above the level of a single machine. The Internet stack now reaches all the way down to the phone in your pocket, the watch on your wrist, and the thermostat on your wall.
  • Every company is a software company. Software is becoming an essential component of big machines for both the builders and the users of those machines. Any company that owns big capital machines needs to get as much out of them as possible by optimizing their operation with software, and any company that builds machines must improve and extend them with layers of software in order to be competitive. As a result, a software startup with promising technology might just as easily be bought by a big industrial company as by a Silicon Valley software firm. This has important organizational, cultural, and competency impact.
  • Complex systems democratized. The physical world is becoming accessible to innovators at every level of expertise. Just as it’s possible to build a Web page with only a few hours’ learning, it’s becoming easier for anyone to build things, whether electronic or not. The result: realms like the urban environment that used to be under centralized control by governments and big companies are now open to innovation from anyone. New economic models and communities will emerge in the physical world just as they’ve emerged online in the last twenty years.
  • The physical world as a service. Anything from an Uber car to a railroad locomotive can be sold as a service, provided that it’s adequately instrumented and dispatched by intelligent software. Good data from the physical world brings about efficient markets, makes cheating difficult, and improves quality of service. And it will revolutionize business models in every industry as service guarantees replace straightforward equipment sales. Instead of just selling electricity, a utility could sell heating and cooling—promising to keep a homeowner’s house at 70 degrees year round. That sales model could improve efficiency and quality of life, bringing about incentive for the utility to invest in more efficient equipment and letting it take advantage of economies of scale.
  • Design after the screen. Our interaction with software no longer needs to be mediated through a keyboard and screen. In the connected world, computers gather data through multiple inputs outside of human awareness and intuit our preferences. The software interface is now a dispersed collection of conventional computers, mobile phones, and embedded sensors, and it acts back onto the world through networked microcontrollers. Computing happens everywhere, and it’s aware of physical-world context.
  • Software replaces physical complexity. A home security system is no longer a closed network of motion sensors and door alarms; it’s software connected to generic sensors that decides when something is amiss. In 2009, Alon Halevy, Peter Norvig, and Fernando Pereira wrote that having lots and lots of data can be more valuable than having the most elegant model. In the connected world, having lots and lots of sensors attached to some clever software will start to win out over single-purpose systems.
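
To make the "APIs for the physical world" idea concrete, here is a minimal sketch of a networked thermostat that exposes its state as JSON over HTTP using only the Python standard library. The device, port, and field names are invented for illustration; real products define their own interfaces. Once a device speaks HTTP and JSON like this, it can be composed into larger systems with the same tools used for web mashups.

```python
# Sketch of a "physical world API": a thermostat that reports and accepts
# its state as JSON over HTTP. Device, port, and field names are invented.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

state = {"target_f": 68.0, "current_f": 66.5, "mode": "heat"}

class ThermostatHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Report the current state so other services can read the device.
        body = json.dumps(state).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_PUT(self):
        # Accept a new setpoint, e.g. {"target_f": 62} from an "away" service.
        length = int(self.headers.get("Content-Length", 0))
        update = json.loads(self.rfile.read(length) or b"{}")
        if "target_f" in update:
            state["target_f"] = float(update["target_f"])
        self.do_GET()  # echo the new state back

if __name__ == "__main__":
    HTTPServer(("", 8080), ThermostatHandler).serve_forever()
```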

These are some rough thoughts about an area that we’ll all spend the next few years trying to understand. This is an open discussion, and we welcome thoughts on it from anyone.

July 05 2013

Context Aware Programming

I’m increasingly realizing that many of my gripes about applications these days are triggered by their failure to understand my context in the way that they can and should. For example:

  • Unruly apps on my Android phone, like Gmail or Twitter, update messages in the background as soon as I wake up my phone, slowing the phone to a crawl and making me wait to get to the app I really want. This is particularly irritating when I’m trying to place a phone call or write a text, but it can get in the way in the most surprising places. (It’s not clear to me if this is the application writers’ fault, or due to a fundamental flaw in the Android OS.)
  • Accuweather for Android, otherwise a great app, lets you set up multiple locations, which seems like it would be very handy for frequent travelers, but it inexplicably defaults to whichever location you set up first, regardless of where you are. Not only does it ignore the location sensor in the phone, it doesn’t even bother to remember the last location I chose.
  • The WMATA app (Washington Metro transit app) I just downloaded lets you specify and save up to twelve bus stops for which it will report next bus arrival times. Why on earth doesn’t it just detect what bus stop you are actually standing at?
  • And it’s not just mobile apps. Tweetdeck for Mac lets you schedule tweets for later in the day or on future dates, yet it defaults to the date that you last used the feature rather than today’s date! How frustrating it is to set the time of the tweet for the afternoon, only to be told “Cannot schedule a tweet for the past”, because you didn’t manually update the date to today!

In each of these cases, programmers seemingly have failed to understand that devices have senses, and that consulting those senses is the first step in making the application more intelligent. It’s as if a human, on awaking, blundered down to breakfast without opening his or her eyes!
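
The WMATA example is the clearest case: the fix is simply to consult the location sensor and pick the closest saved stop. A rough sketch of that logic follows; the stop list and coordinates are invented.

```python
# Sketch: instead of asking the user which of their saved stops they mean,
# consult the phone's location fix and pick the nearest one.
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def nearest_stop(current, saved_stops):
    """current: (lat, lon); saved_stops: {stop_name: (lat, lon)} from the user's list."""
    return min(saved_stops, key=lambda s: distance_m(*current, *saved_stops[s]))

if __name__ == "__main__":
    stops = {"16th & K St NW": (38.9026, -77.0363), "7th & H St NW": (38.8997, -77.0219)}
    print(nearest_stop((38.9020, -77.0355), stops))
```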

By contrast, consider how some modern apps have used context awareness to create brilliant, transformative user experiences:

  • Uber, recognizing both where you are and where the nearest driver is, gives you an estimated time of pickup, connects the two of you, and lets you track progress towards that pickup.
  • Square Register notices anyone running Square Wallet entering the store, and pops up their name, face, and stored payment credentials on the register, creating a delightful in-store checkout experience.
  • Apps like FourSquare and Yelp are like an augmentation that adds GPS as a human sixth sense, letting you find restaurants and other attractions nearby. Google Maps does all that and more. (Even Google Maps sometimes loses the plot, though. For example, yesterday afternoon, I was on my way to Mount Vernon. Despite the fact that I was in Virginia, a search unadorned with the state name gave me as a first result Mount Vernon WA, rather than Mount Vernon VA. I’ve never understood how an app that can, and does, suggest the correct street name nearby before I’ve finished typing the building number can sometimes go so wrong.)
  • Google Now, while still a work in progress, does an ambitious job of intuiting things you might want to know about your environment. It understands your schedule, your location, the weather, the time, and things you have asked Google to remember on your behalf. It sometimes suggests things that you don’t care about, but I’d far rather have that than an idiot application that requires me to use keystrokes or screen taps to tell the app things that my phone already knows.

Just as the switch from the command line to the GUI required new UI skills and sensibilities, mobile and sensor-based programming creates new opportunities to innovate, to surprise and delight the user, or, in failing to use the new capabilities, the opportunity to create frustration and anger.  The bar has been raised. Developers who fail to embrace context-aware programming will eventually be left behind.

May 08 2013

Four short links: 8 May 2013

  1. How to Build a Working Digital Computer Out of Paperclips (Evil Mad Scientist) — from a 1967 popular science book showing how to build everything from parts that you might find at a hardware store: items like paper clips, little light bulbs, thread spools, wire, screws, and switches (that can optionally be made from paper clips).
  2. Moloch (Github) — an open source, large scale IPv4 packet capturing (PCAP), indexing and database system with a simple web GUI.
  3. Offline Wikipedia Reader (Amazon) — genius, because what Wikipedia needed to be successful was to be read-only. (via BoingBoing)
  4. Storing and Publishing Sensor Data — rundown of apps and sites for sensor data. (via Pete Warden)

January 11 2013

Defining the industrial Internet

We’ve been collecting threads on what the industrial Internet means since last fall. More case studies, company profiles and interviews will follow, but here’s how I’m thinking about the framework of the industrial Internet concept. This will undoubtedly continue to evolve as I hear from more people who work in the area and from our brilliant readers.

The crucial feature of the industrial Internet is that it installs intelligence above the level of individual machines — enabling remote control, optimization at the level of the entire system, and sophisticated machine-learning algorithms that can work extremely accurately because they take into account vast quantities of data generated by large systems of machines as well as the external context of every individual machine. Additionally, it can link systems together end-to-end — for instance, integrating railroad routing systems with retailer inventory systems in order to anticipate deliveries accurately.

In other words, it’ll look a lot like the Internet — bringing industry into a new era of what my colleague Roger Magoulas calls “promiscuous connectivity.”

Optimization becomes more efficient as the size of the system being optimized grows (in theory). Your software can take into account lots of machines, learning from a much larger training set and then optimizing both within the machine and for the group of machines working together. Think of a wind farm. There are certain optimizations you need to make at the machine level: the turbine turns itself to face into the wind, the blades adjust themselves through every cycle in order to account for flex and compression, and the machine shuts down during periods of dangerously high wind.

System-wide optimization means that you can operate each turbine in a way that minimizes air disruption to other turbines (these things create wake, just like an airplane, that can disrupt the operation of nearby turbines). When you need to increase or decrease power output across the whole farm, you can do it across lots of machines in a way that minimizes wear (i.e., curtail each machine by 5% or cut off 5% of your machines, or something in between depending on differential output and the impact of different speeds on machine wear). And by gathering data from thousands of machines, you can develop highly-detailed optimization plans.
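
The curtailment trade-off in that last paragraph can be stated as a toy optimization: given a required reduction in farm output, choose per-turbine cuts that minimize total wear. The sketch below uses a made-up quadratic wear model just to show the shape of the problem; a real operator would fit that cost curve from fleet data.

```python
# Toy curtailment allocator: cut total farm output by `required_mw` while
# minimizing a quadratic wear penalty sum(k_i * cut_i^2). With that made-up
# cost model the optimum splits the cut in inverse proportion to each
# turbine's wear sensitivity k_i (a Lagrange-multiplier result).

def allocate_curtailment(wear_sensitivity, required_mw):
    """wear_sensitivity: {turbine_id: k_i}, higher k_i = wears faster when curtailed."""
    inv = {t: 1.0 / k for t, k in wear_sensitivity.items()}
    total_inv = sum(inv.values())
    return {t: required_mw * w / total_inv for t, w in inv.items()}

if __name__ == "__main__":
    farm = {"T1": 1.0, "T2": 2.0, "T3": 4.0}   # T3 wears fastest under curtailment
    cuts = allocate_curtailment(farm, required_mw=3.5)
    print({t: round(mw, 2) for t, mw in cuts.items()})  # T1 takes the biggest cut
```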

By tying machines together, the industrial Internet will encourage “platformization.” Cars have several control systems, and until very recently they’ve been linked by point-to-point connections: when you move the windshield-wiper lever, it actuates a switch that’s connected to a small PLC that operates the windshield wipers. The brake pedal is part of the chassis-control system, and it’s connected by cable or hydraulics to the brake pads, with an electronic assist somewhere in the middle. The navigation system and radio are part of the same telematics platform, but that platform is not linked to, say, the steering wheel.

The car as enabled by the industrial Internet will be a platform — a bus, in the computing sense — built by the car manufacturer, with other systems communicating with each other through the platform. The brake pedal is an actuator that sends a “brake” signal to the car’s brake controller. The navigation system is able to operate the steering wheel and has access to the same brake controller. Some of these systems will be driven by third-party-built apps that sit on top of the platform.
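
A crude way to picture the difference between point-to-point wiring and a platform is a software message bus: controls publish signals, controllers subscribe to them, and new apps can subscribe without rewiring anything. The sketch below is purely illustrative and does not correspond to any real vehicle network.

```python
# Toy message bus illustrating the "car as platform" idea: the brake pedal
# publishes a signal, and any controller (or third-party app) can subscribe.
from collections import defaultdict

class VehicleBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, **payload):
        for handler in self.subscribers[topic]:
            handler(**payload)

if __name__ == "__main__":
    bus = VehicleBus()
    bus.subscribe("brake", lambda force: print(f"brake controller: applying {force:.0%} braking"))
    bus.subscribe("brake", lambda force: print("telematics app: logging a hard stop") if force > 0.7 else None)
    bus.publish("brake", force=0.85)   # the pedal is just another publisher on the bus
```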

This will take some time to happen in cars because it takes 10 or 15 years to renew the American auto fleet, because cars are maintained by a vast network of independent mechanics that need change to happen slowly, and because car development works incrementally.

But it’s already happening in commercial aircraft, which often come from clean-sheet designs (as with the Boeing 787 and Airbus A350), and which are maintained under very different circumstances than passenger cars. In Bombardier’s forthcoming C-series midsize jet, for instance, the jet engines do nothing but propel the plane and generate electricity (they don’t generate hydraulic pressure or compress air for the cabin; these are handled by electrically-powered compressors). The plane acts as a giant hardware platform on which all sorts of other systems sit: the landing-gear switch communicates with the landing gear through the aircraft’s bus, rather than by direct connection to the landing gear’s PLC.

The security implications of this sort of integration — in contrast to effectively air-gapped isolation of systems — are obvious. The industrial Internet will need its own specially-developed security mechanisms, which I’ll look into in another post.

The industrial Internet makes it much easier to deploy and harvest data from sensors, which goes back to the system-wide intelligence point above. If you’re operating a wind farm, it’s useful to have wind-speed sensors distributed across the country in order to predict and anticipate wind speeds and directions. And because you’re operating machine-learning algorithms at the system-wide level, you’re able to work large-scale sensor datasets into your system-wide optimization.

That, in turn, will help the industrial Internet take in previously-uncaptured data that’s made newly useful. Venkatesh Prasad, from Ford, pointed out to me that the windshield wipers in your car are a sort of human-actuated rain API. When you turn on your wipers, you’re acting as a sensor — you see water on your windshield, in a quantity sufficient to cause you to want your wipers on, and you set your wipers to a level that’s appropriate to the amount of water on your windshield.

In isolation, all you’re doing is turning on your windshield wipers. But if your car is networked, then it can send a signal to a cloud-based rain-detection service that geocorrelates your car with nearby cars whose wipers are on and makes an assumption about the presence of rain in the area and its intensity. That service could then turn on wipers in other cars nearby or do more sophisticated things — anything from turning on their headlights to adjusting the assumptions that self-driving cars make about road adhesion.
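
As a sketch of what such a rain-detection service might do with those wiper signals, suppose each networked car periodically reports its position and wiper setting. Estimating local rain intensity can then be as simple as averaging the settings reported within a few kilometers; everything here, from the radius to the reporting format, is an assumption for illustration.

```python
# Toy rain inference from wiper reports: average the wiper setting (0 = off,
# 1 = intermittent, 2 = low, 3 = high) of cars within a radius of the query
# point. Reporting format and radius are assumptions for illustration.
from math import radians, sin, cos, asin, sqrt

def distance_km(a, b):
    (lat1, lon1), (lat2, lon2) = map(lambda p: (radians(p[0]), radians(p[1])), (a, b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def rain_level(query_point, reports, radius_km=3.0):
    """reports: list of ((lat, lon), wiper_setting) tuples from nearby cars."""
    nearby = [setting for point, setting in reports if distance_km(query_point, point) <= radius_km]
    return sum(nearby) / len(nearby) if nearby else None   # None = no data nearby

if __name__ == "__main__":
    reports = [((45.52, -122.68), 3), ((45.53, -122.67), 2), ((45.90, -122.30), 0)]
    print(rain_level((45.52, -122.67), reports))   # ~2.5: heavy rain likely near the query point
```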

This is an evolving conversation, and I want to hear from readers. What should be included in the definition of the industrial Internet? What examples define, for you, the boundaries of the field?


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.


January 08 2013

The inevitability of smart dust

it's not fog... it's smoke... by Guilherme Jófili, on Flickrit's not fog... it's smoke... by Guilherme Jófili, on FlickrI’ve put forward my opinion that desktop computing is dead on more than one occasion, and been soundly put in my place as a result almost every time. “Of course desktop computing isn’t dead — look at the analogy you’re drawing between the so called death of the mainframe and the death of the desktop. Mainframes aren’t dead, there are still plenty of them around!”

Well, yes, that’s arguable. But most people, everyday people, don’t know that. It doesn’t matter if the paradigm survives if it’s not culturally acknowledged. Mainframe computing lives on, buried behind the scenes, backstage. As a platform it performs well, in its own niche. No doubt desktop computing is destined to live on, but similarly behind the scenes, and it’s already fading into the background.

The desktop will increasingly belong to niche users. Developers need them, at least for now and for the foreseeable future. But despite the prevalent view in Silicon Valley, the world does not consist of developers. Designers need screen real estate, but buttons and the entire desktop paradigm are a hack; I can foresee the day when the computing designers use will not even vaguely resemble today’s desktop machines.

For the rest of the world? Computing will almost inevitably diffuse out into our environment. Today’s mobile devices are transition devices, artifacts of our stage of technology progress. They too will eventually fade into their own niche. Replacement technologies, or rather user interfaces, like Google’s Project Glass are already on the horizon, and that’s just the beginning.

People never wanted computers; they wanted what computers could do for them. Almost inevitably the amount computers can do for us on their own, behind our backs, is increasing. But to do that, they need data, and to get data they need sensors. So the diffusion of general purpose computing out into our environment is inevitable.

Everyday objects are already becoming smarter. But in 10 years’ time, every piece of clothing you own, every piece of jewelry, and every thing you carry with you will be measuring, weighing and calculating. In 10 years, the world — your world — will be full of sensors.

The sensors you carry with you may well generate more data every second, both for you and about you, than previous generations did about themselves during the course of their entire lives. We will be surrounded by a cloud of data. While the phrase “data exhaust” has already entered the lexicon, we’re still essentially at the banging-the-rocks-together stage. You haven’t seen anything yet …

The end point of this evolution is already clear: it’s called smart dust. General purpose computing, sensors, and wireless networking, all bundled up in millimeter-scale sensor motes drifting in the air currents, flecks of computing power, settling on your skin, ingested, will be monitoring you inside and out, sensing and reporting — both for you and about you.

Almost inevitably the amount of data that this sort of technology will generate will vastly exceed anything that can be filtered, and distilled, into a remote database. The phrase “data exhaust” will no longer be a figure of speech; it’ll be a literal statement. Your data will exist in a cloud, a halo of devices, tasked to provide you with sensor and computing support as you walk along, calculating constantly, consulting with each other, predicting, anticipating your needs. You’ll be surrounded by a web of distributed sensors and computing.

Makes desktop computing look sort of dull, doesn’t it?

Photo: it’s not fog… it’s smoke… by Guilherme Jófili, on Flickr


October 29 2012

Listening for tired machinery

Software is making its way into places where it hasn’t usually been before, like the cutting surfaces of very fast, ultra-precise machine tools.

A high-speed milling machine can run at 42,000 RPM as it fabricates high-quality machine components within tolerances of a few microns. Excessive wear in that environment can lead to a failure that ruins an expensive part, but it’s difficult to use physical means to detect wear on cutting surfaces: human operators can’t see it and detailed microscopic inspections are costly. The result is that many operators simply replace parts on a pre-determined schedule — every two months, perhaps — that ends up being overly conservative.

The researchers’ milling machine, shown with sensors near the cutting device. (Source: X. Li, M.J. Er, H. Ge, O. P. Gan, S. Huang, L.Y. Zhai, S. Linn, Amin J. Torabi, “Adaptive Network Fuzzy Inference System and Support Vector Machine Learning for Tool Wear Estimation in High Speed Milling Processes,” Proceedings of the 38th Annual Conference of the IEEE Industrial Electronics Society, pp. 2809-2814, 2012.)

Enter software: in a paper delivered to the IEEE’s Industrial Electronics Society in Montreal last Thursday*, a group of researchers from Singapore propose a way to use low-cost sensors along with machine learning algorithms to accurately predict wear on machine parts — a technique that could cut costs for manufacturers by lengthening the lifespan of machine parts while avoiding failures.

The group’s demonstration is a promising illustration of the industrial Internet, which promises to bring more intelligence to machines by linking them to networks and integrating them with sophisticated software. Techniques from areas like machine learning, which can be computationally intensive, can thus be available in monitoring parts as small and common as cutting surfaces in milling machines.

“This is a simple optimization problem,” says Meng Joo Er, a professor at the Nanyang Technological University and an author of the paper. “But you’re talking about a very expensive piece of equipment working on a very expensive product. We have to be very careful.”

The cutting tool, enlarged under a microscope, after four cuts (Source: ibid.)

Er and his colleagues outfitted a high-speed computer-controlled milling machine with a handful of common sensors: accelerometers to measure vibrations, an acoustic emission sensor to measure stress waves and a dynamometer to measure cutting forces. The researchers then used data from these sensors along with measurements of wear taken through a microscope as a training set for two machine-learning algorithms (one ANFIS model and one SVM model). The researchers managed to predict wear with better than 90% accuracy using the ANFIS method and 85% using the SVM method.
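
The paper's models are beyond the scope of a blog post, but the overall workflow (sensor-derived features in, microscope-measured wear out, a supervised regression in between) is easy to sketch. The snippet below runs scikit-learn's support vector regression on synthetic data purely to show the shape of that pipeline; it is not the ANFIS or SVM configuration from the paper.

```python
# Sketch of the tool-wear workflow: features derived from vibration/force/
# acoustic sensors, wear measured under a microscope as the label, and a
# support vector regressor fit to predict wear. All data here is synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 200
features = rng.normal(size=(n, 3))                 # e.g., RMS vibration, peak force, AE energy
wear = 0.5 * features[:, 0] + 0.3 * features[:, 1] ** 2 + rng.normal(scale=0.05, size=n)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(features[:150], wear[:150])              # train on early cuts
print("R^2 on held-out cuts:", round(model.score(features[150:], wear[150:]), 3))
```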

The same surface showing wear (Source: ibid.)

That’s accurate enough to find its way into application although, Er notes, industrial users will likely combine this sort of method with conventional fallbacks, and it will take longer to make its way into very-high-stakes fields like aviation. Computation time for running the trained model — now on the order of 25 seconds for the more accurate model and half a second for the less accurate — will also need to come down, and engineers will need to find ways to integrate these types of models into the industrial control systems that are ubiquitous in automated manufacturing.

Nevertheless, this is a good lens for peering into the future of connected machines: generate lots of data, let software swallow it up, and optimize away.

*Available for a fee from IEEE: X. Li, M.J. Er, H. Ge, O. P. Gan, S. Huang, L.Y. Zhai, S. Linn, Amin J. Torabi, “Adaptive Network Fuzzy Inference System and Support Vector Machine Learning for Tool Wear Estimation in High Speed Milling Processes,” Proceedings of the 38th Annual Conference of the IEEE Industrial Electronics Society, pp. 2809-2814, 2012.


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.

October 04 2012

Four short links: 4 October 2012

  1. As We May Think (Vannevar Bush) — incredibly prescient piece he wrote for The Atlantic in 1945.
  2. Transparency and Topic Models (YouTube) — a talk from DataGotham 2012, by Hanna Wallach. She uses latent Dirichlet allocation topic models to mine text data in declassified documents where the metadata are useless. She’s working on predicting classification durations (AWESOME!). (via Matt Biddulph)
  3. Slippy Map of the Ancient World — this. is. so. cool!
  4. Technology in the NFL — X2IMPACT’s Concussion Management System (CMS) is a great example of this trend. CMS, when combined with a digital mouth guard, also made by X2, enables coaches to see head impact data in real-time and assess concussions by monitoring the accelerometers in a player’s mouth guard. That data helps teams to decide whether to keep a player on the field or take them off for their own safety. Insert referee joke here. (A toy sketch of the impact math follows this list.)
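
As a back-of-the-envelope illustration of what mouth-guard accelerometers make possible, the sketch below computes the peak resultant acceleration of an impact and flags it against a configurable threshold. The 80 g figure is only a placeholder; real concussion-risk thresholds are debated and X2's criteria are not public.

```python
# Toy impact screen: compute peak resultant acceleration from a 3-axis
# accelerometer trace and flag hits above a configurable threshold.
# The 80 g threshold is a placeholder, not X2's actual criterion.
from math import sqrt

def peak_g(samples):
    """samples: list of (ax, ay, az) readings in g."""
    return max(sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples)

def flag_impact(samples, threshold_g=80.0):
    peak = peak_g(samples)
    return {"peak_g": round(peak, 1), "evaluate_player": peak >= threshold_g}

if __name__ == "__main__":
    hit = [(5, 2, 1), (60, 40, 20), (30, 10, 5)]   # made-up trace of one impact
    print(flag_impact(hit))                        # peak ~74.8 g: below the toy threshold
```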

August 14 2012

The complexity of designing for everywhere

In her new book The Mobile Frontier, author Rachel Hinman (@Hinman) says the mobile design space is a wide-open frontier, much like space exploration or the Wild West, where people have room to “explore and invent new and more human ways for people to interact with information.”

In the following interview, Hinman talks about the changing landscape of computing — GUIs becoming NUIs — and delves into the future of mobile and how designers and users alike will make the journey.

What is mobile’s biggest strength? What about it is creating a new frontier?

Rachel Hinman: Humans have two legs, making us inherently mobile beings. Yet for the last 50 years, we’ve all settled into a computing landscape that assumes a static context of use. Mobile’s biggest strength is that it maps to this inherent human characteristic to be mobile.

The static, PC computing landscape is known and understood. Mobile is a frontier because there’s still much we don’t understand and much yet to be discovered. There are lots of breakthroughs in mobile yet to come, making it an exciting place to be for those who can stomach the uncertainty and ambiguity.

You talk about “mobile context” in your book. What does that involve and why is it important?

Rachel Hinman: Most designers have been steeped in a tradition of creating experiences with few context considerations, though they may not realize it. Books, websites, software programs, and even menus for interactive televisions share an implicit and often overlooked commonality: use occurs in relatively static and predictable environments. In contrast, most mobile experiences are situated in highly dynamic and unpredictable environments. Issues of context tend to blindside most designers new to mobile.

Compelling mobile experiences share a common characteristic — they are empathetic to the constraints of the mobile context. Underneath all the hoopla that mobile folks make about the importance of context is the recognition of a skill that everyone interested in this medium must develop: both empathy and curiosity for the complexity of designing for everywhere. It’s not a skill most people grow overnight, but rather something we learn through trial and error. And like any skill, the learning never stops.

How do you see the ethics and privacy issues surrounding sensors playing out?

Rachel Hinman: I think what’s interesting about the privacy issue is that it’s not a technical or UX “problem” as much as it is a cultural/social/ethical issue. Information has always had value. Before the widespread acceptance of the Internet, we lived in a world where information wasn’t as accessible, and there was a much higher level of information symmetry. The Internet has changed that and is continuing to change that.

Now, people recognize their information has value and some (not all) are willing to exchange that information if they’ll receive value in return. I think experiences that get called out for privacy violation problems are often built by companies that don’t have a handle on how this issue is evolving, companies that are still living in a time when users’ relationships to information were less transparent. That kind of thoughtlessness just doesn’t fly anymore.

I interviewed Alex Rainert, the head of product for Foursquare, and a quote I distinctly remember related to this very issue. He said:

“It seems weird to think that in our lifetimes, we had computers in our homes that were not connected to a network, but I can vividly remember that. But that’s something my daughter will never experience. I think a similar change will happen with some of the information sharing questions that we have today.”

I think he’s right. I think it’s easy when we talk about privacy and information sharing to believe and act as if people’s sensibilities will forever be the way they are now. This childlike instinct has its charms, but it’s usually wrong and particularly dangerous for designers and people creating user experiences. People who think deeply about the built world necessarily must view these issues as fungible and ever evolving, not fixed.

I think Alex said it best in the interview when he said:

“I think the important thing to remember is that some problems are human problems. They’re problems a computer can’t solve. I’m definitely not one of those people who says stuff like, ‘We think phones will know what you want to do before you want to do it.’ I think there’s a real danger to over rely on the algorithm to solve human problems. I think it’s finding the right balance of how could you leverage the technology to help improve someone’s experience, but not expect that you’re going to be able to wholeheartedly hand everything over to a computer to solve.”

In your book, you talk about a paradigm shift. What is the Mobile NUI Paradigm and what sort of paradigm shift is underway?

Rachel Hinman: Paradigm shifts happen when enough people recognize that the underlying values, beliefs, and ideas any given paradigm supports should be changed or are no longer valid. While GUI, WYSIWYG, files, hierarchical storage, and the very metaphor of a desktop were brilliant inventions, they were created before the PC was ubiquitous, email was essentially universal, and the World Wide Web became commonplace. For all its strengths, the desktop paradigm is a static one, and the world is longing for a mobile paradigm. We’ve reached the edges of what GUI can do.

Just as the Apple Macintosh, released in 1984, ushered in the age of the graphical user interface, Apple’s iPhone was a hero product that signaled the natural evolution toward the next wave of user interfaces. The fast and steady uptake of mobile touchscreen devices in all shapes and sizes since 2007 indicates a fundamental change is afoot. A natural UI (NUI) evolution has started. GUIs will be supplanted by NUIs in the not-so-distant future.

While “NUI domination” may seem inevitable, today we are situated in a strange GUI/NUI chasm. While there are similarities and overlap between graphical user interfaces and natural user interfaces, there are obvious differences in the characteristics and design principles of each. What makes a GUI experience successful is very different from the attributes that make a NUI experience successful. This is where much of the design confusion comes into play for both designers of NUIs and users of NUIs. We’re still stuck in a valley between the two paradigms. Just as we look back today on the first GUIs with nostalgia for their simplicity, the NUI interfaces we see today in mobile devices and tablets are larval examples of what NUIs will grow to become. NUIs are still new — the design details and conventions are still being figured out.

Who is doing mobile right at this point? Who should we watch going forward?

Rachel Hinman: There are a couple of categories of people who I think are doing mobile right. The first are those who have figured out how to create experiences that can shapeshift across multiple devices and contexts. They’re the folks who have figured out how to separate content from form and allow content to be a fluid design material. Examples include Flipboard and the Windows 8 platform, as well as content providers like the Scripps network, which handles interactive content for TV networks like Food Network and HGTV.

Another category is novel mobile apps that are pushing the boundaries of mobile user experience in interesting ways. I’m a big fan of Foursquare because it combines social networks with a user’s sense of place to deliver a unique mobile experience. An app called Clear is pushing the boundaries of gesture-based UIs in an interesting way. It’s nice to see voice coming into its own with features like Siri. Then there are apps like Draw Something; it’s an experience that is so simple, yet so compelling — addictive even.

There’s also tons of interesting mobile work being done in places like Africa — in some ways even more cutting-edge than the U.S. or Europe because of the cultural implications of the work. Frontline SMS, Ushahidi and M-Pesa are shining examples of mobile technology that’s making a huge impact.

What does mobile look like in 10 years?

Rachel Hinman: I think the biggest change that will happen in the next 10 years is that we likely won’t even differentiate a computing experience as being “mobile.” Instead, we will assume all computing experiences are mobile. I predict in the not-so-distant future, we will reflect on the desktop computing experience much in the same way my parents reflect on punch card computing systems or telephones with long cords that used to hang on kitchen walls. What seems novel and new now will be the standard sooner than we can probably imagine.

This interview was edited and condensed.

The Mobile Frontier — This book looks at how invention in the mobile space requires casting off anchors and conventions and jumping head first into a new and unfamiliar design space.
