
February 26 2014

Hurdles to the Internet of Things prove more social than technical

Last Saturday’s IoT Festival at MIT became a meeting ground for people working to connect the physical world to the Internet. Embedded systems developers, security experts, data scientists, and artists all joined in this event. Although it was called a festival, it had a typical conference format with speakers, slides, and question periods. Hallway discussions were intense.

However you define the Internet of Things (O’Reilly has its own take on it, in our Solid blog site and conference), a lot stands in the way of its promise. And these hurdles are more social than technical.


Participants eye O’Reilly books (Credit: IoT Festival)

Some of the social discussions we have to have before we get the Internet of Things rolling are:

  • What effects will all this data collection, and the injection of intelligence into devices, have on privacy and personal autonomy? And how much of these precious values are we willing to risk to reap the IoT’s potential for improving life, saving resources, and lowering costs?

  • Another set of trade-offs involves competing technical goals. For instance, power consumption can constrain features, and security measures can interfere with latency and speed. What is the priority for each situation in which we are deploying devices? How do we make choices based on our ultimate goals for these things?

  • How do we persuade manufacturers to build standard communication protocols into everyday objects? For those manufacturers using proprietary protocols and keeping the data generated by the objects on private servers, how can we offer them business models that make sharing data more appealing than hoarding it?

  • What data do we really want? Some industries are already drowning in data. (Others, such as health care, have huge amounts of potential data locked up in inaccessible silos.) We have to decide what we need from data and collect just what’s useful. Collection and analysis will probably be iterative: we’ll collect some data to see what we need, then go back and instrument things to collect different data.

  • How much can we trust the IoT? We all fly on planes that depend heavily on sensors, feedback, and automated controls. Will we trust similar controls to keep self-driving vehicles from colliding on the highway at 65 miles per hour? How much can we take humans out of the loop?

  • Similarly, because hubs in the IoT are collecting data and influencing outcomes at the edges, they require a trust relationship among the edges, and between the edges and the entity who is collecting and analyzing the data.

  • What do we need from government? Scads of data that is central to the IoT–people always cite the satellite GIS information as an example–comes from government sources. How much do we ask the government for help, how much do we ask it to get out of the way of private innovators, and how much can private innovators feed back to government to aid its own efforts to make our lives better?

  • It’s certain that IoT will displace workers, but likely that it will also create more employment opportunities. How can we retrain workers and move them to the new opportunities without too much disruption to their lives? Will the machine serve the man, or the other way around?

We already have a lot of the technology we need, but it has to be pulled together. It must be commercially feasible at mass-production scale and robust in real-life environments, with all their unpredictability. Many problems need to be solved at what Jim Gettys (famous for his work on the X Window System, OLPC, and bufferbloat) called “layer 8 of the Internet”: politics.


Large conference hall. I am in striped olive green shirt at right side of photo near rear (Credit: IoT Festival)

The conference’s audience was well-balanced in terms of age and included a good number of women, although still outnumbered by men. Attendance was about 175 and most of the conference was simulcast. Videos should be posted soon at the agenda site. Most impressive to me was the relatively small attrition that occurred during the long day.

Isis3D, a sponsor, set up a booth where their impressive 3D printer ratcheted out large and very finely detailed plastic artifacts. Texas Instruments’ University Program organized a number of demonstrations and presentations, including a large-scale giveaway of LaunchPads with their partner, Anaren.


Isis3D demos their 3D printer (Credit: IoT Festival)

Does the IoT hold water?

One application of the IoT we could all applaud is reducing the use of water, pesticide, and fertilizer in agriculture. This application also illustrates some of the challenges the IoT faces. AgSmarts installs sensors that can check temperature, water saturation, and other factors to report which fields need more or less of a given resource. The service could be extended in two ways:

  • Pull in more environmental data from available sources besides the AgSmarts devices. For instance, Dr. Shawana Johnson of Global Marketing Insight suggested that satellite photos (we’re back to that oft-cited GIS data) could show which fields are dry.

  • Close the loop by running all the data through software that adjusts the delivery of water or fertilizer with little or no human intervention. This is a big leap in sophistication, reviving my earlier question about trusting the IoT. Closing the loop would require real-time analysis of data from hundreds of locations. It’s worth noting that AgSmarts offers a page about Analytics, but all it says right now is “Coming Soon.”
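
To make the idea of closing the loop concrete, here is a minimal sketch of the kind of control logic such a system might run. Everything in it is hypothetical: the function names and thresholds are placeholders rather than AgSmarts’ API, and a real controller would also fold in weather forecasts, rate limits, and a human override.

    # Hypothetical sketch of "closing the loop" on field data. The function
    # names and thresholds are invented placeholders, not a real vendor API.

    FIELD_TARGET_MOISTURE = 0.35   # volumetric water content we want to hold
    DEADBAND = 0.05                # avoid chattering the valves on small swings

    def read_soil_moisture(field_id):
        """Placeholder for a sensor reading (0.0 = bone dry, 1.0 = saturated)."""
        return 0.28

    def set_valve(field_id, open_valve):
        """Placeholder for an actuator command to the irrigation valve."""
        print(f"field {field_id}: valve {'OPEN' if open_valve else 'CLOSED'}")

    def control_step(field_ids):
        """One pass of a simple bang-bang controller with a deadband."""
        for field_id in field_ids:
            moisture = read_soil_moisture(field_id)
            if moisture < FIELD_TARGET_MOISTURE - DEADBAND:
                set_valve(field_id, True)    # too dry: irrigate
            elif moisture > FIELD_TARGET_MOISTURE + DEADBAND:
                set_valve(field_id, False)   # wet enough: stop watering

    if __name__ == "__main__":
        control_step(["north-40", "creek-side"])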

Lights, camera, action

A couple other interesting aspects of the IoT are responsive luminaires (light fixtures) and unmanned aerial vehicles, which can track things on the ground through cameras and other sensors.

Consumer products to control indoor lights are already available. Lighting expert John Luciani reported that outdoor lights are usually coordinated through the Zigbee wireless protocol. Some applications for control over outdoor lighting include:

  • Allowing police in their cruisers to brighten the lights during an emergency.

  • Increasing power to LEDs gradually over time, to compensate for their natural dimming.

  • Identifying power supplies that are nearing the end of their life, because LEDs can last much longer than power supplies.

Lights don’t suffer from several of the major constraints that developers struggle with when using many IoT devices. First, because luminaires need a lot of power to cast their light, they always have a power supply ample enough to handle their radio communications and other computing needs. Second, there is plenty of room for a radio antenna. Finally, recent ANSI standards have allowed a light fixture to contain more control wires.

Thomas Almholt of Texas Instruments reported that Texas electric utilities have been installing smart meters. Theoretically, these could help customers and utilities save money and reduce usage, because rates change every 15 minutes and most customers have no window on these changes. But the biggest benefit they discovered from the smart meters was quite unexpected: teams came out to houses when the meters started to act in bizarre ways, and prevented a number of fires from being started by the rogue meters.

A presentation on UAVs and other drones (not all are air-borne) highlighted uses for journalism, law enforcement, and environmental tracking, along with the risks of putting too much power in the hands of those inside or outside of government who possess drones.

MIT’s SENSEable City Lab, as an example of a positive use, measures water quality in the nearby Charles River through drones, producing data at a much more fine-grained level than could be measured before, both spatially and temporally.

Kade Crockford of the Massachusetts ACLU laid out (through a video simulation as scary as it was amusing) the ways UAVs could invade city life and engage in creepy tracking. Drones can go more places than other surveillance technologies (such as helicopters) and can detect our movements without us detecting the drones. The chilling effects of having these robots virtually breathe down our necks may outweigh the actual abuses perpetrated by governments or other actors. This is an illustration of my earlier point about trust between the center and the edges. (Crockford led a discussion on privacy in the conference, but I can’t report on it because I felt I needed to attend a different concurrent session.)

Protocols such as the 802.15 family and Zigbee permit communication among devices, but few of these networks have the processing power to analyze the data they produce and take action on their own. To store their data, base useful decisions on it, and allow humans to view it, we need to bring their data onto the larger Internet.

One tempting solution is to use the cellular network in which companies have invested so much. But there are several challenges to doing so.

  • Cell phone charges can build up quickly.

  • Because of the shortage of phone numbers, an embedded device must use a special mobile subscriber ISDN number (MSISDN), which comes with limitations.

  • To cross the distance to cell towers, radios on the devices need to use much more power than they need to communicate over local wireless options such as WiFi and Bluetooth.

A better option, if the environment permits, is to place a wireless router near a cluster of devices. The devices can use a low-power local area network to communicate with the router, which is connected to the Internet by cable or fiber.

When a hub needs to contact a device to send it a command, several options allow the device to avoid wasting power keeping its radio on. The device can wake up regularly to poll the router and get the command in the router’s response. Alternatively, the router can wake the device when necessary.
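
As a rough illustration of the polling approach, here is a minimal device-side sketch. The function names and the command format are invented; a real device would use a deep-sleep timer and whatever protocol its hub defines.

    # A minimal sketch of the polling pattern described above, with made-up
    # function names. The device sleeps with its radio off, wakes on a schedule,
    # asks the hub for any queued command, and goes back to sleep.

    import time

    POLL_INTERVAL_S = 30  # longer intervals save more power but add latency

    def radio_on():  print("radio on")
    def radio_off(): print("radio off")

    def poll_hub():
        """Placeholder for one request to the hub; returns a command or None."""
        return None  # e.g. {"op": "set_reporting_interval", "seconds": 60}

    def handle(command):
        print("executing", command)

    def run():
        # This loop would run for the life of the device; call it from the
        # device's main entry point.
        while True:
            radio_on()
            command = poll_hub()         # a command piggybacks on the poll response
            if command is not None:
                handle(command)
            radio_off()
            time.sleep(POLL_INTERVAL_S)  # stand-in for a deep-sleep timer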

Government and the IoT platform

One theme coming out of State of the Union addresses and other White House communications is the role of innovation in raising living standards. As I’ve mentioned, there was a bit of discussion at the conference about jobs and what IoT might do to them. Job creation is one of the four goals of the SmartAmerica Challenge run by the Presidential Innovation Fellows program. The other three are saving lives, creating new business opportunities, and improving the economy.

I was quite disappointed that climate change and other environmental protections were left off the SmartAmerica list. Although I applaud the four goals, I noticed that each pleases a powerful lobbying constituency (the health industry, chambers of commerce, businesses, and unions). The environment is the world’s most urgent challenge, and one that millions of people care about, but big money isn’t behind the cause.

Two people presenting the SmartAmerica Challenge suggested that most of the technologies enabling IoT have been developed already. What we need are open and easy-to-use architectures, and open standards that turn each layer of the system into a platform on which others are free to innovate. The call for open standards–which can also create open markets–also came from Bill Curtis of ARM.

Another leading developer in the computer field, Bob Frankston, advised us to “look for resources, not solutions.” I take this to mean we can create flexible hardware and software infrastructure that many innovators can use as platforms for small, nimble solutions. Frankston’s philosophy, as he put it to me later, is “removing the barriers to rapid evolution and experimentation, thus removing the need for elaborate scenarios.”


Bob Frankston speaking (Credit: IoT Festival)

Dr. Johnson claimed that government agencies can’t share a lot of the applications of their data because of security concerns, and predicted that each time a tier of government data is published, it will be greeted by an explosion of innovation from the private sector.

However, government can play another useful role. One speaker pointed out that it can guide the industry to use standards by requiring those standards in its own procurements.

Standards we should all learn

Both Almholt and Curtis laid out protocol stacks for the IoT, with a lot of overlap. Both agreed that many Internet protocols in widespread use were inappropriate for the IoT. For instance, connections on the IoT tend to be too noisy and unreliable for TCP to work well. Another complication is that an 802.15.4 packet has a maximum size of 127 bytes, while an IPv6 header (IPv6 being the addressing system of choice for mobile devices) takes up at least 40 of those bytes. Clever reuse of data between packets can reduce this overhead.
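
A quick back-of-the-envelope budget shows why that overhead matters. The MAC overhead and the compressed-header size below are assumptions (both vary with addressing modes, link-layer security, and how much the compressor can elide), but the general shape holds: uncompressed IPv6 and UDP headers eat more than a third of the frame.

    # Rough payload budget for one 802.15.4 frame. The MAC overhead and the
    # 6LoWPAN compression figure are assumptions; they vary with configuration.

    FRAME_MAX    = 127  # bytes, 802.15.4 maximum frame size
    MAC_OVERHEAD = 25   # bytes, rough worst case for MAC header, security, FCS
    IPV6_HEADER  = 40   # bytes, uncompressed
    UDP_HEADER   = 8    # bytes, uncompressed
    COMPRESSED   = 6    # bytes, roughly what 6LoWPAN header compression can
                        # achieve for IPv6 + UDP in favorable cases (an assumption)

    uncompressed_payload = FRAME_MAX - MAC_OVERHEAD - IPV6_HEADER - UDP_HEADER
    compressed_payload   = FRAME_MAX - MAC_OVERHEAD - COMPRESSED

    print(f"payload without compression: {uncompressed_payload} bytes")   # 54
    print(f"payload with header compression: {compressed_payload} bytes") # 96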

Curtis said that, because most Internet standards are too complex for the constrained devices in the IoT, these devices tend to run proprietary protocols, creating data silos. He also called for devices that use less than 10 milliwatts of power. Almholt said that devices will start harvesting electricity from light, vibration, and thermal sources, thus doing without batteries altogether and running for long periods without intervention.

Some of the protocols that show promise for the IoT include:

  • 6LoWPAN or successors for network-layer transmission.

  • The Constrained Application Protocol (CoAP) for RESTful message exchange. By running over UDP instead of TCP and using binary headers instead of ASCII, this protocol achieves most of the benefits of HTTP but is more appropriate for small devices. (A small header-packing sketch follows this list.)

  • The Time Synchronized Mesh Protocol (TSMP) for coordinating data transmissions in an environment where devices contend for wireless bandwidth, thus letting devices shut down their radios and save power when they are not transmitting.

  • RPL for defining and changing routes among devices. This protocol sacrifices speed for robustness.

  • DTLS, a variant of the TLS security standard that runs over UDP.
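
As promised above, here is a small sketch of why CoAP’s binary framing stays compact. It packs only the 4-byte fixed header defined in RFC 7252 (a real request would also carry a token and options such as Uri-Path) and compares it with a bare ASCII HTTP request; it is an illustration, not a usable CoAP client.

    # Packing CoAP's 4-byte fixed header (RFC 7252) by hand, for size
    # comparison only. Not a full CoAP implementation.

    import struct

    COAP_VERSION = 1
    TYPE_CONFIRMABLE = 0
    CODE_GET = 0x01          # 0.01 GET
    message_id = 0x1234
    token = b""              # zero-length token for simplicity

    first_byte = (COAP_VERSION << 6) | (TYPE_CONFIRMABLE << 4) | len(token)
    header = struct.pack("!BBH", first_byte, CODE_GET, message_id)

    http_request = b"GET /sensors/temperature HTTP/1.1\r\nHost: device.local\r\n\r\n"

    print(len(header), "bytes of CoAP fixed header")      # 4
    print(len(http_request), "bytes for a bare-bones HTTP GET")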

Adoption of these standards would lead to a communication chasm between the protocol stack used within the LAN and the one used with the larger outside world. Therefore, edge devices would be devoted to translating between the two networks. One speaker suggested that many devices will not be provided with IP addresses, for security reasons.

Challenges that remain

Research with impacts on the IoT is flourishing. I’ll finish this article with a potpourri of problems and how people are addressing them:

  • Gettys reported on the outrageous state of device security. In Brazil, an enormous botnet has been found running just on the consumer hubs that people install in their homes and businesses for Internet access. He recommended that manufacturers adopt OpenWRT firmware to protect devices, and that technically knowledgeable individuals install it themselves where possible.

  • Although talking over the Internet is a big advance for devices, talking to each other over local networks at the radio layer (using, for instance, the 802.15.4 standards) is more efficient and enables more rapid responses to the events they monitor. Unfortunately, according to Almholt, interoperability is a decade away.

  • Mobility can cause us to lose context. A recent day-to-day example involves the 911 emergency call system, which is supposed to report the caller’s location to the responder. When people gave up their landlines and called 911 from cell phones, it became much harder to pinpoint where they were. The same thing happens in a factory or hospital when a staffer logs in from a mobile device.

Whether we want the IoT to turn on devices while we sit on the couch (not necessarily an indulgence for the lazy, but a way to let the elderly and disabled live more independently) or help solve the world’s most pressing problems in energy and water management, we need to take the societal and technological steps to solve these problems. Then let the fun begin.


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter — and to learn more about the Solid Conference coming to San Francisco in May, visit the Solid website.

January 11 2013

Defining the industrial Internet

We’ve been collecting threads on what the industrial Internet means since last fall. More case studies, company profiles and interviews will follow, but here’s how I’m thinking about the framework of the industrial Internet concept. This will undoubtedly continue to evolve as I hear from more people who work in the area and from our brilliant readers.

The crucial feature of the industrial Internet is that it installs intelligence above the level of individual machines — enabling remote control, optimization at the level of the entire system, and sophisticated machine-learning algorithms that can work extremely accurately because they take into account vast quantities of data generated by large systems of machines as well as the external context of every individual machine. Additionally, it can link systems together end-to-end — for instance, integrating railroad routing systems with retailer inventory systems in order to anticipate deliveries accurately.

In other words, it’ll look a lot like the Internet — bringing industry into a new era of what my colleague Roger Magoulas calls “promiscuous connectivity.”

Optimization becomes more efficient as the size of the system being optimized grows (in theory). Your software can take into account lots of machines, learning from a much larger training set and then optimizing both within the machine and for the group of machines working together. Think of a wind farm. There are certain optimizations you need to make at the machine level: the turbine turns itself to face into the wind, the blades adjust themselves through every cycle in order to account for flex and compression, and the machine shuts down during periods of dangerously high wind.

System-wide optimization means that you can operate each turbine in a way that minimizes air disruption to other turbines (these things create wake, just like an airplane, that can disrupt the operation of nearby turbines). When you need to increase or decrease power output across the whole farm, you can do it across lots of machines in a way that minimizes wear (i.e., curtail each machine by 5% or cut off 5% of your machines, or something in between depending on differential output and the impact of different speeds on machine wear). And by gathering data from thousands of machines, you can develop highly-detailed optimization plans.
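
Here is a toy illustration of that curtailment choice, with made-up turbine numbers and no real wear model: trim every machine proportionally, or shut whole machines down and accept the overshoot. A real optimizer would weigh differential output and wear curves, which this sketch ignores.

    # Toy comparison of two ways to cut farm output by 5%. The fleet numbers
    # are invented for illustration.

    turbine_output_kw = [2000, 1800, 2100, 1950, 2050]   # hypothetical fleet
    target_reduction = 0.05 * sum(turbine_output_kw)

    # Option A: curtail every machine proportionally
    proportional = [0.05 * p for p in turbine_output_kw]

    # Option B: shut down whole machines, largest first, until the target is met
    shutdown, remaining = [], target_reduction
    for power in sorted(turbine_output_kw, reverse=True):
        if remaining <= 0:
            break
        shutdown.append(power)
        remaining -= power

    print("proportional cuts (kW):", proportional)
    print("machines to stop (kW):", shutdown, "overshoot (kW):", -remaining)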

By tying machines together, the industrial Internet will encourage “platformization.” Cars have several control systems, and until very recently they’ve been linked by point-to-point connections: when you move the windshield-wiper lever, it actuates a switch that’s connected to a small PLC that operates the windshield wipers. The brake pedal is part of the chassis-control system, and it’s connected by cable or hydraulics to the brake pads, with an electronic assist somewhere in the middle. The navigation system and radio are part of the same telematics platform, but that platform is not linked to, say, the steering wheel.

The car as enabled by the industrial Internet will be a platform — a bus, in the computing sense — built by the car manufacturer, with other systems communicating with each other through the platform. The brake pedal is an actuator that sends a “brake” signal to the car’s brake controller. The navigation system is able to operate the steering wheel and has access to the same brake controller. Some of these systems will be driven by third-party-built apps that sit on top of the platform.

This will take some time to happen in cars because it takes 10 or 15 years to renew the American auto fleet, because cars are maintained by a vast network of independent mechanics that need change to happen slowly, and because car development works incrementally.

But it’s already happening in commercial aircraft, which often come from clean-sheet designs (as with the Boeing 787 and Airbus A350), and which are maintained under very different circumstances than passenger cars. In Bombardier’s forthcoming C-series midsize jet, for instance, the jet engines do nothing but propel the plane and generate electricity (they don’t generate hydraulic pressure or compress air for the cabin; these are handled by electrically-powered compressors). The plane acts as a giant hardware platform on which all sorts of other systems sit: the landing-gear switch communicates with the landing gear through the aircraft’s bus, rather than by direct connection to the landing gear’s PLC.

The security implications of this sort of integration — in contrast to effectively air-gapped isolation of systems — are obvious. The industrial Internet will need its own specially-developed security mechanisms, which I’ll look into in another post.

The industrial Internet makes it much easier to deploy and harvest data from sensors, which goes back to the system-wide intelligence point above. If you’re operating a wind farm, it’s useful to have wind-speed sensors distributed across the country in order to predict and anticipate wind speeds and directions. And because you’re operating machine-learning algorithms at the system-wide level, you’re able to work large-scale sensor datasets into your system-wide optimization.

That, in turn, will help the industrial Internet take in previously-uncaptured data that’s made newly useful. Venkatesh Prasad, from Ford, pointed out to me that the windshield wipers in your car are a sort of human-actuated rain API. When you turn on your wipers, you’re acting as a sensor — you see water on your windshield, in a quantity sufficient to cause you to want your wipers on, and you set your wipers to a level that’s appropriate to the amount of water on your windshield.

In isolation, all you’re doing is turning on your windshield wipers. But if your car is networked, then it can send a signal to a cloud-based rain-detection service that geocorrelates your car with nearby cars whose wipers are on and makes an assumption about the presence of rain in the area and its intensity. That service could then turn on wipers in other cars nearby or do more sophisticated things — anything from turning on their headlights to adjusting the assumptions that self-driving cars make about road adhesion.
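
A hypothetical sketch of that service’s core logic might look like the following. The message fields, distance cutoff, and voting rule are all invented for illustration; nothing here reflects an actual Ford or telematics API.

    # Invented sketch of a cloud rain-detection service that geocorrelates
    # wiper reports from nearby cars. All fields and thresholds are made up.

    from math import cos, radians, sqrt

    def approx_km(a, b):
        """Rough distance between two (lat, lon) points, fine at city scale."""
        dlat = (a[0] - b[0]) * 111.0
        dlon = (a[1] - b[1]) * 111.0 * cos(radians(a[0]))
        return sqrt(dlat**2 + dlon**2)

    def infer_rain(reports, here, radius_km=2.0):
        """reports: list of dicts like {"pos": (lat, lon), "wiper_speed": 0..3}."""
        nearby = [r for r in reports if approx_km(r["pos"], here) <= radius_km]
        if not nearby:
            return None  # no evidence either way
        wet = [r for r in nearby if r["wiper_speed"] > 0]
        fraction_wet = len(wet) / len(nearby)
        intensity = max((r["wiper_speed"] for r in wet), default=0)
        return {"raining": fraction_wet > 0.5, "intensity": intensity}

    reports = [
        {"pos": (42.36, -71.06), "wiper_speed": 2},
        {"pos": (42.37, -71.05), "wiper_speed": 1},
        {"pos": (42.50, -71.30), "wiper_speed": 0},
    ]
    print(infer_rain(reports, here=(42.36, -71.06)))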

This is an evolving conversation, and I want to hear from readers. What should be included in the definition of the industrial Internet? What examples define, for you, the boundaries of the field?


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.


September 05 2012

The future of medicine relies on massive collection of real-life data

Health care costs rise as doctors try batches of treatments that don’t work in search of one that does. Meanwhile, drug companies spend billions on developing each drug and increasingly end up with nothing to show for their pains. This is the alarming state of medical science today. Shahid Shah, device developer and system integrator, sees a different paradigm emerging. In this interview at the Open Source convention, Shah talks about how technologies and new ways of working can open up medical research.

Shah will be speaking at Strata Rx in October.

Highlights from the full video interview include:

  • Medical science will come unstuck from the clinical trials it has relied on for a couple hundred years, and use data collected in the field [Discussed at the 0:30 mark]
  • Failing fast in science [Discussed at the 2:38 mark]

  • Why and how patients will endorse the system [Discussed at the 3:00 mark]

  • Online patient communities instigating research [Discussed at the 3:55 mark]

  • Consumerization of health care [Discussed at the 5:15 mark]

  • The pharmaceutical company of the future: how research will get faster [Discussed at the 6:00 mark]

  • Medical device integration to preserve critical data [Discussed at the 7:20 mark]

You can view the entire conversation in the following video:

Strata Rx — Strata Rx, being held Oct. 16-17 in San Francisco, is the first conference to bring data science to the urgent issues confronting healthcare.

Save 20% on registration with the code RADAR20

July 25 2012

Democratizing data, and other notes from the Open Source convention

There has been enormous talk over the past few years of open data and what it can do for society, but proponents have largely come to admit: data is not democratizing in itself. This topic is hotly debated, and a nice summary of the viewpoints is available in this PDF containing articles by noted experts. At the Open Source convention last week, I thought a lot about the democratizing potential of data and how it could be realized.

Who benefits from data sets

At a high level, large businesses and other well-funded organizations have three natural advantages over the general public in the exploitation of data sets:

  • The resources to gather the data
  • The resources to do the necessary programming to crunch and interpret the data
  • The resources to act on the results

These advantages will probably always exist, but data can be useful to the public too. We have some tricks that can compensate for each of the large institutions’ advantages:

  • Crowdsourcing can create data sets that can help everybody, including the formation of new businesses. OpenStreetMap, a SaaS project based on open source software, is a superb example. Its maps have been built up through years of contributions by people trying to support their communities, and it supports interesting features missing from proprietary map projects, such as tools for laying out bike paths.

  • Data-crunching is where developers, like those at the Open Source convention, come in. Working at non-profits, during weekend challenges, or just on impulse, they can code up the algorithms that make sense of data sets, along with apps that visualize the results and accept interaction from people with less technical training.

  • Some apps, such as reports of neighborhood crime or available health facilities, can benefit individuals, but we can really drive progress by joining together in community organizations or other associations that use the data. I saw a fantastic presentation by high school students in the Boston area who demonstrated a correlation between funding for summer jobs programs and lowered homicides in the inner city–and they won more funding from the Massachusetts legislature with that presentation.

Health care track

This year was the third in which the Open Source convention offered a health care track. IT plays a growing role in health care, but a lot of the established institutions are creaking forward slowly, encountering lots of organizational and cultural barriers to making good use of computers. This year our presentations clustered around areas where innovation is most robust: personal tracking, using data behind the scenes to improve care, and international development.

Open source coders Fred Trotter and David Neary gave popular talks about running and tracking one's achievements. Bob Evans discussed a project named PACO that he started at Google to track productivity by individuals and in groups of people who come together for mutual support, while Anne Wright and Candide Kemmler described the ambitious BodyTrack project. Jason Levitt presented the science of sitting (and how to make it better for you).

In a high-energy presentation, systems developer Shahid Shah described the cornucopia of high-quality, structured data that will be made available when devices are hooked together. “Gigabytes of data is being lost every minute from every patient hooked up to hospital monitors,” he said. DDS, HTTP, and XMPP are among the standards that will make an interconnected device mesh possible. Michael Italia described the promise of genome sequencing and the challenges it raises, including storage requirements and the social impacts of storing sensitive data about people’s propensity for disease. Mohamed ElMallah showed how it was sometimes possible to work around proprietary barriers in electronic health records and use them for research.

Representatives from OpenMRS and IntraHealth International spoke about the difficulties and successes of introducing IT into very poor areas of the world, where systems need to be powered by their own electricity generators. A maintainable project can't be dropped in by external NGO staff, but must cultivate local experts and take a whole-systems approach. Programmers in Rwanda, for instance, have developed enough expertise by now in OpenMRS to help clinics in neighboring countries install it. Leaders of OSEHRA, which is responsible for improving the Department of Veterans Affairs' VistA and developing a community around it, spoke to a very engaged audience about their work untangling and regularizing twenty years' worth of code.

In general, I was pleased with the modest growth of the health care track this year–most sessions drew about thirty people, and several drew a lot more–and with both the energy and the expertise of the people who came. Many attendees play an important role in furthering health IT.

Other thoughts

The Open Source convention reflected much of the buzz surrounding developments in computing. Full-day sessions on OpenStack and Gluster were totally filled. A focus on developing web pages came through in the popularity of talks about HTML5 and jQuery (now a platform all its own, with extensions sprouting in all directions). Perl still has a strong community. A few years ago, Ruby on Rails was the must-learn platform, and knock-off derivatives appeared in almost every other programming language imaginable. Now the Rails paradigm has been eclipsed (at least in the pursuit of learning) by Node.js, which was recently ported to Microsoft platforms, and its imitators.

No two OSCons are the same, but the conference continues to track what matters to developers and IT staff and to attract crowds every year. I enjoyed nearly all the speakers, who often pump excitement into the driest of technical topics through their own sense of continuing wonder. This is an industry where imagination’s wildest thoughts become everyday products.

April 05 2012

Editorial Radar with Mike Loukides & Mike Hendrickson

Mike Loukides and Mike Hendrickson, two of O'Reilly Media's editors, sat down recently to talk about what's on their editorial radars. Mike and Mike have almost 50 years of combined technical book publishing experience and I always enjoy listening to their insight.

In this session, they discuss what they see in the tech space including:

  • How 3D Printing and personal manufacturing will revolutionize the way business is conducted in the U.S. [Discussed at the 00:43 mark ]
  • The rise of mobile and device sensors and how intelligence will be added to all sorts of devices. [Discussed at the 02:15 mark ]
  • Clear winners in today's code space: JavaScript. With Node.js, D3, and HTML5, JavaScript is stepping up to the plate. [Discussed at the 04:12 mark ]
  • A discussion on the best first language to teach programming and how we need to provide learners with instruction for the things they want to do. [Discussed at the 06:03 mark ]

You can view the entire interview in the following video.

Next month, Mike and Mike will be talking about functional languages.

Fluent Conference: JavaScript & Beyond — Explore the changing worlds of JavaScript & HTML5 at the O'Reilly Fluent Conference (May 29 - 31 in San Francisco, Calif.).

Save 20% on registration with the code RADAR20

February 09 2012

It's time for a unified ebook format and the end of DRM

This post originally appeared on Publishers Weekly.

Imagine buying a car that locks you into one brand of fuel. A new BMW, for example, that only runs on BMW gas. There are plenty of BMW gas stations around, even a few in your neighborhood, so convenience isn't an issue. But if one of those other gas stations offers a discount, a membership program, or some other attractive marketing campaign, you can't participate. You're locked in with the BMW gas stations.

This could never happen, right? Consumers are too smart to buy into something like this. Or are they? After all, isn't that exactly what's happening in the ebook world? You buy a dedicated ebook reader like a Kindle or a NOOK and you're locked in to that company's content. Part of this problem has to do with ebook formats (e.g., EPUB or Mobipocket) while another part of it stems from publisher insistence on the use of digital rights management (DRM). Let's look at these issues individually.

Platform lock-in

I've often referred to it as Amazon's not-so-secret formula: Every time I buy another ebook for my Kindle, I'm building a library that makes me that much more loyal to Amazon's platform. If I've invested thousands or even hundreds of dollars in Kindle-formatted content, how could I possibly afford to switch to another reading platform?

It would be too inconvenient to have part of my library in Amazon's Mobipocket format and the rest in EPUB. Even though I could read both on a tablet (e.g., the iPad), I'd be forced to switch between two different apps. The user interface between any two reading apps is similar but not identical, and searching across your entire library becomes a two-step process since there's no way to access all of your content within one app.

This situation isn't unique to Amazon. The same issue exists for all the other dedicated ereader hardware platforms (e.g., Kobo, NOOK, etc.). Google Books initially seemed like a solution to this problem, but it still doesn't offer mobi formats for the Kindle, so it's selling content for every format under the sun — except the one with the largest market share.

EPUB would seem to be the answer. It's a popular format based on web standards, and it's developed and maintained by an organization that's focused on openness and broad industry adoption. It also happens to be the format used by seemingly every ebook vendor except the largest one: Amazon.

Even if we could get Amazon to adopt EPUB, though, we'd still have that other pesky issue to deal with: DRM.

The myth of DRM

I often blame Napster for the typical book publisher's fear of piracy. Publishers saw what happened in the music industry and figured the only way they'd make their book content available digitally was to tightly wrap it with DRM. The irony of this is that some of the most highly pirated books were never released as ebooks. Thanks to the magic of high-speed scanner technology, any print book can easily be converted to an ebook and distributed illegally.

Some publishers don't want to hear this, but the truth is that DRM can be hacked. It does not eliminate piracy. It not only fails as a piracy deterrent, but it also introduces restrictions that make ebooks less attractive than print books. We've all read a print book and passed it along to a friend. Good luck doing that with a DRM'd ebook! What publishers don't seem to understand is that DRM implies a lack of trust. All customers are considered thieves and must be treated accordingly.

The evil of DRM doesn't end there, though. Author Charlie Stross recently wrote a terrific blog post entitled "Cutting Their Own Throats." It's all about how publisher fear has enabled a big ebook player like Amazon to further reinforce its market position, often at the expense of publishers and authors. It's an unintended consequence of DRM that's impacting our entire industry.

Given all these issues, why not eliminate DRM and trust your customers? Even the music industry, the original casualty of the Napster phenomenon, has seen the light and moved on from DRM.

TOC NY 2012 — O'Reilly's TOC Conference, being held Feb. 13-15, 2012, in New York, is where the publishing and tech industries converge. Practitioners and executives from both camps will share what they've learned and join together to navigate publishing's ongoing transformation.

Register to attend TOC 2012

Lessons from the music industry

Several years ago, Steve Jobs posted a letter to the music industry pleading for it to abandon DRM. The letter no longer appears on Apple's website, but community commentary about it lives on. My favorite part of that letter is where Jobs asks why the music industry would allow DRM to go away. The answer is that "DRMs haven't worked, and may never work, to halt music piracy." In fact, a study last year by Rice University and Duke University contends that removing DRM can actually decrease piracy. Yes, you read that right.

I recently had an experience with my digital music collection that drove this point home for me. I had just switched from an iPhone to an Android phone and wanted to get my music from the old device onto the new one. All I had to do was drag and drop the folder containing my music in iTunes to the SD card in my new phone. It worked perfectly because the music file formats are universal and there was no DRM involved.

Imagine trying to do that with your ebook collection. Try dragging your Kindle ebooks onto your new NOOK, for example. Incompatible file formats and DRM prevent that from happening ... today. At some point in the not-too-distant future, though, I'm optimistic the book publishing industry will get to the same stage as the music industry and offer a universal, DRM-free format for all ebooks. Then customers will be free to use whatever e-reader they prefer without fear of lock-in and incompatibilities.

The music industry made the transition, why can't we?




January 20 2012

Don't expect the end of electronics obsolescence anytime soon

This post originally appeared in Mike Loukides' Google+ feed.

A cheery post-CES Mashable article talks about "the end of obsolescence." Unfortunately, that's precisely not what the sort of vendors exhibiting at CES have in mind.

The general idea behind the end of obsolescence is that consumer electronics — things like Internet-enabled TVs — are software upgradeable. The vendor only needs to ship a software update, and your TV is as good as the new ones in the store.

Except it isn't. It's important to think about what drives people to buy new gear. It used to be that you bought a TV or a radio and used it for 10 or 20 years, until it broke. (Alas, repairability is no longer in our culture.) Yes, the old one didn't have all the features of the new one, but who really cared? It worked. I've got a couple of flat screen monitors in my office; they're sorta old, they're not the best, and sure, I'd like brand new ones, but they work just fine and will probably outlive the computers they're connected to.

The point of field upgrades is that your old TV will have all the "new" features — just like my office computers that get regular updates from Apple and Microsoft. But your old TV will also have its 10-year-old CPU, 10-year-old RAM, and its 10-year-old SD card slot that doesn't know what to do with terabyte SD cards. And the software upgrades will make you painfully aware of that. Instead of a TV that works just fine, you'll have a TV that works worse and worse as time goes by. If you're in the computing business, and you've used a five-year-old machine, you know how painful that can be.

End of obsolescence? No, it's rubbing obsolescence in your face. Good for vendors, maybe, but not for consumers.

See comments and join the conversation about this topic at Google+.

Photo: Solid State by skippyjon, on Flickr

January 13 2012

A study confirms what we've all sensed: Readers are embracing ereading

The recently released Consumer Attitudes Toward E-Book Reading study by the Book Industry Study Group (BISG) showed impressive growth in ereading. From October 2010 to August 2011, the ebook market share more than tripled. Also notable, readers are committing to the technology, with almost 50% of ereading consumers saying they would wait up to three months to read a new ebook from a favorite author rather than reading the same book immediately in print.

In the following interview, BISG's deputy executive director Angela Bole reviews some of the study's data and addresses growing trends in ereading.

Bole will further examine the study's results — including data from the new third volume — at the "Consumer Attitudes Toward E-Book Reading" session at the upcoming Tools of Change for Publishing conference.

Are readers embracing ereading?

Angela Bole: When the first survey in volume two of BISG's "Consumer Attitudes Toward E-Book Reading" was fielded in October 2010, the market share for ebooks was less than 5%. In the latest fielding, conducted in August 2011, the market share was almost 16%. Clearly, readers are embracing ereading. The greatest interest today seems to lie in narrative fiction and nonfiction, with interest in more interactive nonfiction and education taking longer to develop.

How are most readers consuming e-books?

Angela Bole: In the October 2010 and January 2011 survey fieldings, there were two distinct categories of ereaders — tablets like the iPad and dedicated devices like the Kindle — with a wide functionality difference between them. During the May 2011 and August 2011 fieldings, the NOOK Color and many new Android-based tablets were released, and distinctions between device categories began to blur. Even so, dedicated ereaders remain the favorite ebook reading device for book lovers, especially for reading fiction and narrative nonfiction. The Kindle, in particular, remains strong.

A graph illustrating responses to the study question, "What device do you now use most frequently to read e-books?"

What are the most popular genres for ebooks?

Angela Bole: This depends to a degree on whether you're a "tablet person" or a "dedicated ereader person." Data from the Consumer Attitudes survey shows that the Kindle and NOOK are the preferred devices of survey respondents in all fiction categories, while tablets like the iPad hold the edge in nonfiction categories. In these reports, the data have suggested that dedicated ereaders may be better optimized for narrative reading, while the richer media capabilities of tablets may be more appropriate for nonfiction, education, and scientific and professional titles.

A graph illustrating responses to the study question, "What genre(s) do you like to read, overall (in any format)?"

Do people typically buy ebooks on their computers and then transfer them to their devices?

Angela Bole: Until August 2011, our data showed that the computer (desktop or laptop) was the prevailing purchasing platform. Today, however, more and more people are purchasing directly on their dedicated ereaders — 49% of respondents to the August 2011 fielding, up from 36% in May 2011.

Does the research point to digital publishing helping or hurting the publishing industry?

Angela Bole: Consumers who migrate to digital are spending less on physical hardcover and paperback books. The research bears this out quite clearly. That said, respondents to the survey actually report increasing their overall dollar spending as they make the transition to ebooks. Almost 70% of the respondents to the August 2011 fielding reported increasing their ebook expenditures, compared with 49% in the October 2010 fielding. Respondents reported increased spending on books in all formats to a greater degree than they reported decreased spending. Assuming the publishing industry can develop the right business models, this is good news.

This interview was edited and condensed.

TOC NY 2012 — O'Reilly's TOC Conference, being held Feb. 13-15, 2012, in New York City, is where the publishing and tech industries converge. Practitioners and executives from both camps will share what they've learned and join together to navigate publishing's ongoing transformation.

Register to attend TOC 2012


January 06 2012

Visualization of the Week: AntiMap

A new mobile phone app, AntiMap Log, allows users to record their own data as they move around. The app uses the phone's GPS and compass sensors to capture the following data: latitude, longitude, compass direction, speed, distance, and time.

While the AntiMap Log — available for both Android and iPhone — is the data-gathering component, it's just one part of a trio of open source tools. AntiMap Simple and AntiMap Video provide the visualization and analysis components.

AntiMap Video was originally designed to help snowboarders visualize their data in real-time, synced with footage of their rides. Here's a demo video:

That same snowboarder data is also used in the following visualization:

AntiMap snowboard visualization

AntiMap describes the visualization:

Circles are used to visualise the plotted data. The color of each circle is mapped to the compass data (0˚ = black, 360˚ = white), and the size of each circle is mapped to the speed data (bigger circles = faster) ... You can see from the visualisation, during heelside turns (left) the colours are a lot whiter/brighter than toeside turns (right). The sharper/more obvious colour changes indicate either sudden turns or spins (eg. the few black rings right in the centre).
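
The mapping AntiMap describes is simple enough to sketch. The scaling constants below are guesses for illustration, not the project's actual code:

    # Sketch of the described mapping: compass heading to a grayscale color,
    # speed to circle radius. Scaling factors are invented.

    def circle_style(compass_deg, speed_kmh, max_speed_kmh=60.0):
        gray = int(255 * (compass_deg % 360) / 360.0)           # 0 = black, 360 = white
        radius = 2 + 20 * min(speed_kmh / max_speed_kmh, 1.0)   # faster = bigger
        return {"fill": f"rgb({gray},{gray},{gray})", "radius": radius}

    # Heelside vs. toeside turns would show up as brighter vs. darker circles.
    print(circle_style(compass_deg=350, speed_kmh=45))
    print(circle_style(compass_deg=120, speed_kmh=20))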


Found a great visualization? Tell us about it

This post is part of an ongoing series exploring visualizations. We're always looking for leads, so please drop a line if there's a visualization you think we should know about.

Strata 2012 — The 2012 Strata Conference, being held Feb. 28-March 1 in Santa Clara, Calif., will offer three full days of hands-on data training and information-rich sessions. Strata brings together the people, tools, and technologies you need to make data work.

Save 20% on registration with the code RADAR20


September 26 2011

Could Medical Devices in the Field Help Prevent Fraud?

Medical devices, as I reported earlier this month, are becoming more and more important to health care as they get more sophisticated, more compact, cheaper, and easier to deploy in the home. But in addition to improving care, keeping patients out of the hospital, and providing useful clinical data to researchers, could medical devices help with one of the biggest contributors to escalating health costs--could they help prevent fraud?

Fraud is definitely sucking money from the health care system. The Coalition Against Insurance Fraud (while cautioning that accurate statistics are hard to gather) estimated that Medicare and Medicaid made 23.7 billion dollars in improper payments in 2007. The Wall Street Journal just reported a conviction in a single scam that cost Medicare 205 million dollars.

Governments aren't the only ones paying bad claims. Private insurers are obsessed with fraudulent charges. And all payers--both government and private--put up huge bureaucratic barriers to payments in order to combat fraud, elevating health costs even more.

Deliberate charges for work that was never performed represent the extreme end of a range of poor medical practices. Medicare defines a category of "abuse" in addition to fraud. Examples of abuse include performing an unnecessary procedure and offering treatments in an ineffective manner.

What can medical devices do? As developer Shahid Shah pointed out in an interview, devices provide authoritative information that is more accurate than impressions reported by either patients or health care providers. This means they can objectively indicate:

  • Traits and statistics indicating the presence of diseases and medical conditions

  • Traits and statistics indicating that treatments were applied

  • Traits and statistics indicating the remission of symptoms

In other words, evidence from devices can verify that a treatment was necessary, that it was administered, and that it was effective. To carry out their role in fraud prevention, devices must be secured. When a manufacturer's testing verifies their accuracy, the manufacturer should digitally sign them, and each transmission from the device should bear the signature. Tampering should, as much as the manufacturer can guarantee, disable the digital signature. Analysis may also be able to detect other ways of gaming the system, such as patients creating artificial conditions to simulate medical problems.
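
To make the signing idea concrete, here is a minimal sketch using an Ed25519 signature from the third-party cryptography package. It is an assumption-laden illustration, not a description of any certified device: key storage, time-stamping, and anti-replay protection are omitted, and a real device would keep its key in tamper-resistant hardware rather than application code.

    # Minimal sketch of a device signing its readings so a payer or auditor
    # can verify them later. Requires the "cryptography" package.

    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    device_key = Ed25519PrivateKey.generate()   # burned in at manufacture
    public_key = device_key.public_key()        # published by the manufacturer

    reading = {"device_id": "pump-0042", "glucose_mg_dl": 96, "ts": "2011-09-26T10:00:00Z"}
    payload = json.dumps(reading, sort_keys=True).encode()
    signature = device_key.sign(payload)

    # The payer or auditor verifies the signature before trusting the claim;
    # verify() raises InvalidSignature if the payload was tampered with.
    public_key.verify(signature, payload)
    print("reading verified")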

September 09 2011

Big health advances in small packages: report from the third annual Medical Device Connectivity conference

At some point, all of us are likely to owe our lives--or our quality of life--to a medical device. Yesterday I had the chance to attend the third annual Medical Device Connectivity conference, where manufacturers, doctors, and administrators discussed how to get all these monitors, pumps, and imaging machines to work together for better patient care.

A few days ago I reported on the themes in the conference, and my attendance opened up lots of new insights for me. I've been convinced for a long time that the real improvements in our health care system will come, not by adjusting insurance policies, payment systems, or data collection, but by changes right at the point of care. And nothing is closer to the point of care than the sensor attached to your chest or the infusion pump inserted into your vein.

What sorts of innovation can we hope for in medical devices? They include:

Connectivity

Devices already produce data (in the form of monitors' numerical outputs and waveforms) and consume it (in the form of infusion pumps' formularies and settings). The next natural step is to send that data over networks--hopefully wireless--so it can be viewed remotely. Even better, networking can infuse that data into electronic health records.

One biomedical engineer gestured vigorously as he told me how the collection of device data could be used to answer such questions as "How do some hospitals fare in comparison to others on similar cases?" and "How fast do treatments take effect?" He declared, "I'd like to spend the rest of my life working on this."

Cooperation

After coaxing devices to share what they know, the industry can work on getting them to signal and control each other. This would remove a major source of error in the system, because the devices would no longer depend on nurses to notice an alert and adjust a pump. "Smart alarms" would combine the results of several sensors to determine more accurately whether something is happening that really requires intervention.
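As a purely hypothetical illustration of that idea, a smart alarm might fire only when several independent signals agree. The thresholds and signals below are invented, not clinical guidance.

```python
def smart_alarm(heart_rate: float, spo2: float, resp_rate: float) -> bool:
    """Raise an alarm only when at least two independent signals look abnormal."""
    low_hr = heart_rate < 45
    low_spo2 = spo2 < 88
    low_resp = resp_rate < 8
    # A single marginal reading is often noise (a loose lead, a cough);
    # require agreement between at least two signals before interrupting a nurse.
    return sum([low_hr, low_spo2, low_resp]) >= 2

print(smart_alarm(44, 97, 14))  # one abnormal signal: no alarm
print(smart_alarm(44, 86, 14))  # two abnormal signals: alarm
```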

Of course, putting all this intelligence in software calls for iron-clad requirements definitions and heavy testing. Some observers doubt that we can program devices or controllers well enough to handle the complexity of individual patient differences. But here, I think the old complaint, "If we can get men to the moon..." applies. The Apollo program showed that we can create complex systems that work, if enough care and effort are applied. (However, the Challenger explosion shows that we can also fail, when they are not.)

Smaller and smarter

Faster, low-power chips are leading to completely different ways of solving common medical problems. Some recent devices are two orders of magnitude smaller than corresponding devices of the previous generation. This means, among other things, that patients formerly confined to bed can be more ambulatory, which is better for their health. The new devices also tend to be 25% to 30% of the cost of older ones.

Wireless

Telemedicine is one of the primary goals of health care reform. Taking care out of clinics and hospitals and letting people care for themselves in their natural environments gives us whole new opportunities for maintaining high-level functioning. But it isn't good for anybody's health to trip over cables, so wireless connections of various types are a prerequisite to the wider use of devices.

Ed Cantwell of the West Wireless Health Institute suggested three different tiers of wireless reliability, each appropriate for different devices and functions in a medical institution. The lowest level, consumer-grade, would correspond to the cellular networks we all use now. The next level, enterprise-grade, would have higher capacity and better signal-to-noise ratio. The highest level, medical-grade, would be reserved for life-critical devices and would protect against interference or bandwidth problems by using special frequencies.

Standards

The status of standards and standards bodies in the health care space is complex. Some bodies actually create standards, while others provide guidelines (sometimes standards in their own right) for using those standards. When you consider that it takes many years just to upgrade from one version of a standard to the next, you can see why the industry is permanently in transition. But these standards do lead to interoperability.

A few of the talks took on a bit of a revival-meeting atmosphere--step up and do things the way we tell you! Follow standards, do a risk analysis, get that test plan in place. I think one of the most anticipated standards for device safety, the integrated clinical environment requirements, comes up in the next section.

Communication frameworks

The industry is just in the opening chapter of making devices cooperate, but a key principle seems to be generally accepted: instead of having one device directly contact another, it's better to have them go through a central supervisor or node. A monitor, for instance, may say, "This patient's heart rate is falling." The supervisor would receive this information and check a "scenario" to see whether action should be taken. If so, it tells the pump, "Change the infusion rate as follows."
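A toy sketch of that supervisor pattern might look like the following; the class names and the heart-rate scenario are invented for illustration and are not drawn from any shipping product or standard.

```python
class InfusionPump:
    def set_rate(self, ml_per_hour: float) -> None:
        print(f"pump: infusion rate set to {ml_per_hour} ml/h")

class Supervisor:
    """Receives monitor events, consults a scenario, and commands other devices."""
    def __init__(self, pump: InfusionPump):
        self.pump = pump

    def on_monitor_event(self, event: dict) -> None:
        # "Scenario": if the heart rate is falling dangerously, cut the infusion rate.
        if event["signal"] == "heart_rate" and event["value"] < 50:
            self.pump.set_rate(0.0)

supervisor = Supervisor(InfusionPump())
supervisor.on_monitor_event({"signal": "heart_rate", "value": 47})
```

The point of routing everything through the supervisor, rather than wiring the monitor directly to the pump, is that the safety logic lives in one place that can be tested and updated.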

Michael Robkin reported on an architecture for an integrated clinical environment (ICE), defined in a set of safety requirements called ASTM F2761-09. The devices are managed by a network controller, which in turn feeds data back and forth to a supervisor.

I took the term "scenario" from a project by John Hatcliff of Kansas State University, the Medical Device Coordination Framework that seems well suited to the ICE architecture. In addition to providing an API and open-source library for devices to communicate with a supervisor, the project creates a kind of domain-specific language in which clinicians can tell the devices how to react to events.

Communicating devices need regulatory changes too. As I described in my earlier posting, the FDA allows communicating devices only if the exact configuration has been tested by the vendor. Each pair of devices must be proven to work reliably together.

In a world of standards, it should be possible to replace this pair-wise clearance with what Robkin called component-wise clearance. A device would be tested for conformance to a standard protocol, and then would be cleared by the FDA to run with any other device that is also cleared to conform. A volunteer group called the Medical Device Interoperability Safety Working Group has submitted a set of recommendations to the FDA advocating this change. We know that conformance to written standards does not perfectly guarantee interoperability (have you run into a JavaScript construct that worked differently on different web browsers?), but I suppose this change will not compromise safety because the hospital is always ultimately responsible for testing the configurations of devices it chooses.

The safety working group is also persuading the FDA to take a light hand in regulating consumer devices. Things such as the Fitbit or Zeo that are used for personal fitness should not have to obey the same rigorous regulations as devices used to make diagnostic decisions or deliver treatments.

Open Source

One of the conference organizers is Shahid Shah, a software developer in the medical device space with a strong leaning toward open-source solutions. As in his OSCon presentation, Shah spoke here of the many high-quality components one can get just for the effort of integration and testing. The device manufacturer takes responsibility for the behavior of any third-party software, proprietary or free, that goes into the device. But for at least a decade, devices using free software have been approved by the FDA. Shah also goaded his audience to follow standards wherever possible (such as XMPP for communication), because that will allow third parties to add useful functionality and enhance the marketability of their devices.

Solving old problems in new ways

Tim Gee, program chair, gave an inspiring talk showing several new devices that do their job in radically better ways. Many have incorporated the touch screen interface popularized by the iPhone, and use accelerometers like modern consumer devices to react intelligently to patient movements. For instance, many devices give off false alarms when patients move. The proper interpretation of accelerometers might help suppress these alarms.
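To make the accelerometer idea concrete, here is a rough sketch of how motion data might be used to suppress a suspected artifact; the threshold is an arbitrary assumption, not a validated value.

```python
import math

def suppress_alarm(alarm_raised: bool, accel_xyz: tuple) -> bool:
    """Return True if the alarm should be silenced because the patient was moving."""
    g = 9.81
    magnitude = math.sqrt(sum(a * a for a in accel_xyz))
    # Acceleration well beyond what gravity alone explains suggests patient movement,
    # which often produces false readings on attached sensors.
    moving = abs(magnitude - g) > 2.0
    return alarm_raised and moving

print(suppress_alarm(True, (0.1, 0.2, 9.8)))   # still patient: let the alarm through -> False
print(suppress_alarm(True, (6.0, 4.0, 12.0)))  # vigorous movement: likely artifact -> True
```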

In addition to user interface advances and low-power CPUs with more memory and processing power, manufacturers are taking advantage of smaller, better WiFi radios and of tools that reduce their time to market. They are also using third-party software modules (including open source) and following standards more. Networking allows a device to offload some processing to a PC or network server, letting the device itself shrink even more in physical size and complexity. Perhaps most welcome to clinicians, manufacturers are adding workflow automation.

Meaningful use and the HITECH Act play a role in device advances too. Clinicians are expected to report more and more data, and the easiest way to get and store it is to go to the devices where it originates.

Small, pesky problems

One of the biggest barriers to sharing device data is a seemingly trivial question: what patient does the data apply to? The question is actually a major sticking point, because scenarios like the following can happen:

  • A patient moves to a new hospital room. Either her monitor reports a new location, or she gets a new monitor. Manually updating the information to put the new data into the patient's chart is an error-prone operation.

  • A patient moves out and a device is hooked up to a new patient. This time the staff must be sure to stop reporting data to the old patient's record and direct it into the new patient's record.

Several speakers indicated that even the smallest risk of putting data into the wrong person's record is unacceptable. Hospitals assign IDs to patients, and most monitors report the ID with each data sample, but Luis Melendez of Partners HealthCare says EHR vendors don't trust this data (nurses could have entered it incorrectly), so they throw the IDs away and record only the location. Defaults can be changed, but doing so introduces new problems.

Another of these frustratingly basic problems is attaching the correct time to data. It seems surprisingly hard to load the correct time onto devices and keep them synched to it. I guess medical devices don't run NTP. They often carry timestamps that are off by minutes or even months, and EHRs store them without trying to verify them.
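As a sketch of the kind of sanity check an EHR could apply rather than storing device timestamps blindly, the snippet below flags readings whose clocks drift implausibly far from the time the data arrived; the five-minute tolerance is an arbitrary assumption.

```python
from datetime import datetime, timezone, timedelta
from typing import Optional

MAX_DRIFT = timedelta(minutes=5)

def plausible_timestamp(device_ts: datetime, received_at: Optional[datetime] = None) -> bool:
    """Flag device timestamps that differ too much from the time the data was received."""
    received_at = received_at or datetime.now(timezone.utc)
    return abs(received_at - device_ts) <= MAX_DRIFT

# A reading whose clock is months behind should be corrected or flagged, not stored as-is.
stale = datetime(2011, 3, 1, 12, 0, tzinfo=timezone.utc)
print(plausible_timestamp(stale, received_at=datetime(2011, 9, 8, 12, 0, tzinfo=timezone.utc)))  # False
```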

Other steps forward

We are only beginning to imagine what we could do with smarter medical devices. For instance, if their output is recorded and run through statistical tests, problems with devices might be revealed earlier in their life cycles. (According to expert Julian Goldman, the FDA and ONC are looking into this.) And I didn't even hear talk of RFIDs, which could greatly reduce the cost of tracking equipment as well as ensure that each patient is using what she needs. No doubt about it: even if medical device improvements aren't actually the protagonists of the health care reform drama, they will play more than a supporting role. But few clinical institutions use the capabilities of smart devices yet, and those that do exploit them mostly to insert data into patient records. So we'll probably wait longer than we want for the visions unveiled at this conference to enter our everyday lives.

September 06 2011

Medical device experts and their devices converse at Boston conference

I'm looking forward to the Medical Device Connectivity conference this week at Harvard Medical School. It's described by one of the organizers, Shahid N. Shah, as an "operational" conference, focused not on standards or the potential of the field but on how to really make these things work. On the other hand, as I learned from a call with Program Chair Tim Gee, many of the benefits for which the field is striving have to wait for changes in technology, markets, FDA regulations, and hospital deployment practices.

For instance, Gee pointed out that patients under anesthesia need a ventilator in order to breathe. But this interferes with certain interventions that may be needed during surgery, such as X-rays, so the ventilator must sometimes be turned off temporarily. Guess how often the staff forget to turn the ventilator back on after taking the X-ray? Often enough, anyway, to justify taking this matter out of human hands and making the X-ray machine talk directly to the ventilator. Activating the X-ray machine should turn off the ventilator, and finishing the X-ray should turn it back on. But that's not done now.

Another example Gee gave of the benefits of smarter, better-networked devices is patient-controlled analgesia (PCA) pumps. When overused, they can cause the patient's vital functions to slow down and eventually cause death. But a busy nurse or untrained family member might keep the pump going even after the patient monitors issue alarms. It would be better for the monitor to shut off the pump when vital signs get dangerously low.

The first requirement for such life-saving feedback loops is smart software that can traverse state machines, along with rules engines for more complex scenarios such as self-regulating intravenous feeds. Standards will make it easier for devices to communicate with each other. But the technologies already exist and will be demonstrated at an interoperability lab on Wednesday evening.
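For illustration only, the state machine below captures the flavor of the PCA scenario; its states, thresholds, and transitions are invented, not taken from the interoperability lab or any real pump.

```python
class PCAController:
    """A toy state machine that pauses a PCA pump when vital signs look dangerous."""
    def __init__(self):
        self.state = "INFUSING"

    def on_vitals(self, spo2: float, resp_rate: float) -> str:
        if self.state == "INFUSING" and (spo2 < 90 or resp_rate < 8):
            self.state = "PAUSED"            # shut off the pump and alert a clinician
        elif self.state == "PAUSED" and spo2 >= 94 and resp_rate >= 10:
            self.state = "AWAITING_REVIEW"   # resume only after a human signs off
        return self.state

pca = PCAController()
print(pca.on_vitals(spo2=96, resp_rate=14))  # INFUSING
print(pca.on_vitals(spo2=88, resp_rate=7))   # PAUSED
```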

None of the functions demonstrated at the lab are available in the field. They require more testing by manufacturers. Obviously, devices that communicate also add an entirely new dimension of tasks to hospital staff. And FDA certification rules may also need to be updated to be more flexible. Currently, a manufacturer selling a device that interoperates with other devices must test every configuration it wants to sell, specifying every device in that configuration and taking legal responsibility for their safe and reliable operation. It cannot sell into a hospital that uses the device in a different configuration. Certifying a device to conform to a standard instead of working with particular devices would increase the sale and installation of smart, interconnected instruments.

Reading the conference agenda (PDF), I came up with three main themes for the conference:

The device in practice

Hospitals, nursing homes, and other institutions that are currently the major users of devices (although devices are increasingly being used in homes as well) need to make lots of changes in workflow and IT to reap the benefits of newer device capabilities. Currently, according to Gee, the output from devices is used to make immediate medical decisions but is rarely stored in EHRs. The quantities of data could become overwhelming (perhaps why one session at the conference covers cloud storage), and doctors would need tools for analyzing it. Continuous patient monitoring, which tends to send 12 Kb of data per second and must have a reliable channel that prevents data loss, has a session all its own.
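To give a feel for the volumes involved, here is a back-of-the-envelope calculation, assuming the quoted figure means roughly 12 kilobytes per second per monitored patient:

```python
bytes_per_second = 12 * 1024                               # ~12 KB/s per monitored patient
per_patient_per_day = bytes_per_second * 60 * 60 * 24      # ~1.1 GB per patient per day
ward_per_year = per_patient_per_day * 30 * 365             # ~11.6 TB per year for a 30-bed unit
print(per_patient_per_day / 1e9, ward_per_year / 1e12)
```

Even at that modest per-second rate, a single monitored unit generates terabytes a year, which is why cloud storage and analysis tools get their own sessions.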

Standards and regulations

The FDA has responded with unusual speed and flexibility to device manufacturers recently. One simple but valuable directive (PDF) clarified that devices could exchange information with other systems (such as EHRs) without requiring the FDA to regulate those EHRs. It has also recognized a risk assessment standard, IEC 80001, which specifies what manufacturers and hospitals should do when installing devices or upgrading environments. This is an advance for the FDA because it normally does not regulate the users of products (only the manufacturers), but it realizes that users need guidance on best practices. Another conference session covers work by the mHealth Regulatory Coalition.

Wireless networks

This single topic is complex and important enough to earn several sessions, including security and the use of the 5 GHz band. No surprise that so much attention is being devoted to wireless networking: it's the preferred medium for connecting devices, but its reliability is hard to guarantee and absolutely critical.

The conference ends with three workshops: one on making nurse response to alarms and calls more efficient, one on distributed antenna systems (good for cell phones and ensuring public safety), and one on the use of open source libraries to put connectivity into devices. The latter workshop is given by Shah and goes more deeply into the topic he presented in a talk at O'Reilly's Open Source convention and an interview with me.

July 07 2011

3 Android predictions: In your home, in your clothes, in your car

In advance of his upcoming webcast and the Android Open conference, I asked "Learning Android" author Marko Gargenta to weigh in on Android's future. Below he offers three predictions that focus on Android's expansion beyond mobile devices.


Prediction 1: Android controls the home

Marko Gargenta: Google painted their vision of Android @ Home at the last Google I/O. I think this has huge potential to make Android the de-facto controller for many other devices, from lights to music players to robots and factory machinery. We are seeing the first stage with numerous home security systems being developed using Android, as well as set-top boxes powered by Android. At the moment, many of these devices simply use Android as a replacement for embedded Linux and they're still just self-contained devices.

In the second stage, manufacturers will start exposing libraries so developers can build custom applications for their devices, effectively turning them into platforms. I predict this will happen later this year as manufacturers realize the power of letting users hack their systems. The latest case study with Microsoft Kinect should help pave the way.

In the third stage, various devices will be able to interact with one another — my phone can detect my TV and my TV can communicate with my stereo. This will take a bit longer to get to as we still don't have common protocols for this type of communication. We also run the risk of companies developing their own proprietary protocols, such as a Samsung TV only talking to a Samsung phone, etc. Compatibility may require Google stepping in and using the Compatibility Test Suite (CTS) as a tool to enforce common protocols.

Android Open, being held October 9-11 in San Francisco, is a big-tent meeting ground for app and game developers, carriers, chip manufacturers, content creators, OEMs, researchers, entrepreneurs, VCs, and business leaders.

Save 20% on registration with the code AN11RAD

Prediction 2: Wearable Android

Marko Gargenta: The form factor for Android boards is getting to be very small and the price of the actual chipset is approaching the $100 point for a full-featured device. This allows for development of wearable Android-powered devices. Some of them will be for fashion purposes, such as watches. Others will be for medical and safety applications. I predict that toward the end of this year we're going to start seeing high-end fashion accessories based on Android. We may not be aware they are Android-powered, and we may not be able to develop for them. At the same time, early medical devices will emerge, initially for non-critical applications. These will likely be closed, purpose-built systems with little opportunity for development or extension.

Prediction 3: Android and networked cars

Marko Gargenta: This is the next big frontier for Android to seize. The car industry is now at the point where the mobile phone industry was 5-10 years ago. People are going to want more from their car systems as they realize that things like Google Maps beat any stock navigation system. Consumers will want car-based connectivity to the Internet as well as apps.

The first stage of networked car development will involve using Android to build proprietary systems. This is already underway with commercial systems being built for cars without users even knowing the systems are based on Android. The second stage will involve connecting the cars to the Internet. This can be done in a couple of ways: cars can have radios with their own connections to the Internet or a driver's mobile phone can be tapped for online access.

Whatever approach we take, 4G and LTE network developments will help the process quite a bit. Once the cars are connected, manufacturers will have the opportunity to open up kits for developers to build purpose-built applications for those systems. Manufacturers will likely control tightly which apps are allowed into which vehicles by running their own proprietary app stores with strict policies and quality control. It is simply the nature of the auto industry to police itself and focus heavily on testing software. It is not very likely that we'll be able to simply download car apps from a major app market right away.



January 27 2011

Loathe your Kindle? Swap it for a bunch of books

If you're an extreme curmudgeon who deems the Kindle to be a "soulless faux-literary technology," the Microcosm Publishing Book and Zine Store in Portland, Ore., has a solution should you somehow come into possession of the device. Trade your Kindle — dollar for dollar — for old-school paper books.

I spoke with Matt Gauck, a bookseller at the store, on the phone Wednesday night. He said the plan is to add the Kindles to the store's collection of outdated technology. So far, storage limitations haven't been an issue because they've had just two participants in the exchange program. If the trade-in catches on, however, they'll need a plan B.

One potential solution: Consider a tax-deductible donation to Worldreader.org. The nonprofit organization provides access to digital books in developing countries. In November, they launched a project in partnership with Amazon called the iREAD Pilot Study. Kindles were distributed to more than 500 children at six schools in Ghana, providing them with books and textbooks to which they wouldn't otherwise have access. The preliminary progress report is worth a read and quite inspirational.

Anytime technology can be used to make the world a better place, it's a success in my book.

TOC 2011, being held Feb. 14-16, 2011, in New York City, will explore "publishing without boundaries" through a variety of workshops, keynotes and panel sessions.

Save 15% off registration with the code TOC11RAD

December 16 2010

Device Update: Year-end edition

I agree with other analysts in the industry who believe this holiday season will mark the tipping point for ereaders and ebooks. Here's why:

  • The release of Apple's iPad kicked off a wave of new, ereader-friendly devices.
  • Access to these devices is getting easier. Most leading office supply, electronics, and department stores sell one or more dedicated ereader devices.
  • The flexible application strategies from Amazon, Google, and Kobo mean that even if you don't get a shiny new mobile gadget as a gift, you can download an ereader application for your mobile phone, laptop, or desktop computer.

I sense a strange irony emerging from such an over-active ereader marketplace. With the strong application strategies of Amazon, Google, and Kobo, it's unlikely that any device manufacturer will end up owning a majority of the marketplace. Up until the release of Google's eBookstore platform, I thought it was premature to anticipate the future of dedicated ereading devices. But it's clear to me now that dedicated ereading platforms will only be a niche market. Multi-platform approaches that manage a user's library in the cloud will ultimately be the preferable option. Surely there will continue to be innovation in the dedicated ereader device market, but strong competition will keep profit margins low. The real competition in the ereading marketplace will happen with software, not hardware.

Looking forward into 2011, I see two areas that will be important within the ereader and ebook market: first, we'll see an implosion of the dedicated ereader device market. Subsequently, the focus will shift to the maturation of the ebook market.

Prediction 1: The dedicated ereader market will crest

When Apple launched the iTunes store in 2003 and established a marketplace for digital music, most of the commercial electronics makers were taken by surprise. Apple's capture of this new market was so swift, no one was able to mount a serious challenge. Even now, I don't believe any company will break Apple's stranglehold until the experience of consuming digital music changes.

When Apple announced plans for the iPad and the iBooks platform earlier this year, the commercial electronics marketplace was determined to prevent Apple from repeating their digital music success with digital books. While Apple has built a commanding lead in the tablet marketplace, competitors flooded the market with a plethora of different devices.

Despite bold predictions of an expanding market for dedicated ereading devices (here and here), I believe that demand for these devices will peak in 2011 and rapidly surrender to the growing popularity of multiple-purpose tablets, like the iPad and its competitors.

The downsides inherent with dedicated ereaders will work to decrease consumers' desire for these devices. The disadvantages include:

  • Complications resulting from device lock-in, such as the inability to migrate purchased books to a different platform.
  • As readers settle into using their new devices, the experience of identifying, obtaining, and recommending titles will become increasingly more important. Due to their limited functionality, these single-purpose devices may prevent readers from sharing their reading experience with friends and family.
  • With very few devices offering color displays or strong graphics capabilities, consumers may determine that text-only books aren't worth the price.

Prediction 2: An increased focus on the ebook experience

As the holiday decorations go back into storage early next year, the pressure will really be on the publishing industry to deliver the goods for the freshly-minted marketplace of ereaders. The most interesting aspect of the ereader tipping point will be the evolution of the ebook experience. By "experience" I mean the full environment of ebooks: how readers discover books, purchase them, read, and share titles. While a portion of this environment will be built into specific ereader devices, the remaining aspects of this environment will have to be built from software.

It seems natural to expect that new ebook consumers will want to replicate their existing reading-related behaviors. While I've seen how sites like GoodReads and Shelfari have become an active part of this process, it is less clear how sites like Facebook and LinkedIn help readers share their love for books. It'll be interesting to see how this emerging ereading environment integrates into existing social networks.

Tim Berners-Lee recently noted the importance of ebooks remaining part of the Internet. His concern was that by packaging books into binary applications for consumption on mobile devices, they would become invisible to the Internet. This is similar to what happened with content that was packaged in Adobe Flash files, which were hidden from search engines up until very recently. I agree with Berners-Lee and submit that, while of great interest to many publishers, books as applications will be a passing fad because they cut readers off from the rest of this ereading environment.

Looking forward, the publishing industry has a few areas that might become obstacles to long-term growth in the ebook market. First, with millions of books pre-dating the ereading tipping point, how will publishers make those titles available to consumers in digital formats? In this regard, Google's efforts to scan books may become their default answer.

The second tricky situation involves localizing content for a global economy. As broadband and wireless access expands into more countries, the demand for digital books in languages other than English will grow. If that material isn't available, digital sales will be hampered.

2011: Trial by fire

Several years ago, I could already see drastic change approaching the publishing industry. The uncertainty of how the industry would react to those changes, coupled with the opportunity to actively participate in that change, is what sparked my interest in the future of publishing.

For most publishers, 2011 will be a critical period. The digital book changes publishers have already made will meet fully with the harsh demands of a significant consumer base. Authors, agents, publishers, and bookstores will need to be agile if they want to keep up with the demands of digital readers.


