
August 09 2012

Four short links: 9 August 2012

  1. Doing Capitalism in the Innovation Economy (Amazon) — soon-to-be-released book by Bill Janeway, of Warburg-Pincus (and the O’Reilly board). People raved about his session at scifoo. I’m bummed I missed it, but I’ll console myself with his book.
  2. Cell Image Library — a freely accessible, easy-to-search, public repository of reviewed and annotated images, videos, and animations of cells from a variety of organisms, showcasing cell architecture, intracellular functionalities, and both normal and abnormal processes. The purpose of this database is to advance research, education, and training, with the ultimate goal of improving human health. And an excellent source of desktop images.
  3. Smartphone EEG Scanner — unusually, there’s no Kickstarter project for an iPhone version. (Designs and software are open source.)
  4. Feynman — excellent graphic novel bio of Feynman, covering the science as well as the personality. Easy to read and very enjoyable.

August 01 2012

Sensors and Arduino: How to glue them together

Federico Lucifredi (@federico_II) is the maintainer of man(1) and also the author of the upcoming book, Sensor Interfaces for Arduino. We had a chance to sit down recently and talk about how to connect sensors to microcontrollers (in particular Arduino).

Given how many sensors there are in the wild, there’s a lot of ground to cover. Some of the key points from the full video are:

  • When to look for a library to support your sensor and when to just write a few lines of code to read it. [Discussed at the 3:00 mark]
  • Thinking about sensors that return non-linear responses and how that might affect your code. [4:40]
  • Detecting a human presence on a door mat. [6:00]
  • Using a Geiger counter to measure radiation and generate random numbers. [8:14]
  • Where to look for docs and code when you start working with an unfamiliar sensor. [11:30]
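The non-linearity point from the list above is worth a concrete sketch. The following is my own illustrative example, not code from the interview: a 10k NTC thermistor in a voltage divider produces ADC counts that are nowhere near linear in temperature, so instead of fitting a straight line you apply the Beta equation in software. All component values here are assumptions for illustration.

```python
import math

V_REF = 5.0          # supply voltage
ADC_MAX = 1023       # 10-bit ADC, as on a stock Arduino Uno
R_FIXED = 10_000.0   # fixed divider resistor (ohms), thermistor on the Vcc side
R_NOMINAL = 10_000.0 # thermistor resistance at 25 C
T_NOMINAL = 298.15   # 25 C in kelvin
BETA = 3950.0        # Beta coefficient from a typical thermistor datasheet

def counts_to_celsius(counts):
    """Convert raw ADC counts from a thermistor divider to degrees C."""
    if counts <= 0 or counts >= ADC_MAX:
        raise ValueError("reading out of range")
    # Divider math: the ADC reads across the fixed resistor, so
    # counts/ADC_MAX = R_FIXED / (R_FIXED + R_thermistor).
    resistance = R_FIXED * (ADC_MAX / counts - 1.0)
    # Beta (simplified Steinhart-Hart) equation: 1/T = 1/T0 + ln(R/R0)/B
    inv_t = 1.0 / T_NOMINAL + math.log(resistance / R_NOMINAL) / BETA
    return 1.0 / inv_t - 273.15

# At 25 C the thermistor matches the fixed resistor, so the divider
# sits at mid-scale: ~512 counts.
print(round(counts_to_celsius(512), 1))
```

With the divider at mid-scale this prints 25.0, and the mapping correctly bends away from linear as the reading moves toward either rail.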

The full discussion is available in the following video:


Four short links: 1 August 2012

  1. China Hackers Hit EU Point Man and DC (Bloomberg) — wow. The extent to which EU and US government and business computer systems have been penetrated is astonishing. Stolen information is flowing out of the networks of law firms, investment banks, oil companies, drug makers, and high technology manufacturers in such significant quantities that intelligence officials now say it could cause long-term harm to U.S. and European economies. (via Gady Epstein)
  2. Digestible Microchips (Nature) — The sand-particle sized sensor consists of a minute silicon chip containing trace amounts of magnesium and copper. When swallowed, it generates a slight voltage in response to digestive juices, which conveys a signal to the surface of a person’s skin where a patch then relays the information to a mobile phone belonging to a healthcare provider. (via Sara Winge)
  3. Quantum Mechanics Made Simple(r) — clever way to avoid the brain pain of quantum mechanics and leap straight to the “oh!”. [N]ature is described not by probabilities (which are always nonnegative), but by numbers called amplitudes that can be positive, negative, or even complex. [...] In the usual “hierarchy of sciences”—with biology at the top, then chemistry, then physics, then math—quantum mechanics sits at a level between math and physics that I don’t know a good name for. Basically, quantum mechanics is the operating system that other physical theories run on as application software (with the exception of general relativity, which hasn’t yet been successfully ported to this particular OS). (via Hacker News)
  4. Selectively De-Animating Video — SIGGRAPH talk showing how to keep some things still in a video. Check out the teaser video with samples: ZOMG. I note that Maneesh Agrawala was involved: I’m a fan of his from Line Drive maps and 3D exploded views, but his entire paper list is worth reading. Wow. (via Greg Borenstein)
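The amplitude idea in link 3 can be made concrete with a toy calculation (my example, not Aaronson's): two indistinguishable paths into the same detector whose amplitudes are equal in magnitude but opposite in sign. Quantum mechanics says add the amplitudes first and then square the magnitude; a naive probability calculus squares each path first and then adds.

```python
import math

# Two indistinguishable paths into the same detector, equal magnitude,
# opposite phase. (Toy numbers; any complex amplitudes would do.)
amp_a = 1 / math.sqrt(2)
amp_b = -1 / math.sqrt(2)

# Quantum rule: sum the amplitudes, then take |.|^2.
p_quantum = abs(amp_a + amp_b) ** 2          # destructive interference, ~0

# Probability-style rule: take |.|^2 of each path, then sum.
p_classical = abs(amp_a) ** 2 + abs(amp_b) ** 2  # ~1: no cancellation possible

print(p_quantum, p_classical)
```

Because probabilities are nonnegative they can only accumulate; it's the signed (or complex) amplitudes that let the two paths wipe each other out.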

April 05 2012

Editorial Radar with Mike Loukides & Mike Hendrickson

Mike Loukides and Mike Hendrickson, two of O'Reilly Media's editors, sat down recently to talk about what's on their editorial radars. Mike and Mike have almost 50 years of combined technical book publishing experience and I always enjoy listening to their insight.

In this session, they discuss what they see in the tech space including:

  • How 3D printing and personal manufacturing will revolutionize the way business is conducted in the U.S. [Discussed at the 00:43 mark]
  • The rise of mobile and device sensors and how intelligence will be added to all sorts of devices. [Discussed at the 02:15 mark]
  • A clear winner in today's code space: JavaScript. With Node.js, D3, and HTML5, JavaScript is stepping up to the plate. [Discussed at the 04:12 mark]
  • A discussion of the best first language for teaching programming and how we need to provide learners with instruction for the things they want to do. [Discussed at the 06:03 mark]

You can view the entire interview in the following video.

Next month, Mike and Mike will be talking about functional languages.

Fluent Conference: JavaScript & Beyond — Explore the changing worlds of JavaScript & HTML5 at the O'Reilly Fluent Conference (May 29 - 31 in San Francisco, Calif.).

Save 20% on registration with the code RADAR20

March 28 2012

The Reading Glove engages senses and objects to tell a story

I encountered Karen Tanenbaum (@ktanenbaum) through friends over on the Make side of O'Reilly Media. Thinking she might be a potential speaker for an upcoming edition of our Tools of Change for Publishing Conference, I contacted her for more information about the cool stuff she's working on, particularly her Reading Glove project.

"The Reading Glove is a collaboration between myself and my husband, Josh Tanenbaum," said Karen. "I have an interest in adaptive and intelligent systems, his research looks at storytelling and video games, and we both like to explore alternative interfaces: tangible, wearable, and ubiquitous computing environments."

Read more about the Glove project as well as Karen's take on design education, Steampunk culture and the Maker movement in the interview below.

How did the Reading Glove come about and what are your goals for the project?

Karen Tanenbaum: We wanted to see what happened when we gave people a story that was embedded on real, physical objects that could be played with and moved around. Our original vision was an entire room that told a story when you explored it, responding to objects you touched or moved via light and sound responses — sort of like a haunted house, but intended to tell a specific narrative rather than just be spooky. That was outside the scope and the budget of our dissertation work, so the Reading Glove was our first, more constrained exploration of that space.

Recommender, objects, and glove.

We also wanted to explore wearable technology with the glove; the goal there was to invoke the idea of "psychometry," or the psychic power of object reading. When you pick up the objects, you hear the echoes of the past, what these objects experienced, and then you use this power to piece the story back together. I developed a guidance system that helped to navigate the non-linear narrative, adding an adaptive or intelligent component to the experience. We've gotten some really interesting results out of it, such as how people talk about a system that has intelligent components, how much they anthropomorphize it and how accurate their estimates of its "intelligence" are.

What role does data play in the project?

Karen Tanenbaum: We collected a ton of data for this project, and I've spent the last year trying to sort through it and make sense of it. It's a real challenge with these kinds of novel systems to figure out what the most important thing is and to see how to correlate all these things together: how many objects they picked up, in what sequence, whether they interrupted pieces, whether or not they followed the system's recommendations, etc.

The focus of my analysis was on how people talked about the system and their use of it, particularly the notion of "control" and "choice." People would say that they really liked the freedom to choose any object and control how the story went, but would also say that since they never knew what story fragment they were going to get when they picked an object, they wished they had more control. It's interesting how people use technology and feel like they are, or are not, in control of it.

There's a problem with any simple measure of novel technology, which is that people in general tend to respond positively toward something new, especially if they know they are talking to the person who designed and built it. It's hard to ask someone "Did you like X?" when X is a new experience, like wearing a glove and picking up objects to hear a story. Of course, they're going to say "yes" because they don't have much to compare it to and because it is a fun thing to do. But there are innumerable design decisions that go into the whole experience, and it's hard to disentangle them to see where a different choice might have led to a better experience. That's why I think richer, more qualitative data is important to the field. It gets you beyond "I liked it, I thought it was easy to use, etc.," and you see what aspects of the experience people are really responding to, or what was actually frustrating them but which they didn't mention in the yes/no survey questions.

Reading Glove data diagram.

As well as being a PhD candidate, you teach interaction design at the university level. What trends are you seeing in design and technology education?

Karen Tanenbaum: I've taught interaction design at an art and design school and within a research university, and they are very different experiences. The art and design students were much more wary of the technology, but they had great intuition on how to use it to express their points of view. The students at the big university were more naturally technology-seeking, but they had to be pushed to really explore what it meant to say something with the technology. You really want the blend of both of those things: the technological expertise and the desire and ability to express something via technology rather than simply use it. I teach Processing to as many people as I can — basic coding is an incredibly beneficial skill for people in all fields to learn. There are tools to help simplify and automate a lot of the routines of everyone's work if you know how to write some basic code or search string parameters.

The other side of the coin, which I've had less opportunity to teach directly, is developing a critical stance toward technology. Programming and technical skills are really important, but so are critical thinking and reflective analysis. I don't believe technology is a neutral force; it is embedded and intertwined with a host of other cultural and societal forces, and we have both the ability and mandate to try to shape technical systems that are socially and ethically responsible. It's hard to teach both detailed technical expertise and deep critical thinking at the same time, and it seems that most schools end up focusing on one to the detriment of the other.

How is technology changing the experience of art and reading?

Karen Tanenbaum: Despite making a wearable device called the Reading Glove, most of my reading processes are stuck firmly in the last century. The power of technology as applied to art and reading is the connectivity that it can bring about. You can connect to what other people think about the work or you can see related pieces that might lead you to new discoveries. The interesting thing would be to bring that connectivity to the physical books, not to make the books themselves digital.

What other projects are you working on?

Karen Tanenbaum: As I finish the dissertation, I’m also doing a year-long internship at Intel Labs' Interaction and Experience Research Group, which is providing me with a fantastic opportunity to pursue some of the research work I've done since the Reading Glove.

My first project at Intel was to coordinate an exhibition of design fiction work called "Powered by Fiction," which ran alongside Emerge, a conference at Arizona State University on designing the future. We explored how fiction inspires the creation of physical, tangible props, costumes, and artifacts. One of the characters in the show, "Captain Chronomek," was developed by me, my husband, and a colleague. The character is a time-traveling, Steampunk-flavored superhero.

The other related project that I've got going is an academic look at the subculture of Steampunk, picking apart what's driving its increased popularity. I'm a co-author on a paper on Steampunk at CHI this year. That paper looks at some of the implications of the Steampunk movement: the way it re-imagines the Industrial Revolution, the historical story of technology development, and the drive toward customization and artisan craftsmanship in technology.

And finally, I am now working on Intel's presence at Maker Faire. I had a booth at the Vancouver Mini Maker Faire this year with my husband to represent our fledgling production company, Tanenbaum Fabrications. We're putting together a joint booth with some other folks at the Bay Area Maker Faire this year called Steampunk Academy. As with the Steampunk work, there's something really interesting going on with the democratization of technology design and production that is represented in the Maker movement.

I'm hoping to spend time in the next year working more with the LilyPad Arduino and other e-textile and soft circuitry components since I think that's a really exciting area for open source, tinker-y innovation.

Who inspires you? Whose work do you follow?

Karen Tanenbaum: I'm most inspired by the people doing the kind of work I was talking about above in the question about education: critical thinking on technology and the fusing of philosophy with technology design practice. Material like Terry Winograd and Fernando Flores' classic and incisive critique of artificial intelligence work in the mid '80s, and Paul Dourish's work merging Heideggerian philosophy with tangible computing and his collaboration with Genevieve Bell on science fiction and ubiquitous computing. I'm also influenced by all of Daniel Fallman's papers on what interaction design and design research is or could be. And I'm really enamored with some of the more recent work being done in merging "craft" and "design": Leah Buechley's LilyPad Arduino and High-Low Tech Lab at MIT, Daniela Rosner's work applying craft knowledge from antiquarian book restoration and knitting to technology design, and Hannah Perner-Wilson's amazing and beautiful textile sensors.

Watch a demonstration of the Reading Glove:

This interview was edited and condensed. Photos: Team Tanenbaum, on Flickr.


January 06 2012

Visualization of the Week: AntiMap

A new mobile phone app, AntiMap Log, allows users to record their own data as they move around. The app uses the phone's GPS and compass sensors to capture the following data: latitude, longitude, compass direction, speed, distance, and time.

While the AntiMap Log — available for both Android and iPhone — is the data-gathering component, it's just one part of a trio of open source tools. AntiMap Simple and AntiMap Video provide the visualization and analysis components.

AntiMap Video was originally designed to help snowboarders visualize their data in real-time, synced with footage of their rides. Here's a demo video:

That same snowboarder data is also used in the following visualization:

AntiMap snowboard visualization

AntiMap describes the visualization:

Circles are used to visualise the plotted data. The color of each circle is mapped to the compass data (0˚ = black, 360˚ = white), and the size of each circle is mapped to the speed data (bigger circles = faster) ... You can see from the visualisation, during heelside turns (left) the colours are a lot whiter/brighter than toeside turns (right). The sharper/more obvious colour changes indicate either sudden turns or spins (eg. the few black rings right in the centre).
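That description reduces to two small mapping functions: heading onto a grayscale value and speed onto a radius. This is my own sketch in Python, not AntiMap's code, and the field layout of the sample rows is an assumption for illustration.

```python
def heading_to_gray(heading_deg):
    """Map a compass heading (0-360 degrees) to a gray level, 0 (black) to 255 (white)."""
    return int(round((heading_deg % 360) / 360 * 255))

def speed_to_radius(speed, max_speed=60.0, min_r=2.0, max_r=20.0):
    """Map speed linearly onto a circle radius: bigger circles = faster."""
    frac = max(0.0, min(speed / max_speed, 1.0))  # clamp to [0, 1]
    return min_r + frac * (max_r - min_r)

# Hypothetical log rows: (latitude, longitude, heading, speed).
rows = [(47.102, 7.748, 10, 4.2), (47.103, 7.749, 200, 38.0)]
for lat, lon, heading, speed in rows:
    print(heading_to_gray(heading), round(speed_to_radius(speed), 1))
```

A heelside turn and a toeside turn put the heading in opposite halves of the compass, which is why the two sides of the run render in visibly different shades.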

Found a great visualization? Tell us about it

This post is part of an ongoing series exploring visualizations. We're always looking for leads, so please drop a line if there's a visualization you think we should know about.

Strata 2012 — The 2012 Strata Conference, being held Feb. 28-March 1 in Santa Clara, Calif., will offer three full days of hands-on data training and information-rich sessions. Strata brings together the people, tools, and technologies you need to make data work.

Save 20% on registration with the code RADAR20


December 09 2011

Top Stories: December 5-9, 2011

Here's a look at the top stories published across O'Reilly sites this week.

The end of social
Mike Loukides: "If you want to tell me what you listen to, I care. But if sharing is nothing more than a social application feed that's constantly updated without your volition, then it's just another form of spam."

Why cloud services are a tempting target for attackers
Jeffrey Carr says before organizations embrace the efficiencies and cost savings of cloud services, they should also closely consider the security repercussions and liabilities attached to the cloud.

White House to open source as open government data platform
The new " in a box" could empower countries to build their own platforms. With this step forward, the prospects are brighter for stimulating economic activity, civic utility and accountability under a global open-government partnership.

Stickers as sensors
Put a GreenGoose sticker on an object, and just like that, you'll have an Internet-connected sensor. In this interview, GreenGoose founder Brian Krejcarek discusses stickers as sensors and the data that can be gathered from everyday activities.

What publishers can learn from Netflix's problems
Writer Tim Carmody examines the recent missteps of Netflix and takes a broad look at how technology shapes the reading experience.

Tools of Change for Publishing, being held February 13-15 in New York, is where the publishing and tech industries converge. Register to attend TOC 2012.

December 06 2011

Stickers as sensors

Rather than ask people to integrate bulky or intrusive sensors into their lives, GreenGoose's upcoming system (pre-orders start on Dec. 15; systems ship on Jan. 1) will instead provide small stickers with built-in Internet-connected sensors. Tip a water bottle and the attached GreenGoose sticker logs it through a small base station that plugs into your wireless router. Feed the dog, go for a walk, clean the house — GreenGoose has designs on all of it. No special skills required.

GreenGoose founder Brian Krejcarek calls his company's sensors "elegantly playful." In the following short interview, Krejcarek explains how the GreenGoose stickers will work and how he hopes people will use the data they acquire from their everyday activities.

What will GreenGoose stickers measure?

Brian Krejcarek: Our sensors measure things you do based on how you interact with an object. This interaction correlates to a signature of forces that our sensors try to match against known patterns that represent a specific behavior around the use of the object. When there's a match, then we send a little wireless message from the sensor to the Internet.

For example, you can put a sensor sticker on a medicine bottle or water bottle. There are certain patterns here — tip, dispense, return upright — that the sensors can pick up.
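One plausible (and entirely hypothetical; GreenGoose hasn't published its matching code) reading of that tip-dispense-return signature is a tiny state machine over a stream of tilt samples:

```python
def detect_pours(angles, tip_threshold=60, min_hold=3):
    """Count pour events in a stream of tilt angles (degrees from vertical).

    A pour is registered when the bottle stays tipped past `tip_threshold`
    for at least `min_hold` consecutive samples and then returns upright.
    """
    pours = 0
    tipped_for = 0  # consecutive samples spent past the threshold
    for angle in angles:
        if angle >= tip_threshold:
            tipped_for += 1
        else:
            if tipped_for >= min_hold:  # tipped long enough to dispense
                pours += 1
            tipped_for = 0
    if tipped_for >= min_hold:  # stream ended while still tipped
        pours += 1
    return pours

# A brief wobble past the threshold doesn't count; a sustained tip does.
print(detect_pours([5, 10, 70, 75, 80, 8, 5]))  # one pour
```

The real sensors presumably match richer accelerometer signatures than a single tilt angle, but the shape of the problem, a known force pattern matched against a live stream, is the same.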

Stickers are great for this because they're simple, flexible and they easily stick to curved things like bottles. Also, existing objects or things around the house can be enabled with sensing capabilities by just sticking on a sticker. We're taking everyday things and making them more fun. We're also lowering barriers to adopting sensors by treating them in a playful way.

We're trying to make it really easy. There are no batteries to recharge, no USB cables, and no software to worry about. The sensors last more than a year, and the range is over 200 feet, so it's completely in the background.

We're finding all kinds of new applications for these sensors. We're going to be launching with sensors that target pets — measuring when you feed your pets or walk the dog, for instance. We've got about 50 or so other sensors in development right now that we will fairly quickly release over time.

The GreenGoose sensor/sticker system.

How are you applying gamification?

Brian Krejcarek: We're keeping the gamification side of this really simple to start. It's all about making people smile and sharing a laugh as they do ordinary things throughout their day. No points, levels, or badges, necessarily. We're first going to roll out a simple application ourselves around these pet sensors, but developers will have immediate access to the API and data they generate. We invite those developers to start layering on their own game mechanics. GreenGoose is a platform play.

How do you think people will use the data your sensors gather?

Brian Krejcarek: We hope that the use of the data fits nicely into applications that help people have more fun with everyday things they do. Think families and kids. Toward that end, we've got a bunch of sensors on the way for toys and doing things around the house.

What lies ahead for GreenGoose?

Brian Krejcarek: Plans going forward include launching the previously mentioned sensor kits around pets, releasing an open API to developers, and launching a sensor around physical movement (exercise) as a little card that can slip into your wallet or purse. We affectionately call it a "get-up-off-your-bum" sensor. No calorie tracking, or graphs and charts. More sensors will be released shortly afterward, too.

This interview was edited and condensed.

Strata 2012 — The 2012 Strata Conference, being held Feb. 28-March 1 in Santa Clara, Calif., will offer three full days of hands-on data training and information-rich sessions. Strata brings together the people, tools, and technologies you need to make data work.

Save 20% on registration with the code RADAR20


October 28 2011

Sensors, data, UI and the future of publishing

Tim O'Reilly spoke this morning in Chicago at Silverchair Strategies 2011, a conference focused on the evolution of the scientific, technical and medical (STM) publishing domains.

Below you'll find a handful of tweets and additional links that zero in on key points from Tim's presentation.

[View the story "Tweets related to Tim O'Reilly's keynote at #SIS2011" on Storify]

TOC NY 2012 — O'Reilly's TOC Conference, being held Feb. 13-15, 2012 in New York City, is where the publishing and tech industries converge. Practitioners and executives from both camps will share what they've learned and join together to navigate publishing's ongoing transformation.

Register to attend TOC 2012


September 30 2011

Top Stories: September 26-30, 2011

Here's a look at the top stories published across O'Reilly sites this week.

From crowdsourcing to crime-sourcing: The rise of distributed criminality
Crowdsourcing began as a way to tap the wisdom of crowds for the betterment of business and science. Crime groups have now repurposed the same tools and techniques for their own variation: "crime-sourcing."

Wearing Android on your sleeve
WIMM Labs believes that wearable technology and at-a-glance moments — things like looking at a thermometer and checking the clock — can create powerful combinations.

Fighting the next mobile war
While you'll likely interact with your smartphone tomorrow in much the same way you interacted with it today, it's quite possible that your smartphone will interact with the world in a very different way. The next mobile war has already begun.

Spoiler alert: The mouse dies. Touch and gesture take center stage
As touch and gesture evolve from novelty to default, we must rethink how we build software, implement hardware, and design interfaces.

Amazon's "Prime" challenger to the iPad
While conventional wisdom says that to compete with the iPad you must emulate Apple's best practices, Mark Sigal argues that Amazon can do just fine by blazing its own trail.

Android Open, being held October 9-11 in San Francisco, is a big-tent meeting ground for app and game developers, carriers, chip manufacturers, content creators, OEMs, researchers, entrepreneurs, VCs, and business leaders. Save 20% on registration with the code AN11RAD.

Wearing Android on your sleeve

WIMM Labs concept watch
The people behind WIMM Labs believe we're experiencing a shift toward information immediacy. To meet that shift, WIMM created a new class of personal devices that delivers information at a glance.

The company's platform for connected, wearable devices is based on the tiny WIMM module, a wristwatch-sized Android-powered computer that has a capacitive touchscreen, Wi-Fi and Bluetooth connectivity, an accelerometer, magnetometer, audible and tactile alerts, and a 14-pin accessory connector.

An early-stage Silicon Valley-based company, WIMM is trying to lead the "micro experience" movement of wearable hardware. The company hopes to license its platform to brands in a variety of industries, including mobile, sports, finance and consumer electronics.

I recently spoke with Ted Ladd, director of developer relations and platform evangelist for WIMM Labs, about the convergence of wearable tech and micro experiences. Ladd will lead a session on "Wearing Android: Developing Apps For Micro Devices" at the upcoming Android Open Conference.

Our interview follows.

What shifts are driving society toward wearable computing?

Ted Ladd: We are now connected to the cloud and to each other all the time. Wearable technology makes that connection more convenient and powerful. You can read a recent SMS or pay for your coffee with a flick of the wrist. Or you can track your workout — reps, calories, heart rate — without trying to strap a smartphone to your chest. These technologies are available today.

The value isn't just from wearable technology; it's also from the proliferation of Bluetooth-enabled sensors — from heart rate to proximity — and web services that can collect and interpret that data. Wearable technology provides the local hub for these sensors, allows the user to see and interact with the data directly via local applications, and then sends the data to the cloud (with the user's permission).

What are "micro experiences"?

Ted Ladd: Micro experiences are subtle glances of immediately relevant information. It's things like looking at your watch to tell the time, looking at the thermometer to see the temperature, or glancing at the speedometer to tell the speed. We have extended this metaphor to wearable computing to help developers design Micro Apps that deliver these glances. Since our platform is built on Android, the challenge when building a WIMM Micro App is not technical. Instead, the challenge is to rethink the features and UX of the app.

Android Open, being held October 9-11 in San Francisco, is a big-tent meeting ground for app and game developers, carriers, chip manufacturers, content creators, OEMs, researchers, entrepreneurs, VCs, and business leaders.

Save 20% on registration with the code AN11RAD

Are there downsides to the information immediacy that wearable computers provide?

Ted Ladd: Just like with a smartphone, perpetual connection to the office, to friends, to sensors, and to the cloud has its drawbacks. We are deluged with information. What we want to do is funnel only the most important information into personal snippets that fit subtly into our lives.

What do you see as the first killer app for the WIMM platform?

Ted Ladd: We are a platform company, so our business model is to help existing companies easily design and launch wearable products — using our module at their core. For a fitness company, the killer app is exercise tracking in real time. For the military, it is body sensing, location tracking, and communication. For a theme park, it may be real-time updates on the wait times for popular rides. For a consumer company aiming at business professionals, it might be a context-specific task list. As a platform provider, we try to enable our licensees to create the killer app for their industries.

Now, speaking only for myself, I love the calendar and the Starbucks Micro Apps. They are more convenient and subtle than the apps on my smartphone.

What were the biggest challenges you faced in developing the WIMM platform?

Ted Ladd: Our device is always on, displaying the time and other possible data on a reflective screen. In this "passive" mode, the device is running on a small microcontroller, which sips power. When I touch the screen, the device launches its full-color capacitive touchscreen and ARM 11 applications processor.

Designing this dual-processor architecture was not trivial. Happily, the user doesn't see the complex hardware and software processes that make this work. The user only sees the time, and then with a touch, a full computer.

What are some of the obstacles to the adoption of wearable computing?

Ted Ladd: A common thought in the press is that wearable technology will not evolve because the smartphone has already captured the mass market. As evidence, the press cites the perceived trend away from wrist watches, especially among the younger, more digitally connected generation. I disagree. Once people see the current and rapidly evolving functionality of wearables, they will see the value for this class of products as a complement to their smartphones. And as major consumer brands add fashion and extra functionality via hardware and software, the value will become immediately apparent.

What kinds of developer tools do you provide?

Ted Ladd: We are launching our Software Developer Kit soon, which is an add-on to the Android Developer Tools, along with an emulator, sample code, design guidelines, videos, and a forum. We also have a Hardware Developer Kit for people who want to design accessories that attach to a WIMM module.

We will also start shipping the WIMM One Developer Preview, which includes the WIMM module, a wrist strap, paddle charger, and cable. This product aims to help developers understand the platform's capabilities.

This interview was edited and condensed.


September 28 2011

Fighting the next mobile war

It's arguable that with the arrival of touch displays, the current form factor for the smartphone is going to be with us for some time to come. You can't get much simpler than a solid block of glass and aluminum with a button. Unless you remove the button. Thinking about it, that's probably a solid suggestion — I'd look for that next.

If things aren't going to change very much on the surface, underneath the glass things might not be much different either. Oh, the devices will be faster, and they'll have more cores, better displays, faster network connections, and the batteries will last longer. But fundamentally, they'll still be the same. The device won't provide you with any new levers on the world. With the exception of NFC, which admittedly is a big exception, there are no new sensory modalities on the horizon that are likely to be integrated into handsets. You'll interact with your smartphone tomorrow in much the same way you interact with it today, at least in the near term.

That said, it's quite possible that your smartphone will interact with the world in a very different way. That's because the next mobile war has already begun, and you've seen nothing yet.

The phoney war

It began quietly, with little noise or fanfare, just over two years ago with Apple's announcement of iOS 3, the External Accessory Framework, and the opportunity for partners in the MFi program to build external hardware that connected directly to the iPhone.

For the first time, it was easy, at least for certain values of easy, to build sensor hardware that connected to a mass-market mobile device. And for the first time, the mobile device had enough computing power and screen real estate to do something interesting with the sensor data.

Except of course, it wasn't easy. While initially the External Accessory Framework was seen as having the potential to open up Apple's platform to a host of external hardware and sensors, little of the innovation people were expecting actually occurred. Much of the blame was laid squarely at the feet of Apple's own MFi program.

There was some headway made using the devices as sensor gateways, mainly in the medical community, which Apple had initially pushed heavily during the launch. But in the end, the framework was used to support a fairly predictable range of audio and video accessories from big-name manufacturers — although more recently there have been a few notable exceptions.

Android Open, being held October 9-11 in San Francisco, is a big-tent meeting ground for app and game developers, carriers, chip manufacturers, content creators, OEMs, researchers, entrepreneurs, VCs, and business leaders.

Save 20% on registration with the code AN11RAD

Opening a second front

Things stayed quiet until earlier this year when Google announced the Android Accessory Development Kit (ADK) at Google I/O in May.

While there was a lot of criticism of Google's approach, it was justifiably hailed as a disruptive move by Google in what had become a fairly stagnant accessories market. Philip Torrone hit the right note when he speculated that this might mean the end of Apple's restrictive MFi program.

I've talked about the Arduino here before. It allows rapid, cheap prototyping for embedded systems. Making Android the default platform for development of novel hardware was a brilliant move by Google. Maybe just a little too brilliant.

The counterattack by Apple

Around the middle of the year, right in the middle of Apple's WWDC conference, I was approached by Redpark and sworn to secrecy. Apple was on the brink of approving a serial cable for iOS that they would let Redpark sell into the hobbyist market.

I'd known about the existence of the cable since the preceding November with the release of the SkyWire telescope control kit. I'd begged Redpark for developer access to their cable, and after signing a thick stack of NDAs, I got my hands on one around mid-December. At the time there seemed little chance of Apple ever approving the cable except for specific use cases where the cable and an accompanying iOS application were approved together as part of the MFi program — exactly as Apple had for SkyWire for telescopes and Cisco had for networking gear.

The news that the cable might soon be generally available to hobbyists was surprising. Despite Apple's beginnings — and the large community of indie developers surrounding its products — the hobbyist market isn't something Apple is known for caring about these days. Quite the opposite: Apple is notorious for keeping its products as closed as possible.

Controlling an Arduino with an iPhone.

Close on the heels of Google's ADK announcement, Apple's sudden enthusiasm was suspiciously timed. Someone high up at Apple had obviously realized the disruptive nature of the ADK, and this was their response, their counterattack. Despite the Android ADK actually being an Arduino, it was now easier to talk to an Arduino from iOS using Redpark's cable than it was to talk to an Arduino from Android.
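None of this requires exotic tooling on either side of the link: the cable presents an ordinary serial port, and the app and the Arduino sketch just need to agree on a byte format. Here is a minimal sketch of such a framing scheme, in Python for readability; the start byte, opcodes, and checksum are entirely hypothetical, not the protocol of any shipping accessory.

```python
# Sketch of a tiny command protocol for a raw serial link, such as the
# one the Redpark cable exposes to an iOS app. Framing is hypothetical.

START = 0x7E  # frame delimiter (invented for this example)

def encode_frame(opcode, payload):
    """Build a frame: start byte, length, opcode, payload, checksum."""
    body = bytes([opcode]) + bytes(payload)
    checksum = (256 - sum(body)) % 256  # body plus checksum sums to 0 mod 256
    return bytes([START, len(body)]) + body + bytes([checksum])

def decode_frame(frame):
    """Validate a frame and return (opcode, payload), or raise on damage."""
    if frame[0] != START:
        raise ValueError("bad start byte")
    length = frame[1]
    body = frame[2:2 + length]
    checksum = frame[2 + length]
    if (sum(body) + checksum) % 256 != 0:
        raise ValueError("checksum mismatch")
    return body[0], bytes(body[1:])

# Hypothetical command 0x01 = "set digital pin", payload = (pin, value)
frame = encode_frame(0x01, [13, 1])
opcode, payload = decode_frame(frame)
```

An Arduino sketch on the other end would read bytes until it sees the start byte, verify the checksum, and act on the opcode; the same encode/decode pair works no matter which side originates the command.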

The long war

The Android ADK board is only now appearing in large numbers as the open hardware community gears up to produce compatible boards cheaper than Google's ruinously expensive initial batch of "official" developer boards. The Redpark cable also faced supply issues, with the initial production run selling out on the Maker Shed within a few days. We're only now seeing it in larger volumes. So, despite appearances, it's still the early days.

Discussing the Redpark cable at OSCON 2011.

I think the availability of both these products is going to prove to be amazingly disruptive in the longer term. After spending two days at the recent World Maker Faire in New York, I know there's a lot of enthusiasm inside the Maker community for that disruption — and Apple may have the edge.

Because of Apple's policy restrictions, you can only develop applications that work with Redpark's cable for your own personal use or for distribution inside an enterprise environment without going through the MFi program. The ease of use and popularity of the iOS platform with developers means there will still be a big uptake, and after a few people struggle through the process, I think that, with time, the cable will spell the end of the MFi program.

Over the next couple of years, we'll be seeing some real innovation in the external accessory product space. Rapid prototyping combined with ease of access to increasingly powerful mobile platforms means that the next mobile war, and the next big thing of a real ubiquitous computing environment, is just around the corner.


September 16 2011

Visualization of the Week: Running for a year

This summer, YesYesNo partnered with Nike+ to visualize running data. YesYesNo created installations for Nike retail stores that displayed one year's worth of runs through the streets of New York, London and Tokyo.

From the YesYesNo project page:

The runs showed tens of thousands of people's runs animating the city and bringing it to life. The software visualizes and follows individual runs, as well as showing the collective energy of all the runners, defining the city by the constantly changing paths of the people running in it.

Here's a screenshot of the visualization:

Nike Plus Visualization

This video shows some of the visualization in action:

The visualization was created with custom designed software, produced in collaboration with DualForces using OpenStreetMap maps.

Found a great visualization? Tell us about it

This post is part of an ongoing series exploring visualizations. We're always looking for leads, so please drop a line if there's a visualization you think we should know about.

Strata Conference New York 2011, being held Sept. 22-23, covers the latest and best tools and technologies for data science — from gathering, cleaning, analyzing, and storing data to communicating data intelligence effectively.

Save 30% on registration with the code STN11RAD

More Visualizations:

July 20 2011

Smartphones and spheres of influence

This is part of an ongoing series suggesting ways to spark mobile disruption. We'll be featuring additional material in the weeks ahead.

Though the mobile space is rapidly expanding, some may argue that the space just isn't disruptive enough. In the spirit of disruption, I've reached out to several people across the tech and publishing industries to answer one question:

If you were going to build an app that fully harnessed mobile's capabilities, what would it do and how would it work?

I recently posed this question to Tyler Bell (@twbell), director of product at Factual. His answer follows.

Tyler Bell: I'm keenly interested in the idea of the phone as a universal telefactor-cum-sensor platform. To effect something you must know its status, so I would note that this would entail its being a universal monitor also. I'm not certain I could choose a single app to develop, but I would be most interested in accelerating the phone's development in several ways:

  • As a universal interface — Moving us away from proprietary devices and a multiplicity of interfaces. I would first include obvious next-steps like house and car security and automation. Generally these devices will go hand-in-hand with monitoring tasks — energy consumption, charge levels, and security status are all base-level statuses that can be monitored and effected from a single device.
  • As a sensor suite — Phones now are packed with devices that can create data from the world around us. It gets most interesting when their use deviates from their original intention. For example, the camera has migrated from a novelty to become an input device, the microphone and accelerometer — my favorite sensor; everyone has one, surely — are used to determine proximity, and the Wi-Fi chip is used to aid in location determination. I'd like to see the phone continue on a similar trajectory, especially with regard to personal health monitoring.
  • As a swarm — Phones are still built around one-to-one thinking. Sensors and collective inputs are more valuable when part of a massive collective. I would expect that this hypothetical app would ensure that my phone worked together with millions of others, reporting data on civic infrastructure, the weather, traffic, and other aspects of our shared existence.
  • As an agent — The phone remains a synchronous device, though push-based mechanisms have begun to move us away from this as the norm. I would expect my preferred app to become more autonomous, "spinning up" agents to satisfy my requests and reporting back when complete.

Phones allow us to expand our influence to other things, people, and places. Any app that facilitates and enriches such interaction can only be a good thing.
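Bell's "swarm" point reduces, in practice, to aggregating many noisy point reports into one shared picture. A minimal sketch, with invented readings, that bins phone reports into lat/lon grid cells and averages them:

```python
from collections import defaultdict

def aggregate_reports(reports, cell_size=0.01):
    """Average crowd-sourced readings per lat/lon grid cell.

    reports: iterable of (lat, lon, value) tuples from individual phones.
    Returns {(cell_lat, cell_lon): mean value for that cell}.
    """
    cells = defaultdict(list)
    for lat, lon, value in reports:
        key = (round(lat / cell_size), round(lon / cell_size))
        cells[key].append(value)
    return {key: sum(vals) / len(vals) for key, vals in cells.items()}

# Hypothetical noise-level reports (dB) from phones around two city blocks
reports = [
    (40.7128, -74.0060, 62), (40.7129, -74.0061, 66),  # same cell
    (40.7500, -73.9900, 48),                            # a quieter cell
]
noise_map = aggregate_reports(reports)
```

The same shape of computation works for traffic speed, signal strength, or weather: each phone contributes one small reading, and the collective map is what carries the value.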

This interview was edited and condensed.

Android Open, being held October 9-11 in San Francisco, is a big-tent meeting ground for app and game developers, carriers, chip manufacturers, content creators, OEMs, researchers, entrepreneurs, VCs, and business leaders.

Save 20% on registration with the code AN11RAD


  • The next, next big thing
  • Toward a local syzygy: aligning deals, check-ins and places
  • Location data could let retailers entice customers in new ways
  • More from the Mobile Disruption Series

    June 22 2011

    The smart grid data deluge

    The smart grid is an information revolution for utilities, and the first source of that information is smart meters. A smart meter records consumption of electricity in intervals of an hour or less and communicates that information back to the utility.

    eMeter provides metering data management products to utilities. I talked to Aaron DeYonker, global head of product management at eMeter, about how data will transform the way utilities do business. Our interview follows.

    How much new data do utilities have to process because of smart metering?

    Aaron DeYonker: Currently, most utilities do a standard meter read once a month. With smart meters, utilities have to process data at 15-minute intervals. This is about a 3,000-fold increase in daily data processing for a utility, and it's just the first wave of the data deluge. The second wave will include granular data from smart appliances, electric vehicles and other metering points throughout the grid. That will exponentially increase the amount of data being generated.
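That rough factor is easy to sanity-check: 15-minute interval reads produce 96 readings per meter per day, against one reading spread over a roughly 30-day billing cycle.

```python
# Back-of-the-envelope check of the "about 3,000-fold" figure:
# 15-minute interval reads versus one manual read per month.
reads_per_day_smart = 24 * 60 // 15   # 96 readings per meter per day
reads_per_day_monthly = 1 / 30        # one reading over a ~30-day cycle
increase = reads_per_day_smart / reads_per_day_monthly
print(round(increase))  # about 2,880, i.e. the "3,000-fold" in the interview
```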

    What processes do utilities have to change to deal with the data?

    Aaron DeYonker: Utilities tend to be risk averse and, as a result, may not have utilized technology for internal business processes as much as other industries like telco or banking. Coupled with growing energy demands, utilities need to change a number of business processes beyond billing to handle the amount and type of data they now manage. Outage management, customer service, load research and wholesale electricity market transactions — where utilities buy and sell electricity to each other — are just a few of the impacted areas.

    In the case of power outage management, for example, the traditional business process relied on processing customer calls to detect outage hotspots. Now, smart meters can automatically report that they have lost power. That enables more proactive dispatching of resources to fix problems. However, the data needs to be collected, validated and processed quickly to make resolution times as fast as possible. So utilities now need to develop new business processes to enable this.

    Strata Conference New York 2011, being held Sept. 22-23, covers the latest and best tools and technologies for data science -- from gathering, cleaning, analyzing, and storing data to communicating data intelligence effectively.

    Save 20% on registration with the code STN11RAD

    Beyond billing, how can utilities put metering data to use?

    Aaron DeYonker: The list is endless, but here are a few use cases: customer service, outage management, conservation and efficiency programs, demand response (reducing consumption at peak demand times), demand forecasting, energy "theft" detection, "line-loss" detection, new product offerings (including new rates for electric vehicles and renewables), and peak-load reduction programs.

    Let's take customer service as a specific example. Many utility call centers deal with routine complaints about high bills. Customers can now get notifications that a spike in usage has been detected and the data might even be able to pinpoint the actual culprit within the home, such as an aggressively programmed air conditioner. Alerting the customer before the bill arrives will reduce call volumes and increase customer satisfaction.

    What actionable metrics or alerts can you derive from meter data?

    Aaron DeYonker: Having access to more detailed data opens the door to setting more accurate metrics, both for the utility and the consumer. With the right data, utilities can better diagnose outages as close to real time as possible and allocate resources as quickly and effectively as possible to resolve problems. The meters will send these events up, and the system needs to perform powerful and fast analysis to distinguish true problems from false alarms.
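One simple way to separate true problems from false alarms is to correlate outage reports across meters that share infrastructure. The sketch below assumes, hypothetically, that each meter is tagged with the feeder serving it: one lone report may be a glitch, but several meters on the same feeder reporting within one collection window almost certainly is not.

```python
from collections import Counter

def detect_outages(events, min_meters=3):
    """Flag feeders where several meters report losing power.

    events: iterable of (meter_id, feeder_id) power-loss reports received
    within one collection window. A single report is treated as a possible
    false alarm; min_meters or more on one feeder is a likely real outage.
    """
    per_feeder = Counter(feeder for _meter, feeder in events)
    return {feeder for feeder, n in per_feeder.items() if n >= min_meters}

events = [
    ("m1", "feeder-A"), ("m2", "feeder-A"), ("m3", "feeder-A"),  # real outage
    ("m9", "feeder-B"),                                          # likely glitch
]
outages = detect_outages(events)
```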

    On the business side, many utilities look for ways to reduce the peak demand for energy, which typically occurs in the afternoon and evening during the summer. Programs that have cheaper night-time rates are now possible because metering data is gathered at regular intervals. Measuring the effect of these rates on customer behavior is critical in lobbying regulatory bodies to approve these programs for broader roll out.

    In addition, consumers are empowered to regulate their own energy use. That helps reduce unnecessary energy usage and generation. In some jurisdictions, the metering data will in fact eliminate the need for new nuclear power plants based on the expected efficiencies derived from smarter pricing.

    How do you see utilities using metering data in the future?

    Aaron DeYonker: At the very least, flattening out the daily demand curve will have incalculable benefits on our overall energy infrastructure as demand increases. The use of metering data enables a limitless potential for new products and services for utilities and their customers. Armed with a rich dataset on par with that of the financial services and telco industries, utility companies can model, forecast and prototype with much more power and precision. It's impossible to envision the extent to which this data is applied to future innovations, but it will be extensive and world changing.

    This interview was edited and condensed.

    Photo: Seoul by night by Koshyk, on Flickr


    March 14 2011

    Wireless sensor networks can see and shape the world

    Wireless sensor networking technology has moved from hobby circles to interconnected consumer electronics, appliances, and devices for the home. In the following interview, "Building Wireless Sensor Networks" author Robert Faludi (@faludi) discusses the practical application of sensor networks and how he thinks they will evolve to meet a variety of needs.

    For the uninitiated, what's the appeal of building wireless sensor networks?

    Robert Faludi: Wireless sensor networks are distributed, so they're good at seeing things you can't. For example, if you place a single soil moisture sensor in a field, you'll know the moisture at that one point. But if you place 100 of them, you'll be able to learn the entire topography of the moisture in the soil, whether it's the same everywhere or different, and whether those differences change in shape over time.

    If you're interested in urban pollution you can measure at ground level, at walking height, and at the height where a child breathes. This reveals the distribution of danger across an entire landscape — and you can see that across a week and across the seasons.
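The payoff of 100 sensors over one is that you can interpolate the whole field between them. A minimal inverse-distance-weighting sketch, with invented sensor positions and readings:

```python
def idw_estimate(x, y, samples, power=2):
    """Estimate a field value at (x, y) by inverse-distance weighting.

    samples: list of (sx, sy, value) from fixed sensors, e.g. soil
    moisture probes scattered across a field (positions are made up).
    Nearer sensors get more say in the estimate.
    """
    num = den = 0.0
    for sx, sy, value in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0:
            return value  # exactly on a sensor: use its reading directly
        w = 1.0 / d2 ** (power / 2)
        num += w * value
        den += w
    return num / den

samples = [(0, 0, 0.30), (10, 0, 0.50), (0, 10, 0.20)]
moisture = idw_estimate(5, 5, samples)  # a point between the three probes
```

Evaluating the estimator over a grid of (x, y) points yields the moisture topography Faludi describes; the same math applies to pollution readings at different heights and locations.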

    Because the networks are two-way, we have the ability to distribute actuator networks of devices that can physically affect the world based upon the special things that sensor networks can see. If you think of a laptop computer as a processor of local data, imagine these sensor/actuator networks as massive distributed devices, kind of like those enormous underground mushrooms in the Pacific Northwest.

    To me though, the scale and economics are secondary to the greater purpose of conveying the stories that this distributed data has to tell us, and improving our lives by employing those lessons in the same distributed way.

    Has interest in wireless sensor networking reached a tipping point?

    Robert Faludi: I think we've crossed the chasm from academic experimentation toward widespread commercial deployment of low-power, low-bandwidth wireless sensor and device networking. Smart energy, home networking and industrial implementations are well out of the gate. At the same time there's an astounding opportunity for proving the value of these networks in the market.

    Imagine the Internet around 1994, with a few high-profile deployments and limitless headroom for individuals to create something and get it online. Device networking is in a similar place. Those with a knack for the technical and a sense of adventure can create the killer applications that will drive an inventive industry for years to come. It's getting to be a pretty exciting time.

    What technical skills or background do you need to build wireless sensor networks?

    Robert Faludi: People who have used a microcontroller platform like Arduino, or who have tried their hand with an accessible development platform like Processing, will certainly have a faster path. Really, if you trust yourself, follow instructions moderately well, and can be patient while you learn, that's all you need. I've taught middle school students the basics and they built the book's first full project in a single 45-minute class. So it's not so much about your skill level as your willingness to try something new.

    How do current sensor networks and technologies need to improve?

    Robert Faludi: These components are already quite cheap, but it would be terrific if they were even cheaper. As they become more widely used, a radio that costs $18 will eventually come down to $1 or even $0.50. Low cost is important if you want to fool around with a network of 100 devices.

    Range, stability, and setup will also improve. The dream is to just turn the radio on and have it figure out which network to join, what its role is, and configure itself. That's possible right now, but you have to do a lot of advanced planning. I'm hoping that new technologies will keep making this process easier, so that in the future nobody needs a book like mine to get the job done.

    What lies ahead for wireless sensor networking?

    Robert Faludi: I'm looking forward to a future where wireless communications are part of everything around us. It's not about the cool factor — though it would certainly be cool — but more about creating behaviors and interactions that can be carried out in an intelligent fashion. I want systems that help free me up from menial tasks like setting a clock or punching instructions into a microwave or endlessly silencing falsely-triggered smoke alarms. Devices are smart right now but they can't talk to each other properly, so all the things they know and detect are kept hidden from other systems. The great systems that people design are going to be the first steps in making that cool and useful world a reality.

    This interview was edited and condensed.

    Building Wireless Sensor Networks: Create distributed sensor systems and intelligent interactive devices using the ZigBee wireless networking protocol and Series 2 XBee radios.


    December 27 2010

    What lies ahead: Data

    Tim O'Reilly recently offered his thoughts and predictions for a variety of topics we cover regularly on Radar. I'll be posting highlights from our conversation throughout the week. -- Mac

    Are companies catching on to the importance of data?

    Tim O'Reilly: For a long time, data was a secret hiding in plain sight. It became clear to me quite a while ago that it was the key to competitive advantage in the Internet era. That was one of the points in my Web 2.0 paper back in 2005. It's pretty clear that everybody knows about it now. There are "chief data scientists" at companies like LinkedIn. Data and algorithms are at the heart of what so many companies are doing, and that's just going to accelerate.

    There's more data every day, and we're going to see creative applications that use data in new ways. Take Square, for example. It's a payment company that's hoping to do some degree-of-risk mitigation via social network analysis. Will that really work? Who knows? Even Google is struggling with the limits of algorithmic curation.

    There was a story in the New York Times recently about a guy who figured out that getting lots of negative comments led to a high PageRank. That raises new questions. Google doesn't like to do manual intervention, so they're saying to themselves, "How can we correct this algorithmically?" [Note: Google responded with an algorithmic solution.]

    I'm not privy to what's happening inside Google's search quality team, but I think there are probably new sources of data that Google could mine to improve results. I've been urging Google to partner with people who have sources of data that aren't scrapeable.

    Along those lines, I think more data cooperation is in the future. There are things multiple companies can accomplish working together that they couldn't do alone. It occurs to me that the era of Google was the era in which people didn't realize how valuable data was. A lot of data was there for the taking. That's not true anymore. There are sources of data that are now guarded.

    Facebook, as an example, is not going to just let a company like Google take its data to improve Google's own results. Facebook has its own uses for that data. Meanwhile, Google, which was allowing Facebook users to extract their Gmail contacts to seed their Facebook friend lists, responded by setting their own limits. That's why I see more data-sharing agreements in the future, and more data licensing.

    The contacts battle between Google and Facebook is an early example of the new calculus of data. I don't know why Google didn't stand up sooner and say, "If you're scraping our data to fill out your network, why can't we do the same in reverse?" That's a really good question. It's one thing if Facebook wants to keep their data private. But it's another thing if they take from the "data commons" and don't give anything back.

    I also anticipate big open data movements. These will be different from the politically or religiously motivated open data movements of the past. The Google/Facebook conflict is an example of an open data battle that's not motivated by religion or principle. It's motivated by utility.

    Strata: Making Data Work, being held Feb. 1-3, 2011 in Santa Clara, Calif., will focus on the business and practice of data. The conference will provide three days of training, breakout sessions, and plenary discussions -- along with an Executive Summit, a Sponsor Pavilion, and other events showcasing the new data ecosystem.

    Save 30% off registration with the code STR11RAD

    How will an influx of data change business analytics?

    Tim O'Reilly: Jeff Hawkins says the brain is a prediction engine. The reason you stumble if a step isn't where you expect it to be is because your brain has performed a prediction and you're acting on that prediction.

    Online services are becoming intelligent in similar ways. For example, Google's original competitive advantage in advertising came from their ability to predict which ads were the most likely to be clicked on. People don't really grasp the significance of that. Google had a better prediction engine, which means they were smarter. Having a better prediction engine is literally, in some sense, the definition of being smarter. You have a better map of what's true than the next guy.

    The old prediction engine was built on business intelligence; analytics and reports that people study. The new prediction engine is reflex. It's autonomic. The new engine is at work when Google is running a real-time auction, figuring out which ad is going to appear and which one is going to give them the most money. The engine is present when someone on Wall Street is building real-time bid/ask algorithms to identify who they're going to sell shares to. These examples are built on predictive analytics that are managed automatically by a machine, not by a person studying a report.
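The ad-auction case makes the "reflex" point concrete: the machine ranks each candidate ad by expected revenue, the bid times the predicted click probability, with no report and no human in the loop. A toy sketch with invented numbers:

```python
def pick_ad(candidates):
    """Choose the ad with the highest expected revenue per impression.

    candidates: list of (ad_id, bid, predicted_ctr). The predicted CTR is
    the output of the prediction engine; a better engine earns more from
    the very same inventory, which is the competitive advantage described.
    """
    return max(candidates, key=lambda c: c[1] * c[2])

ads = [
    ("ad-1", 2.00, 0.010),  # expected value 0.020
    ("ad-2", 0.50, 0.060),  # expected value 0.030: wins despite the low bid
    ("ad-3", 5.00, 0.004),  # expected value 0.020
]
winner = pick_ad(ads)
```

Real auctions layer on second-price rules and quality scores, but the core decision is this one multiplication, executed millions of times a second.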

    Predictive analytics is an area that's worth thinking about in the years ahead: how it works, how you become proficient at it, how it can transform fields, and how it might conflict with existing business models.

    Healthcare offers an example of the potential conflicts. There is no question in my mind that there's a huge opportunity for predictive analytics in healthcare and in the promise of personalized medicine. Certain therapies work better than others, and those conclusions are in the data, but we don't reimburse based on what works. Imagine if Medicare worked like Google. They would say, "You can use any medicine you want, but we're going to reimburse at the rate of the lowest one." Pretty soon, the doctors would be using the drugs that cost the least. That's opposed to what we have now, where doctors get drugs pushed on them by drug companies. Doctors end up recommending particular therapies that the data says don't work any better, but cost three times as much. The business models of pharmaceutical companies are all dependent on this market aberration. If we moved to a predictive analytics regime, we would actually cut a lot of cost and make the system work better. But how do you get there?

    Predictive analytics will chip away at new areas and create opportunities. A great example is a company we had on stage at Gov 2.0 Summit, PASSUR Aerospace, which provides predictive analytics for airline on-time arrivals. For 10 years or so, they've been tracking every commercial airline flight in the U.S. and correlating it with data like weather and other events. They're better than the airlines or the FAA at predicting when the planes will actually arrive. A number of airlines have hired them to help manage expectations about flight arrivals.

    Mobile sensors have come up in many of your recent talks. Why are sensors important?

    Tim O'Reilly: Recently I was talking with Bryce Roberts about how sensors connect a bunch of OATV's investments. Path Intelligence is using cell phone check-ins to count people in shopping malls and other locations. Foursquare uses sensors to check you in. RunKeeper, which tracks when you run, is another sensor-based application.

    The idea that the smart phone is a mobile sensor platform is absolutely central to my thinking about the future. And it should be central to everyone's thinking, in my opinion, because the way that we learn to use the sensors in our phones and other devices is going to be one of the areas where breakthroughs will happen.

    (Note: Tim will share more thoughts on mobile sensors and the opportunities they create in a post coming later this week.)

    Next in this series: What lies ahead in publishing
    (Coming Dec. 28)


    December 18 2010

    Strata Gems: A sense of self

    We're publishing a new Strata Gem each day all the way through to December 24. Yesterday's Gem: Clojure is a language for data.

    The data revolution isn't just about big data. The smallest data can be the most important to us, and nothing more so than tracking our own activities and fitness. While standalone pedometers and other tracking devices are nothing new, today's devices are network-connected and social.

    Android phones and iPhones are fitted with accelerometers, and work well as pedometers and activity monitors. RunKeeper is one application that takes advantage of this, along with GPS tracking, to log runs and rides. FitFu takes things a step further, mixing monitoring with fitness instruction and social interaction.
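Turning raw accelerometer output into a step count needs little more than peak detection on the acceleration magnitude. A toy sketch with synthetic samples; a real pedometer app would filter noise and debounce far more carefully:

```python
def count_steps(samples, threshold=11.0):
    """Count steps as upward crossings of an acceleration threshold.

    samples: accelerometer magnitudes in m/s^2. At rest the magnitude sits
    near gravity (~9.8); each footfall shows up as a spike above it. The
    threshold and the data below are invented for illustration.
    """
    steps = 0
    above = False
    for a in samples:
        if a > threshold and not above:
            steps += 1        # rising edge: one footfall
            above = True
        elif a <= threshold:
            above = False     # back below threshold, ready for the next spike
    return steps

# Three simulated footfalls on top of a resting ~9.8 m/s^2 baseline
samples = [9.8, 9.9, 12.5, 13.0, 10.0, 9.7, 12.8, 9.9, 9.8, 12.2, 9.6]
steps = count_steps(samples)
```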

    Phones, however ubiquitous, are still awkward to use for full-time fitness tracking. With a much smaller form factor, the Fitbit is a sensor you clip to your clothes. Throughout the day it records your movement, and at night it can sense whether you wake. With a long battery life and wireless syncing, it's the least intrusive device currently available for measuring your activity.

    Fitbit data
    An extract from the author's Fitbit data

    Fitbit is working on delivering an API for access to your own data, but in the meantime there's an unofficial API available.

    Withings produces a wi-fi-enabled scale that records weight and body mass index, uploading the data to its website and making it available for tracking on the web or a smartphone.

    The next step for these services is to move toward APIs and interoperation. Right now, Fitbit requires you to manually enter your own weight, and diet plans such as WeightWatchers aren't able to import your weight or activity from the other services.

    For much more on recording and analyzing your own data, check out Quantified Self.

    July 30 2010

    Augmented reality as etiquette coach

    Identifying local landmarks and uncovering hidden coupons are fun augmented reality applications, but "Programming iPhone Sensors" author Alasdair Allan has a loftier AR goal.

    "I'm terrible with faces and names," he said during a recent interview at OSCON. "So, I want those little glasses where you see someone and it's like: 'This is Gary. You met him in 2005. His wife is called Mary. He's got three kids. His birthday is ...'. That sort of thing. That's my ultimate goal."

    Allan's ideal is based on facial recognition, which is a step above facial detection. But you can't have identification without detection, and detection is something we're close to seeing in real-time. Allan himself successfully built a real-time face detection demo on the iPhone 3GS. The iPhone 4's improved hardware makes the same functionality easier to implement. (Not trivial ... just easier.)
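The gap between the two is worth spelling out: detection finds a face in the frame, while recognition matches it against people you already know. Once a detector has reduced a face to an embedding vector, recognition is a nearest-neighbor lookup, sketched here with made-up 3-dimensional embeddings and an invented contacts list:

```python
def euclidean(a, b):
    """Distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def recognize(embedding, known, max_distance=0.5):
    """Return the closest known person, or None if nobody is close enough.

    known: {name: embedding} built from photos you've already labeled.
    The vectors here are toy stand-ins for a real detector's output, and
    real embeddings have hundreds of dimensions, not three.
    """
    name, dist = min(((n, euclidean(embedding, e)) for n, e in known.items()),
                     key=lambda pair: pair[1])
    return name if dist <= max_distance else None

known = {"Gary": (0.1, 0.9, 0.3), "Mary": (0.8, 0.2, 0.5)}
who = recognize((0.15, 0.85, 0.32), known)  # close to Gary's vector
stranger = recognize((0.5, 0.5, 0.9), known)  # near nobody
```

The hard engineering is in producing good embeddings in real time on a phone; the lookup itself, even against a large contacts database, is the easy half.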

    Allan touched on a number of additional topics during our OSCON chat, including:

    • How (theoretically) a geolocation database -- like SimpleGeo -- could be matched up with a cloud-based facial recognition database.
    • How iPhoto's Faces tool could influence FaceTime and other real-time video applications.
    • A brief rundown on the types of sensors commonly found in many smartphones ... and why the introduction of gyroscopes in Android phones is now a near inevitability.

    The full interview is available in the following video:


    April 23 2010

    Four short links: 23 April 2010

    1. Google's Insane Number of Servers Visualized (Gizmodo) -- sometimes you do just have to see it to comprehend it.
    2. Spreading Critical Behaviours "Virally" (HBR) -- form small groups of peers and get them to exchange best practices. Repeat and watch quality rise. (via Kevin Marks)
    3. Android: Monitoring Sensors in the Background -- tips on how to have programs continuously monitoring the sensors.
    4. HP Designjet 3D Printers -- everyone's hoping HP can do to 3D printer prices what they did to 2D printer prices. (Without doing to 3D printer materials what they did to 2D printer ink.) (via fabbaloo)
