
April 14 2011

3 big challenges in location development

With the goal of indexing the entire web by location, Fwix founder and Where 2.0 speaker Darian Shirazi (@darian314) has had to dig in to a host of location-based development issues. In the following interview, he discusses the biggest challenges in location and how Fwix is addressing them.


What are the most challenging aspects of location development?

Darian Shirazi: There are three really big challenges. The first is probably the least difficult of the three, and that's getting accurate real-time information around locations. Crawling the web in real-time is difficult, especially if you're analyzing millions of pieces of data. It's even difficult for a company like Google just because there's so much data out there. And in local, it's even more important for crawling to be done in real-time, because you need to know what's happening near you right now. That requires a distributed crawling system, and that's tough to build.

The second problem, the most difficult we've had to solve, is entity extraction. That's the process of taking a news article and figuring out what locations it mentions or what locations it's about. If you see an article that mentions five of the best restaurants in the Mission District, being able to analyze that content and note, for example, that "Hog 'N Rocks" is a restaurant on 19th and Mission, is really tough. That requires us to linguistically understand an entity and what is a proper noun and what isn't. Then you get into all of these edge conditions where a restaurant like Hog 'N Rocks might be called "Hogs & Rocks" or "Hogs and Rocks" or "H 'N Rocks." You want to catch those entities to be able to say, "This article is about these restaurants and these lat/longs."
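To make the alias problem concrete, here is a minimal Python sketch of how spelling variants of one restaurant can be recognized as the same entity with simple normalization and fuzzy matching. The normalization rules and the 0.8 similarity threshold are illustrative assumptions, not a description of Fwix's actual pipeline, which also relies on linguistic analysis and location context.

```python
# Minimal sketch: grouping spelling variants of one place name.
# The normalization rules and the 0.8 threshold are illustrative
# assumptions, not Fwix's production entity-extraction system.
import re
from difflib import SequenceMatcher

def normalize(name):
    """Lowercase, expand common 'and' variants, strip punctuation."""
    name = name.lower()
    name = re.sub(r"\s*(&|'n'?|n')\s*", " and ", name)
    name = re.sub(r"[^a-z0-9 ]", "", name)
    return " ".join(name.split())

def same_entity(a, b, threshold=0.8):
    """Treat two mentions as the same place if normalized forms are similar."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

variants = ["Hog 'N Rocks", "Hogs & Rocks", "Hogs and Rocks", "H 'N Rocks"]
canonical = "Hog 'N Rocks"
for v in variants:
    print(v, "->", same_entity(canonical, v))   # all True
```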

The third problem we've had to tackle is building a places taxonomy that you can match against. If you use SimpleGeo's API or Google Places' API, you're not going to get the detailed taxonomy required to match the identified entities. You won't be able to get the different spellings of certain restaurants, and you won't necessarily know that, colloquially, "Dom and Vinnie's Pizza Shop" is just called "Dom's." Being able to identify those against the taxonomy is quite difficult and requires structuring the data in a certain way, so that matching against aliases is done quickly.
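Continuing that sketch, a places taxonomy built for fast alias matching can be as simple as an index from every normalized spelling to a canonical record, so resolving an extracted mention is a single lookup. The sample records, aliases, and coordinates below are invented for illustration.

```python
# Minimal sketch of an alias index over a places taxonomy: every known
# spelling and colloquial name points at one canonical record. The
# records below are illustrative assumptions, not real taxonomy data.
def normalize(name):
    """Collapse case, common '&'/apostrophe variants, and extra spaces."""
    return " ".join(name.lower().replace("&", "and").replace("'", "").split())

places = [
    {"name": "Dom and Vinnie's Pizza Shop",
     "aliases": ["Dom's", "Dom & Vinnie's"],
     "lat": 37.7599, "lng": -122.4148},
    {"name": "Hog 'N Rocks",
     "aliases": ["Hogs & Rocks", "Hogs and Rocks", "H 'N Rocks"],
     "lat": 37.7609, "lng": -122.4190},
]

alias_index = {}
for place in places:
    for alias in [place["name"]] + place["aliases"]:
        alias_index[normalize(alias)] = place

def lookup(mention):
    """Resolve an extracted mention to a canonical place record, or None."""
    return alias_index.get(normalize(mention))

print(lookup("Dom's")["name"])   # -> Dom and Vinnie's Pizza Shop
```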

Identifying and extracting entities, like restaurants, is a challenge for location developers.

How are you dealing with those challenges?

Darian Shirazi: We have a bunch of different taggers in the system, and over time we've worked out which ones are good at identifying certain entities. Some taggers are very good at tagging cities, some are better at tagging businesses, and some are really good at telling the difference between a person, place, or thing. So we have these quorum taggers that are applied to the data to determine whether a tag gets detected and whether it's valid.
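A minimal sketch of the quorum idea, with toy taggers standing in for the specialized ones described above; the taggers, the sample text, and the quorum size of two are illustrative assumptions.

```python
# Minimal sketch of quorum tagging: several specialized taggers vote on
# candidate entities, and a tag is accepted only when enough of them agree.
# The toy taggers and quorum size are illustrative assumptions.
from collections import Counter

def city_tagger(text):
    return {"San Francisco"} if "San Francisco" in text else set()

def business_tagger(text):
    return {"Hog 'N Rocks"} if "Hog" in text else set()

def person_place_thing_tagger(text):
    # Pretend this tagger recognizes both entities.
    tags = set()
    if "San Francisco" in text:
        tags.add("San Francisco")
    if "Hog" in text:
        tags.add("Hog 'N Rocks")
    return tags

TAGGERS = [city_tagger, business_tagger, person_place_thing_tagger]

def quorum_tags(text, quorum=2):
    """Keep only tags that at least `quorum` taggers voted for."""
    votes = Counter()
    for tagger in TAGGERS:
        votes.update(tagger(text))
    return {tag for tag, count in votes.items() if count >= quorum}

print(quorum_tags("Hog 'N Rocks is a new restaurant in San Francisco"))
```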

The way you test it is with a system that lets you input hints. Each hint goes into a queue with the other hints we're testing. We run a regression test and then see whether that hint improved the tagging ability or made it worse. At this point, the process is really about moving the accuracy needle a quarter of a percent per week. That's just how this game goes. If you talk to the people at Google or Bing, they'll all say the same thing.
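The hint-testing loop can be sketched in a few lines: apply a candidate hint, re-run a labeled regression set, and keep the hint only if accuracy goes up. The tiny test set and the hint format below are illustrative assumptions, not Fwix's actual tooling.

```python
# Minimal sketch of the hint-regression loop: measure tagging accuracy on a
# labeled set before and after applying a candidate hint, and keep the hint
# only if it helps. The test cases and alias-style hints are invented.
test_set = [
    ("Dinner at Hogs & Rocks last night", {"Hog 'N Rocks"}),
    ("Flying into SFO tomorrow", set()),
]

def accuracy(tagger, cases):
    correct = sum(1 for text, expected in cases if tagger(text) == expected)
    return correct / len(cases)

def make_tagger(aliases):
    def tagger(text):
        return {canon for alias, canon in aliases.items() if alias in text}
    return tagger

baseline_aliases = {"Hog 'N Rocks": "Hog 'N Rocks"}
hint = {"Hogs & Rocks": "Hog 'N Rocks"}          # candidate hint from the queue

before = accuracy(make_tagger(baseline_aliases), test_set)
after = accuracy(make_tagger({**baseline_aliases, **hint}), test_set)
print(f"accuracy {before:.2f} -> {after:.2f}; keep hint: {after > before}")
```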



At Where 2.0 you'll be talking about an "open places database." What is that?


Darian Shirazi: A truly open database is a huge initiative, and something that we're working toward. I can't really give details as to exactly what it's going to be, but we're working with a few partners to come up with an open places database that is actually complete.

We think that an open places database is a lot more than just a list of places — it's a list of places and content, it's a list of places and the reviews associated with those businesses, it's the list of parks and the people that have checked in at those parks, etc. Additionally, an open places database, in our minds, is something that you can contribute to. We want users and developers and everyone to come back to us and say, "Dom and Vinnie's is really just called Dom's." We also want to be able to give information to people in any format. One of the things that we'll be allowing is if you contact us, we'll give you a data dump of our places database. We'll give you a full licensed copy of it, if you want it.

Where 2.0: 2011, being held April 19-21 in Santa Clara, Calif., will explore the intersection of location technologies and trends in software development, business strategies, and marketing.

Save 25% on registration with the code WHR11RAD

How do you see location technology evolving?

Darian Shirazi: Looking toward the future, I think augmented reality is going to be a big deal. I don't mean augmented reality in the sense of a game or in the sense of Yelp's Monocle, which is a small additive to their app to show reviews in the camera view. I think of augmented reality as you are at a location and you want to see the metadata about that location. I believe that when you have a phone or a location-enabled device, you should be able to get a sense for what's going on right there and a sense for the context. That's the key.

This interview was edited and condensed.





April 08 2011

The convergence of biometrics, location and surveillance

Applying biometric matching to location-based surveillance technologies produces both fascinating possibilities and scary scenarios.

I recently spoke with Tactical Information Systems CTO Alex Kilpatrick and Mary Haskett, co-founder and president, about the state of biometrics and what we need to be concerned about as surveillance becomes more prevalent in our society. They'll expand on many of these ideas during a session at the upcoming Where 2.0 Conference.

Our interview follows.


What is biometric matching and how might it be used in future consumer applications?

Alex Kilpatrick: Biometrics is the science that studies things that make an individual unique. Interestingly enough, there are lots of things that make an individual unique, especially when examined closely enough. The most common is fingerprints, but the pattern of ridges on your palms and feet is just as unique. Your iris, the colored part of your eye, is extremely easy to read and unique, even among twins. The shape of your ear, the way you walk, the size of your hands, your smell, the way you talk, and of course your face are all unique.

Biometric matching comes in two forms: verification and identification. Verification is when I come to a sensor and say "I am Alex Kilpatrick" and the sensor verifies that one of my biometrics, perhaps my face, matches the biometric on record. This type of matching is relatively easy. Identification is when I present a biometric to a system and ask it "Who is this?" That is a much harder problem, especially when there are hundreds of millions of records. The FBI deals with this problem every day, taking unknown fingerprints from crime scenes and looking for a match in their database containing millions of records.
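A minimal sketch of the distinction Kilpatrick draws, with templates reduced to toy feature vectors and a cosine-similarity score; real biometric systems use far richer representations and carefully calibrated thresholds, so the numbers here are purely illustrative.

```python
# Verification (1:1) vs. identification (1:N), sketched with toy feature
# vectors and cosine similarity. The gallery, probe, and threshold are
# illustrative assumptions, not a real biometric matcher.
import math

def similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

gallery = {
    "alex": [0.9, 0.1, 0.4],
    "mary": [0.2, 0.8, 0.5],
}

def verify(claimed_id, probe, threshold=0.95):
    """1:1 matching: does the probe match the template on record?"""
    return similarity(gallery[claimed_id], probe) >= threshold

def identify(probe, threshold=0.95):
    """1:N matching: who, if anyone, does the probe belong to?"""
    best_id = max(gallery, key=lambda person: similarity(gallery[person], probe))
    return best_id if similarity(gallery[best_id], probe) >= threshold else None

probe = [0.88, 0.12, 0.41]
print(verify("alex", probe))   # True: claimed identity confirmed
print(identify(probe))         # "alex": found by searching the whole gallery
```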

Like most technologies, biometrics can be used for good or evil. In consumer applications, it can be used to help prevent identity theft by providing strong authentication for financial transactions. It can be used for convenience — to allow you to log in to a computer, or gain entry to your health club without having to remember a password or carry an ID. Disney World uses biometrics to prevent season pass holders from sharing their passes with other people. It can be used to identify people who can't identify themselves, such as Alzheimer's patients, autistic children, very young children, people in accidents, etc.

Mary Haskett: I think the entire industry is holding its breath and waiting to see what's going to happen in the consumer space.

The potential is enormous and current methods, like a card or code that you have to show or recite, are actually quite crude. But everyone is wondering if the consumer will accept it. Are they willing to use their fingerprints, face image, or iris image in these ways?

Where 2.0: 2011, being held April 19-21 in Santa Clara, Calif., will explore the intersection of location technologies and trends in software development, business strategies, and marketing.

Save 25% on registration with the code WHR11RAD



How are biometrics being used in surveillance technologies?



Alex Kilpatrick: In terms of "surveillance society" activities, there are only two players: face and iris. There are technologies such as gait — the way you walk — that may theoretically be used to monitor people, but these are not discriminating enough to uniquely identify individuals. For a biometric technology to be effective for covert surveillance, it has to be able to detect you from a distance without your knowledge. Currently, face matching is the only biometric technology that supports covert surveillance, meaning that you can be tracked without your consent. However, it is not robust in situations involving poor lighting, poor cameras, or poor angles. Face matching is very effective for "passport"-style controlled poses, but its accuracy drops off rapidly with shadows, or with the traditional angles used by current surveillance cameras.

Outside of face, there are iris cameras available now that can detect an individual at distances up to 2-4 feet, but these require the cooperation of the individual — they have to look at the sensor.

The bottom line, though, is that there are not any broadly effective biometric means of covert surveillance available right now. That will change in the near future, though.

The most "promising" technology for covert surveillance is 3D face matching. This uses a three-dimensional model of an individual's face, so it is much more robust when matching partial faces, or faces at low angles from surveillance cameras. These systems are not currently effective for large populations, but research is progressing rapidly in this area. I would guess that this technology will be mainstream in the next 5-10 years.

Iris technology may also advance to become effective without user participation as well. Your iris is effectively a barcode to a computer, and it provides a much better system for matching than a face. For iris to be effective for covert surveillance, better cameras with highly adaptive focusing systems will need to be developed.
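Iris matching is often described in terms of comparing binary "iris codes" by how many bits differ; the toy bit strings below are a simplified illustration of why an iris behaves like a barcode to a computer, not a real matcher.

```python
# Simplified illustration of the "barcode" analogy: iris codes are compared
# as bit strings, and two captures of the same eye should disagree in only a
# small fraction of bits. These toy 20-bit codes are invented; real iris
# codes run to thousands of bits.
def hamming_fraction(code_a, code_b):
    """Fraction of positions where the two codes disagree."""
    differing = sum(a != b for a, b in zip(code_a, code_b))
    return differing / len(code_a)

enrolled  = "10110100101101001011"
same_eye  = "10110100101101011010"   # two bits flipped by sensor noise
other_eye = "01011010010110100101"   # a different person's eye

print(hamming_fraction(enrolled, same_eye))    # small fraction -> match
print(hamming_fraction(enrolled, other_eye))   # large fraction -> non-match
```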

That said, the advancement of biometric technology may ultimately be a moot point because many people carry around a GPS tracker with them all the time in the form of a smartphone. These devices can track your location down to 10 meters if they use GPS, and 300 meters if they use triangulation. That may not be accurate enough to place you at a particular location, but it is enough to build a pretty accurate profile of your activities. The devices can be activated (including GPS) without your consent, if required by law enforcement or other authorities.



What are some of the pros and cons of location-based surveillance tech?


Mary Haskett: If you consider apps like Foursquare and Gowalla to be location-based surveillance technologies, and I do, then you've already experienced some of the benefits. I've run into people who were at the same cafe where I was eating and I was very happy to get a chance to connect with them. I've tried restaurants based on seeing people I know check-in and comment on their experiences. But I will also sometimes choose not to check-in at a restaurant because I don't want to hear my kids complaining "You had sushi for lunch? Why don't you ever take me when you get sushi?"

Alex Kilpatrick: Like so many technologies, the advantages are a double-edged sword. When I travel, I can meet up with people in the cities I am visiting, but I also have a social obligation to meet up with those people, even when I may not feel like being social. I can track my kids on their way home from school, but they can never have the covert thrill of going somewhere they probably shouldn't go. A restaurant can offer me a free dessert for checking in, but I can also be targeted by a deluge of ads based on where I'm walking.

The biggest disadvantage of these technologies is the continual erosion of the expectation of privacy. As people share more, there is less of a societal need to keep things private. This results in people sharing more than they should, as well as a troubling lack of concern about what companies are doing with their data.

Scott Adams, of Dilbert fame, wrote a great blog post about living in "noprivacyville." In this thought experiment, he took privacy to an extreme — would you be willing to have every aspect of your life tracked in exchange for 30% lower cost of living? Insurance companies are already offering discounts if you are willing to have them install a GPS tracker in your car, and accept continual surveillance. Those companies can save a lot of money if they can see how you really drive, and adjust your costs accordingly. Many people would welcome 30% savings, and they will probably feel like they aren't changing their behavior at all. But it's a slippery slope. Once you accept surveillance in one aspect of your life, it's easy to let it slip into another. Before you know it you're paying a 30% premium to have a browser that doesn't track all of your web activity.

The biggest danger to society is not the technologies themselves. Technology can be controlled. The biggest danger is that over time our society will just accept surveillance as part of the cultural landscape.



How can people protect themselves from unwanted surveillance?


Alex Kilpatrick: There are "dazzle" techniques that can be used to degrade face matching — contacts can block iris matching, a rock in your shoe will fool gait sensors, etc. However, all of these approaches are stop-gap measures. They don't address the fundamental root of the problem: Ultimately, as a society we have to decide that we will accept some risk in exchange for living in a truly free society.

The biggest barrier to unwanted surveillance is knowledge and vigilance. Knowledge about where it is occurring, knowledge about what is being done with the information, knowledge about who is behind the surveillance and what their motivations are. Learn to ask "Why do you need this?" and "What if I leave this blank?"

Vigilance is about protecting your privacy; don't share personal information unless there is some real value to sharing it, and only share information with the people who absolutely need the information in order to do their job, or with people you trust. Once you share the information, it's out of your hands. Trusting a company to protect it is not a viable long-term strategy. The only long-term strategy is building a society where people value privacy and are willing to fight for it.

Associated photo (on home and category pages): Points by Vince Alongi, on Flickr





March 31 2011

Why Motorola may move beyond Android

An InformationWeek article claiming Motorola is developing its own web-based mobile OS sparked widespread speculation this week that the handset maker could be moving away from using Google's Android for future smartphones.

Based on an unnamed source familiar with the project and citing a rash of recent hires of ex-Apple and Google OS engineers, InfoWeek reported that Motorola may be developing its own OS because of concerns about Oracle's patent claims against Google.

Although it would seem late to get into this game given the popularity of iOS and Android, it's easy to see why Motorola would want its own OS. Beyond patent concerns, depending on a third party for a core part of your product is always risky. With most other smartphone vendors jumping on the Android bandwagon, it could also become harder for Motorola to differentiate its products. And let's not forget the fragmentation issues around Android, which will likely be a pain point for all Android-based handset makers.

Motorola responded to the reports with a terse and not terribly convincing denial. From PCMag:

When asked to comment on the accuracy of the InformationWeek report, Kira Golin, a spokeswoman for Motorola Mobility, issued what has come to be known as a "non-denial denial." "Motorola Mobility is committed to Android," she said. "That's our statement, and I can't control how you interpret or print it."

Where 2.0: 2011, being held April 19-21 in Santa Clara, Calif., will explore the intersection of location technologies and trends in software development, business strategies, and marketing.

Save 25% on registration with the code WHR11RAD

Just what exactly is Motorola up to by hiring all those OS engineers? There are several theories floating around. Jared Newman points out that Motorola already has a basic web-based OS in its Webtop "application" for its Atrix line of docking devices, and Motorola has indicated that it plans to expand the use of Webtop to other smartphones. Keir Thomas of PC World wonders if the new hires may be working on an English-language version of Wophone, a Linux-based smartphone OS that's already been released in China with support from Motorola. Then there's Azingo, another Linux-based mobile OS that Motorola acquired last year.


There's also speculation that the "web-based OS" noted in the InfoWeek report could refer to a completely cloud-based system where not just the apps are online, but all of a user's data resides in the cloud.

We'll have to see if any of those theories prove correct, but it's not surprising that Motorola would be hedging its bets on Android. Last year, Motorola CEO Sanjay Jha clearly summed up the situation:

Owning your OS is important; provided you have an ecosystem, you have all the services and you have an ability and the scale to execute on keeping that OS at the leading edge.

Motorola isn't the only one hedging here. In addition to Android devices, HTC actively supports Windows Phone 7 and Samsung has launched its own Bada mobile OS. It's unlikely that Motorola is planning a wholesale move away from Android anytime soon, but like other smartphone vendors, they likely don't want all of their OS eggs in one basket.





March 30 2011

On a small screen, user experience is everything

Until someone comes up with a way to mind meld with mobile devices, the size limitations will largely define the overall experience. That's why a focus on interaction — not just look and feel — is an important part of any mobile project.

In a recent interview, Madhava Enros, mobile user experience lead at Mozilla and a speaker at the Web 2.0 Expo 2011 in San Francisco, discussed three mobile applications that he believes handle user experience quite well.

One that is very dear to my heart — at Mozilla we're really into the whole open web app concept using web technologies — is the Gmail web app for the iPhone. It's arguably a better mail client than the native one on the iPhone, and they're using a lot of really cool open web technologies to do it.

Another that I really like is the Kindle. I love the hardware itself, but Amazon really seems to have understood that mobile usage is about a constellation of devices. It's not just about the one phone you have. It's being able to read at home on your ereader, but then read on your Android phone when you're on the train, or pick up your iPad when you're elsewhere. That kind of consistency of getting at your stuff across a bunch of devices is a really great insight.

And there's Flipboard, where they understand how people are using some of these emerging technologies, like Twitter, as a protocol more than a service. It's the insight that people aren't using Twitter as an email replacement — though some people are — but as a way of reading a newspaper.

Where 2.0: 2011, being held April 19-21 in Santa Clara, Calif., will explore the intersection of location technologies and trends in software development, business strategies, and marketing.

Save 25% on registration with the code WHR11RAD

Enros also talked about the trend toward voice activation technology. While typing on mobile devices likely won't prevail in the long term — he said no mobile device does it really well and he described the spectrum as "OK to terrible" — he also doesn't think talking to devices will be the default:

You just have to watch that person on the bus who's yelling into their cell phone to know that talking to a device is not always the answer. Sometimes you need a little bit — or should have a little bit — of privacy.

For more of Enros' thoughts on mobile design and interaction, check out the full interview in the following video:





March 16 2011

4G is a moving target

In the alphabet soup of telecom acronyms, 4G (for "fourth generation") seems like one everyone should be able to agree on. Unfortunately it's turning into a vague and ill-defined marketing term.

The "generations" in cellular standards refer to dramatic shifts in mobile technology infrastructure that are generally not backwards-compatible, use new frequency bands, have significantly increased data rates, and have occurred in roughly 10-year increments. First-generation mobile networks were the original analog cellular technology deployed in the early '80s, second-generation refers to the transition to digital networks in the early '90s, and third-generation was the circa-2001 shift to spread-spectrum transmission that was considered viable for multimedia applications.

So far so good, but the leap to fourth generation is getting confusing. The International Telecommunication Union Radiocommunication sector (ITU-R) is the body that manages and develops standards for radio-frequency spectrum, and as such is the most official arbiter of concepts like "4G." But the ITU can't make up its mind.

Originally the ITU-R defined 4G as technologies that could support downstream data rates of 100 Mbps in high-mobility situations and 1 Gbps in limited mobility scenarios. The next versions of both LTE and WiMax networks (LTE-Advanced and WirelessMAN-Advanced/802.16m) promise these kinds of speeds, but neither is expected to be available commercially until 2014 at the earliest. However "4G" has been widely used by vendors to label other technologies that don't meet the ITU-R definition, including the original versions of LTE and WiMax, and that's where things start getting messy.

Apparently bowing to pressure from carriers on this topic, the ITU-R issued a statement last December backing down from the requirements of its original definition of 4G. Stephen Lawson wrote in Network World that the original ITU-R definition caused chaos by stepping on the vendors who had been already using the term:

After setting off a marketing free-for-all by effectively declaring that only future versions of LTE and WiMax will be 4G, the International Telecommunication Union appears to have opened its doors and let the party come inside.

Where 2.0: 2011, being held April 19-21 in Santa Clara, Calif., will explore the intersection of location technologies and trends in software development, business strategies, and marketing.

Save 25% on registration with the code WHR11RAD


But loosening the definition of what can be called 4G hasn't pleased everyone and has added to the sense of confusion over what the term really means. Besides being used for what many consider 3G technologies like LTE and WiMax, T-Mobile has also been referring to its HSPA+ technology, which can support up to 42 Mbps downstream, as 4G.


Sascha Segan writes that he wishes there were some "4G police" to enforce the matter in a scathing PCMag article about AT&T's slippery usage of the term. PCMag recently tested some of AT&T's supposedly 4G devices and found them underperforming their 3G counterparts.

AT&T has reached a new low, though, by delivering "4G" devices that are actually slower than the carrier's own 3G devices. Yes, you read it correctly: for AT&T, 4G is a step backwards ... AT&T recently confirmed it is crippling the upload speeds on its two 4G phones, the Motorola Atrix and HTC Inspire.

AT&T has responded that they're still in testing mode on these higher-bandwidth capabilities and will be enabling advanced features at some point in the future. But they don't seem to have any issue calling them "4G" today. Not all of their customers are happy about this. Zack Nebbaki has launched an online petition that has more than 900 signatures, asking AT&T to explain (and presumably stop) the bandwidth-capping they appear to be doing to the Motorola Atrix 4G phones.

Bandwidth-capping seems to be becoming more widespread. Clearwire is currently being sued for throttling the download speeds of its WiMax offering while advertising "no limits on data usage." (And the trend isn't just restricted to mobile networks: AT&T recently announced that it will institute data-capping for its DSL and U-Verse services.)

Regardless of the terms the industry uses (or abuses) or whatever bandwidth-limiting practices are put in place, it's the users' perceptions that will determine the success of these future technologies. That's what at least one mobile carrier says it's focusing on. When asked by Network World about the controversy around their use of the term 4G, T-Mobile's senior director of engineering Mark McDiarmid said, "For us, 4G is really about the consumer experience."

March 15 2011

Geolocated images reveal a place's visual identity

Cartagr.am screenshot

The pictures of a place inform you about that location. They provide context. Bloom's brand-new Cartagr.am uses Instagram's new API to do just that. Cartagr.am creates a map out of geolocated images. If you squint you can see the six populated continents above.

Like any good map-based app, Cartagr.am lets you explore at a local level. As you zoom in to an area new photos appear and sizes shift. Unsurprisingly, Austin looks a lot different from Seattle. The images and their relative sizes are selected based on interestingness.

Cartagr.am's platform shows the Stamen roots of the founders. From their blog we learn that it:

... was written using ModestMap.js for the tile mapping and SimpleGeo for the location services and the labels are the Acetate labels from FortiusOne and Stamen.

The fact that an app this rich is all in JavaScript is just amazing. ModestMap.js allows for broader mobile browser support than a Flash-based mapping framework, and it supports multi-touch for a potential iPad version.

Bloom is a not-yet-funded startup headed up by technology veterans. The co-founders are Ben Cerveny (Frog Design, Ludicorp, and others), Tom Carden (Stamen Design), Robert Hodgin (Barbarian Group — see some of his beautiful work at Flight 404), and Jesper Andersen (Trulia).

They recognize that we are all drowning in a sea of data. Their goal is to make interacting with that data a beautiful and pleasurable experience. The tools they'll create will (hopefully) be accessible to anyone, regardless of technical skill level.

Cartagr.am is not the only geo-related Instagram app. Instabam will show you photos near you (via instagramers.com).

Cartagr.am is a great visualization tool and it captures the diversity of Instagram photos across the world. However, it's missing the activity of Instagram, which is clocking ~6 photos per second. I want to see photos appear and change size as their interestingness score changes.


Tom Carden is speaking at Web 2.0 Expo SF in a session entitled "Data Visualization for Web Designers: You Already Know How to Do This." Save 20% on Web 2.0 Expo registration with the code WEBSF11RAD.

Where 2.0 — April 19-21 in Santa Clara, Calif. — will feature lots of related sessions, including Raffi Krikorian's workshop on "Realtime Geo Data." Save 25% on Where 2.0 registration with the code WHR11RAD.

March 14 2011

Are we too reliant on GPS?

The availability and ease of use of GPS have made it the primary choice for many location-based services, but a recent report from the Royal Academy of Engineering in London raises serious concerns about our reliance on GPS and its fragility.

While most of us associate GPS-based navigation systems with our cars, the report's author, Dr. Martyn Thomas, writes that those are the least of our worries:

...the use of GPS signals is now commonplace in data networks, financial systems, shipping and air transport systems, agriculture, railways and emergency services. Safety of life applications are becoming more common. One consequence is that a surprising number of different systems already have GPS as a shared dependency, so a failure of the GPS signal could cause the simultaneous failure of many services that are probably expected to be independent of each other.

The core issue is that GPS technology has been built into many crucial infrastructure applications, from transportation systems to power grids, and in many cases there is no fallback option should the GPS signals suddenly become unavailable. GPS has many advantages, but it is not particularly secure or robust in terms of interference, due to its relatively weak signal strength. GPS hasn't failed in any major way yet, but concerns are growing with recent reports of strong solar storms that have the potential to disrupt GPS satellites, and a troubling, growing black market of GPS-jamming devices.

GPS jamming technology is illegal in the U.S. and many other countries, but the devices are becoming cheap and mass-produced and have the potential to wreak havoc. You can buy a $30 GPS-jamming device online today that plugs into a cigarette lighter socket and blocks GPS signals over tens of square miles. Long-haul truckers looking to mask their locations are one group that's using these devices, but the potential for more nefarious uses is obvious.

Where 2.0: 2011, being held April 19-21 in Santa Clara, Calif., will explore the intersection of location technologies and trends in software development, business strategies, and marketing.

Save 25% on registration with the code WHR11RAD

Also worrisome is GPS-spoofing technology, which has been demonstrated by Todd Humphreys at the University of Texas. These systems can send inaccurate and untraceable readings to GPS devices. Considering that some banks and brokerages now use GPS timestamps to validate transactions, using a spoofing device to game financial systems is not out of the question.



It's not just the Royal Academy that's concerned. An in-depth New Scientist article on the issue points out that last November a NASA-appointed executive committee warned that GPS jamming devices could be disastrous if activated in cities. The NASA report doesn't mince words:

The United States is now critically dependent on GPS. For example, cell phone towers, power grid synchronization, new aircraft landing systems, and the future FAA Air Traffic Control System (NEXGEN) cannot function without it. Yet we find increasing incidents of deliberate or inadvertent interference that render GPS inoperable for critical infrastructure operations.



The New Scientist article outlines some fascinating incidents that have already occurred, like a trucker who was disrupting operations at a New Jersey airport twice a day for a month, and naval signal-jamming tests in San Diego that disrupted GPS and took out local cell-phone use, air-traffic control, emergency pagers, harbor traffic management, and even ATMs for two hours.

When you consider the scope of systems that rely on GPS today, there does seem to be reason for concern, but not everyone shares the sense of urgency. Ben Rooney at The Wall St. Journal called the terrorist angle "fanciful" and described Thomas' report as "apocalyptic visions of a cyber-hell."

One possible solution, or backup technology, is eLORAN (Enhanced Long Range Navigation), a land-based radio navigation system that is an enhancement of the original LORAN system used by the U.S. military in the 1950s. eLORAN uses powerful signals that are not as susceptible to jamming as GPS. A group of scientists last week called on the UK government to confirm future funding for eLORAN, and researchers from the GAARDIAN project announced encouraging results for the first ever trial of joint GPS/eLORAN receivers.







March 09 2011

Why location data is a mess, and what can be done about it

Between identifying relevant and accurate data sources, harmonizing data from multiple sources, and finding new ways to store and manipulate that data, location technology can be messy, says SimpleGeo's Chris Hutchins (@hutchins). But there are ways to clean it up. Hutchins explains how in the following interview.


What makes location data messy?

Chris Hutchins: The primary reasons are:

  • The ever-complicated restrictions, licenses, and use rights that come with different datasets — this can include requirements to use a company's map tiles, to share back all derivative works, and sponsored listings or advertisements alongside the data.
  • Conflating records that represent the same location/business/place between multiple datasets is an incredibly arduous process (a minimal matching sketch follows this list).
  • With small datasets, spatial queries are quite simple. However, as datasets grow exponentially in size, indexing that data to enable fast queries becomes difficult.
  • Location is usually an opinion, not a fact. For example, there are very strong views about where neighborhoods start and end.
  • The nature of location-based information requires all technology to handle real-time requests against datasets that are always changing.
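As promised above, here is a minimal sketch of conflating records from two datasets: two rows are treated as the same place when their names are similar and their coordinates fall within a small radius. The sample records, the name-similarity threshold, and the 100-meter radius are illustrative assumptions, not SimpleGeo's actual conflation logic.

```python
# Minimal sketch of place-record conflation across two datasets, using name
# similarity plus haversine distance. Thresholds and sample rows are
# illustrative assumptions.
import math
from difflib import SequenceMatcher

def meters_apart(lat1, lng1, lat2, lng2):
    """Haversine distance between two points, in meters."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin(math.radians(lat2 - lat1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(math.radians(lng2 - lng1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def same_place(rec_a, rec_b, name_threshold=0.8, max_meters=100):
    name_score = SequenceMatcher(None, rec_a["name"].lower(),
                                 rec_b["name"].lower()).ratio()
    distance = meters_apart(rec_a["lat"], rec_a["lng"], rec_b["lat"], rec_b["lng"])
    return name_score >= name_threshold and distance <= max_meters

dataset_a = {"name": "Dom and Vinnie's Pizza Shop", "lat": 37.7599, "lng": -122.4148}
dataset_b = {"name": "Dom and Vinnie's Pizza",      "lat": 37.7600, "lng": -122.4147}
print(same_place(dataset_a, dataset_b))  # True: likely the same listing
```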

What can be done to clean up location data?

Chris Hutchins: Part of cleaning up is understanding the situation. By being aware of the limitations of certain databases or of the restrictions that some datasets require, you can better understand your capabilities.

Specifically related to data, ensuring that your data source is providing clean and up-to-date data means you won't be sending end users to the wrong location or giving them false information. Also, as more companies understand what their core competency is — and what it isn't — they learn to trust other companies to handle the things that require a more niche expertise. Understanding that this technology is new and learning to embrace tools and services in their infancy will certainly give you an edge with location data.

Where 2.0: 2011, being held April 19-21 in Santa Clara, Calif., will explore the intersection of location technologies and trends in software development, business strategies, and marketing.

Save 25% on registration with the code WHR11RAD



What are the most challenging aspects of location-aware development?


Chris Hutchins: The primary challenges we hear about are a lack of fast and accurate tools for storing, manipulating and querying spatial data, and the fact that most data is expensive and comes with restrictive terms of use. Today's geospatial infrastructure platforms are antiquated, so building the back-end infrastructure for applications takes a long time and requires some very niche skills.

How is SimpleGeo Places being used?

Chris Hutchins: SimpleGeo Places is a free database of business listings and points of interest (POI), which is being used by applications to get an up-to-date view of local businesses without having to manage a large and changing spatial database in-house. Most current POI databases have restrictive terms of use and are expensive. We believe that this has impeded innovation in the development of location-aware services and applications, so SimpleGeo provides an amount of usage of our Places data at no cost to developers and it will always be free of restrictive licensing.

What future developments do you see for location technology?

Chris Hutchins: The future of location is context, where apps will be better at giving you relevant information based on real-time information about where you are and what's around you. I'm really looking forward to a world where by knowing where I've been in the past, the things my friends like, the weather, and more, applications will be able to pinpoint where I might be interested in going and what I might be interested in doing, as well as getting me there.

This interview was edited and condensed.





March 08 2011

Location data could let retailers entice customers in new ways

Location-based data is an important part of social networking platforms, smartphone navigation tools, and augmented reality apps, to name just a few areas. Now, as more retailers dive into the app market, maximizing the use of location-based data could maximize sales potential as well.

In a recent post about sales in brick-and-mortar bookstores, Joe Wikert touched on the potential of location-based data:

When I open your app and you've detected I'm in-store, offer me special deals which are only good for the next hour. Make sure all the deals are fully redeemable using only my smartphone app. Don't email me coupons. Push them into the app so I can just flash my iPhone at the checkout counter and be on my way without fumbling through my email inbox.
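A hypothetical sketch of the in-store trigger Wikert describes: check whether the shopper's reported position falls inside a small radius of the store and, if so, push an offer that expires in an hour. The store coordinates, radius, and offer are invented for illustration; a real app would also need permissioned location access and a push channel.

```python
# Hypothetical in-store geofence check that returns a time-limited offer.
# Store location, 75 m radius, and the offer itself are illustrative
# assumptions, not any retailer's actual system.
import math
from datetime import datetime, timedelta

STORE = {"lat": 42.3554, "lng": -71.0605, "radius_m": 75}

def meters_apart(lat1, lng1, lat2, lng2):
    """Haversine distance between two points, in meters."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin(math.radians(lat2 - lat1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(math.radians(lng2 - lng1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def maybe_push_offer(user_lat, user_lng):
    """Return a one-hour coupon if the user is inside the store geofence."""
    if meters_apart(user_lat, user_lng, STORE["lat"], STORE["lng"]) <= STORE["radius_m"]:
        expires = datetime.now() + timedelta(hours=1)
        return {"offer": "20% off one book", "expires": expires.isoformat()}
    return None

print(maybe_push_offer(42.3555, -71.0604))  # inside the store -> offer pushed
```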

As Wikert notes in the post, the merger of location data, sensor technology, and customer information could lead to hyper-targeted retail windows:

Take a page out of Groupon's play book. Use your nifty new app to track how many customers with common interests are currently standing in your stores. Push a message like this to all of them: "You're a history buff but you've never bought this great ebook about FDR. If at least 100 of you commit to buying it in the next 10 minutes we'll give you all a special discount of x%. Stop by the Biography section to browse the book and see why we think it's perfect for you."

Where 2.0: 2011, being held April 19-21 in Santa Clara, Calif., will explore the intersection of location technologies and trends in software development, business strategies, and marketing.

Save 25% on registration with the code WHR11RAD


Such technology isn't far off, if not already here. Apps exist to help users locate each other, find a bar nearby, and even push location-based coupons at consumers. Google recently licensed data from new media advertising and consulting agency The Aura Group to provide location-based movie ads and theater listings, linked through so users can buy tickets.

There's also an app called Shopkick that uses a device's microphone (once the app is launched) to detect a customer entering a store. Customers earn "kickbucks" for visiting a store and then redeem them later for things like gift cards, vouchers and movie tickets. There are issues with ease of use, fraudulent activity, and data acquisition, but Shopkick appears to be headed in the right direction (for retailers ... consumers might have a different perspective about these sorts of applications).

These current tools represent just the first wave of location data applications. In a recent interview here on Radar, RunKeeper CEO Jason Jacobs considered future possibilities:

There are interesting potential applications as well. For example, if the application knows that running shoes should be changed every 500 miles, and it sees a user has logged 450 miles, a coupon could be offered for their next pair of shoes. And wouldn't it be great if that coupon is from a retailer that's three blocks from their house?

The pace of innovation in sensors parallels the pace of innovation in smartphones. But what if there was a way to take location functionality and streaming data capabilities beyond a phone; to shrink them down and lower the cost? The potential applications increase and adoption would increase. A larger community and richer set of aggregate data also creates opportunities to do some really interesting things.

As location technology becomes more powerful, and as the transmission of location data becomes frictionless and fully integrated into applications that weren't location-aware in the past, the ramifications are massive. Companies can be so much more thoughtful about the way they deliver services and person-specific functionality. That could apply to fitness, travel, logistics, shipping — there's so many different applications for this technology.

(Disclosure: OATV is an investor in RunKeeper.)

Photo: Locate, Locate, Locate by Gary Bridgman, on Flickr







March 07 2011

Social media in a time of need

The Red Cross and the Los Angeles Fire Department have been at the forefront of adopting social media in crisis response. That's not entirely by choice, given that news of disasters has consistently broken first on Twitter. The challenge is for the men and women entrusted with coordinating response to identify signals in the noise.

Public expectations for those staff are high, as research released by the Red Cross at the Emergency Social Data Summit last year showed. Nearly half of respondents said they would ask for help on social media, and 3 in 4 would expect help to arrive within the hour. "We've set up an expectation that, because we're present in these spaces, we're listening," said Wendy Harman (@wharman), the Red Cross social media director, at a training conference in Tampa, Fla.

At present, those high expectations don't always match up with the capabilities that first responders possess. That's changing. First responders and crisis managers are using a growing suite of tools for gathering information and sharing crucial messages internally and with the public. Structured social data and geospatial mapping suggest one direction where these tools are evolving in the field. Prototype group-messaging platforms like Loqi.me point to ways they may evolve further this year.

Some tools, such as the Red Cross shelter web app and the Shelter View iOS app, allow the public to access information. "If we're going to ask the public for help in a disaster, we need to give them tools," said Trevor Riggen (@triggen), senior director at the Red Cross.

The Red Cross is working on building better filtering tools and mapping geotagged updates to help improve their situational awareness, said Riggen. By bringing crisis data aggregation, visualization, and analysis into operations, they're pressing the opportunity to see where issues are emerging and introducing more relevance into social streams. "One tweet doesn't tell us to shift," said Riggen. "50 tweets will."
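Riggen's "50 tweets" rule of thumb amounts to thresholding geotagged volume by area. A minimal sketch follows, with invented updates, a coarse lat/long grid, and a threshold of 50 as illustrative assumptions.

```python
# Minimal sketch of "one tweet doesn't tell us to shift, 50 tweets will":
# bucket geotagged updates into coarse grid cells and flag any cell whose
# volume crosses a threshold. Sample data and grid size are invented.
from collections import Counter

def grid_cell(lat, lng, precision=2):
    """Bucket a coordinate into a coarse grid cell (roughly 1 km at 2 decimals)."""
    return (round(lat, precision), round(lng, precision))

def hotspots(updates, threshold=50):
    counts = Counter(grid_cell(lat, lng) for lat, lng in updates)
    return {cell: n for cell, n in counts.items() if n >= threshold}

# 60 simulated updates clustered near one point, plus a lone report elsewhere.
updates = [(27.950 + i * 1e-4, -82.457) for i in range(60)] + [(27.80, -82.60)]
print(hotspots(updates))   # only the dense cell is flagged
```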

The opportunity that more crisis managers are seeing lies in the situational awareness that comes from analyzing massive amounts of social data generated in disasters. In those moments, "the public is a resource, not a liability," said FEMA administrator Craig Fugate (@CraigAtFEMA) last year. "Social media's biggest power, that I see, is to empower the public as a resource."

Where 2.0: 2011, being held April 19-21 in Santa Clara, Calif., will explore the intersection of location technologies and trends in software development, business strategies, and marketing.

Save 25% on registration with the code WHR11RAD



Managing online crisis response at the LAFD


Long before the Red Cross joined Twitter and used it in the Haiti earthquake response, Brian Humphrey (@brianhumphrey) of the Los Angeles Fire Department (LAFD) was already online, listening.

"The biggest gap directly involves response agencies and the Red Cross," said Humphrey, who currently serves as the LAFD's public affairs officer. "Through social media, we're trying to narrow that gap between response and recovery to offer 'real-time relief'."

Humphrey, who added the @LAFD account to Twitter in 2006, may well have been the first emergency responder on the service. In the following video, he talks more about how technology has changed during his career and how the LAFD is using various social platforms.

For instance, Humphrey found a use for Foursquare that goes beyond a simple check-in: real-time sourcing for a given location. If he hears about an issue in the greater Los Angeles area, Humphrey looks on Foursquare to see who's checked in near there and reaches out to them to learn more about the conditions on the ground. Any relevant information is then passed along to operational staff responding to the incident.

In a second interview, below, Humphrey offers some of his best practices for online crisis communication. Currently, his department operates more than 80 social media accounts, many of which are populated using email. The LAFD is also using lightweight audio creation tools, like Cinch, to distribute reports. New social platforms don't replace existing communication infrastructure like the radio or broadcast television — they complement them.

While the tools may change, the goals have not: responders learn what's happening in the community, share relevant information, and protect those in need. What's shifted is that in an age where people are increasingly equipped with camera phones and Internet connections, citizens can become sensors in disasters. As more platforms like the recent geolocation app that connects citizen first responders to heart attack victims come online, keeping those ears on will continue to grow in importance.







March 03 2011

Social data is an oracle waiting for a question

We're still in the stage where access to massive amounts of social data has novelty. That's why companies are pumping out APIs and services are popping up to capture and sort all that information. But over time, as the novelty fades and the toolsets improve, we'll move into a new phase that's defined by the application of social data. Access will be implied. It's what you do with the data that will matter.

Matthew Russell (@ptwobrussell), author of "Mining the Social Web" and a speaker at the upcoming Where 2.0 Conference, has already rounded that corner. In the following interview, Russell discusses the tools and the mindset that can unlock social data's real utility.


How do you define the "social web"?

Matthew Russell: The "social web" is admittedly a notional entity with some blurry boundaries. There isn't a Venn diagram that carves the "social web" out of the overall web fabric. The web is inherently a social fabric, and it's getting more social all the time.

The distinction I make is that some parts of the fabric are much easier to access than others. Naturally, the platforms that expose their data with well-defined APIs will be the ones to receive the most attention and capture the mindshare when someone thinks of the "social web."

In that regard, the social web is more of a heatmap where the hot areas are popular social networking hubs like Twitter, Facebook, and LinkedIn. Blogs, mailing lists, and even source code repositories such as SourceForge and GitHub, however, are certainly part of the social web.

What sorts of questions can social data answer?

Matthew Russell: Here are some concrete examples of questions I asked — and answered — in "Mining the Social Web":

  • What's your potential influence when you tweet?
  • What does Justin Bieber have (or not have) in common with the Tea Party?
  • Where does most of your professional network geographically reside, and how might this impact career decisions?
  • How do you summarize the content of blog posts to quickly get the gist?
  • Which of your friends on Twitter, Facebook, or elsewhere know one another, and how well?

It's not hard at all to ask lots of valuable questions against social web data and answer them with high degrees of certainty. The most popular sources of social data are popular because they're generally platforms that expose the data through well-crafted APIs. The effect is that it's fairly easy to amass the data that you need to answer questions.

With the necessary data in hand to answer your questions, the selection of a programming language, toolkit, and/or framework that makes shaking out the answer as easy as possible is a critical step that shouldn't be taken lightly. The more efficient it is to test your hypotheses, the more time you can spend analyzing your data. Spending sufficient time in analysis engenders the kind of creative freedom needed to produce truly interesting results. This is why organizations like Infochimps and GNIP are filling a critical void.
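As one small illustration of the last question in the list above (which of your friends know one another), here is a sketch using nothing but Python sets; the friend lists are invented, and in practice they would come from a platform's API.

```python
# Minimal sketch of "which of my friends know one another?" using plain set
# lookups. The friend lists are invented for illustration; real data would
# come from a social platform's API.
from itertools import combinations

friends_of = {
    "me":  {"ann", "bob", "cid", "dee"},
    "ann": {"me", "bob", "dee"},
    "bob": {"me", "ann"},
    "cid": {"me"},
    "dee": {"me", "ann"},
}

def mutual_connections(ego):
    """Pairs of ego's friends who also list each other as friends."""
    pairs = []
    for a, b in combinations(sorted(friends_of[ego]), 2):
        if b in friends_of.get(a, set()) and a in friends_of.get(b, set()):
            pairs.append((a, b))
    return pairs

print(mutual_connections("me"))   # [('ann', 'bob'), ('ann', 'dee')]
```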

Where 2.0: 2011, being held April 19-21 in Santa Clara, Calif., will explore the intersection of location technologies and trends in software development, business strategies, and marketing.

Save 25% on registration with the code WHR11RAD



What programming skills or development background do you need to effectively analyze social data?


Matthew Russell: A basic programming background definitely helps, because it allows you to automate so many of the mundane tasks that are involved in getting the data and munging it into a normalized form that's easy to work with. That said, the lack of a programming background should be among the last things that stops you from diving head first into social data analysis. If you're sufficiently motivated and analytical enough to ask interesting questions, there's a very good chance you can pick up an easy language, like Python or Ruby, and learn enough to be dangerous over a weekend. The rest will take care of itself.

Why did you opt to use GitHub to share the example code from the book?

Matthew Russell: GitHub is a fantastic source code management tool, but the most interesting thing about it is that it's a social coding repository. What GitHub allows you to do is share code in such a way that people can clone your code repository. They can make improvements or fork the examples into an entirely new form, and then share those changes with the rest of the world in a very transparent way.

If you look at the project I started on GitHub, you can see exactly who did what with the code, whether I incorporated their changes back into my own repository, whether someone else has done something novel by using an example listing as a template, etc. You end up with a community of people that emerge around common causes, and amazing things start to happen as these people share and communicate about important problems and ways to solve them.

While I of course want people to buy the book, all of the source code is out there for the taking. I hope people put it to good use.







January 26 2011

Mobile in the enterprise changes everything

By now it's clear that mobile is the new global frontier for computing. People on every continent are embracing mobile as the primary method for electronic communications.

Increasingly, many are also using mobile computing for a myriad of day-to-day activities, from purchasing products and services to testing blood insulin levels. Five billion people now use cellphones — about 62 percent of the planet's population — compared to less than two billion who have a personal computer.

Within just a few years more people will access the Internet from a mobile device than from any other technology.

In developing nations the cellphone is a tool of empowerment; it has the power to change economic and political landscapes. In developed nations it is disrupting existing business models and introducing completely new ones. Mobile is enabling us to reinvent everything from healthcare to payment systems.

Smartphones, a subset of the mobile market at a little less than one billion users and growing quickly, have become a domain of hyper-innovation. Fierce competition from big players such as Apple, Google, Microsoft, and Research in Motion (RIM) is driving the rapid delivery of new, innovative capability. The lifespan of a new device or version of an operating system has been compressed. The market appetite continues to grow and there are no signs of fatigue. In fact, the accompanying mobile applications industry is booming. In just over two years, consumers have downloaded 10 billion apps from Apple alone.

Suddenly the PC looks like yesterday, while mobile is today and tomorrow.

It's time to act

For an IT leader, mobile is a game-changer. Unlike many other emerging technologies where an immediate strategy is not a concern, mobile is front and center now to your users and customers. This requires new thinking with regard to how data is accessed and presented, how applications are architected, what kind of technical talent is brought on board, and how companies can meet the increasingly high expectations of users.

As if there wasn't enough pressure already!

There are two audiences for mobile applications and capability: your internal audience and the customers you serve in the marketplace. Both have different needs, but both have expectations that have already been set. The benchmark is not your best internal web-based app; instead, it is the most recent best-in-class mobile app that any one of your users or customers has downloaded. While your internal users might be more forgiving of a less-than-optimal user experience, you're pretty much guaranteed your external users won't be.

Many IT leaders are not yet fully embracing mobile (I'm not talking about email and calendar access here — those are the essential basics). Part of the reluctance to fully come to terms with mobile is simply change fatigue. Remember, the move to the web is still in full swing and many organizations still struggle with core system integration issues. But another part of the issue is that the magnitude of the change ahead is not well understood. A mobile strategy is not the equivalent of making your web applications accessible via a mobile device. In the short-term, that may suffice for some (but barely). In the medium-term a mobile strategy means thinking completely differently about the user experience.

In the world of mobile, IT leaders and business stakeholders must consider how new capability such as geolocation, sensors, near field communications, cameras, voice, and touch can be integrated into functionality. It also means that core issues such as security, device form-factor, and limited screen real-estate must be addressed.

Where 2.0: 2011, being held April 19-21 in Santa Clara, Calif., will explore the intersection of location technologies and trends in software development, business strategies, and marketing.

Save 25% on registration with the code WHR11RAD


Mobile crashed the party

For a time there was a positive trend where the ubiquity of browsers on computers was making application development almost hardware-agnostic. This was a great story and it had a decent run. The proliferation of devices and operating systems in the mobile space is a considerable spoiler. Now, any mobile application worth its salt must have versions — at minimum — for the web, for the iPhone, and for Android (and that's the basics before you consider others such as the BlackBerry, iPad, and Windows Phone). That may mean multiple development and design efforts.

In other words, just when IT leaders were beginning to see some platform stability, everything changed.

Not all industries will need to adopt mobile strategies at the same rate and not all industries will have to deal with providing solutions for their end-users in the near-term. There is no question that if your product or service is business-to-consumer and it already supports a good deal of its business via the web, then this scenario demands an aggressive approach to mobile. While this offers some consolation for everyone else, it's merely temporary.

Every business and every IT leader will need to quickly find the right response to the momentum that is nothing less than a mobile revolution.

At the end of the day, those of us who work with technology do it because of these types of major disruptions. The move to mobile represents yet another technology cycle that we must embrace. These cycles often start and end in different places. Who could have imagined that the web would change so much about our world in the way it has? I think it's fair to say that mobile has the capacity to change the world in ways we cannot even fathom today.

I don't know about you, but that makes me excited about our industry and the future.







November 11 2010

Boston's real-time transit data: "Better than winning the World Series"

What happens when you combine two risk-taking government employees, an active developer community, and a bus schedule? Unlimited amounts of innovation, improved customer service, praise for an embattled government agency, and a model for building a government/citizen developer partnership. Hear how the Massachusetts Department of Transportation learned from TriMet that open is better.

That was the pitch for Laurel Ruma's Ignite Gov talk. Ruma, who works at O'Reilly, is the co-chair of the upcoming Where 2.0 conference, which will focus on open data, civic innovation, and geolocation, along with many other aspects of mobile technology.

"This was the best thing the MBTA had done in its history," said Ruma, exploring the backstory behind the Massachusetts Bay Transit Authority's (MBTA) move to make real-time data available.

The decision to release and support open transit data online has spawned a new ecosystem of mobile applications, many of which are featured at MBTA.com. The addition of real-time data could add even more value to the rider-assistance apps that went online in 2009, like the Mass Transit app that has been making money for SparkFish Creative.

While some diehard Red Sox fans might differ with Ruma on whether real-time transit data is better than breaking the Curse in 2004, even the most hardened New Englander can see the value in knowing when the next T is going to arrive. Or, as cynical Boston mass transit commuters might hasten to add, when it won't.


October 21 2010

Welcome Laurel Ruma to Where 2.0

Where 2.0, now entering its seventh year, will explore the mobile, location, social and mapping space at the Santa Clara Convention Center from April 19-21, 2011. The CFP for Where 2.0 2011 is open until Monday 10/25.

This year I'm welcoming a co-chair to the stage. Laurel Ruma, out of our Cambridge, Mass. office, is going to be working on the program and she'll be on-site with me. She has most recently been working in the Gov 2.0 area. In that role, Laurel works with various groups, including Code for America, the Boston Urban Mechanics Office, and the Massachusetts Department of Transportation, to brainstorm, provide updates on the Gov 2.0 ecosystem, and create a bridge between the technologist/developer and government communities. Laurel was also the co-editor of our book "Open Government."

A lot of the innovation in location has been on the back of government support (GPS and TIGER data are examples). Laurel will bring her contacts and support to the Where program, and she'll also be available to meet with companies on the east coast.

October 14 2010

Where 2.0 2011 call for proposals is open

Yesterday, Google's Marissa Mayer switched jobs from search to location. Google clearly thinks location is one of the next big things, and it's giving the location team more horsepower. As John Battelle wrote: Marissa moving to run location is big.

At the next Where 2.0 (April 19-21, 2011), we'll look at the trends that got Google to rethink its location and mobile strategies. As Battelle notes in his post, "location is a key factor in the future of search, social, commerce, and media." We're going to examine those intersections and hear from the companies and people that are making the future happen.

The Where 2.0 call for proposals is open until October 25. Submit your talk idea now. Below I've included some of the main areas we're expecting to cover this year.

Location

New technologies enable a level of data collection, retrieval, and analysis never previously possible. Every data point, from what songs are most popular on a particular block to the coordinates where you last bought coffee, can now be used to guide strategic business practices.

Mobile

Mapping and mobile technologies are joined at the hip, and developers have recognized that "There's gold in them thar phones!" Companies in the Where 2.0 space are betting smartphone users will share their locations and that they'll make gobs of money with that data, fueling the mobile platform war. As society makes a shift in privacy -- or lack thereof -- consumer expectations for products and services will also adjust accordingly.

Social

Thanks to the smartphone, location has gone social. This allows users to communicate, connect, and share in new ways through powerful applications, notably Facebook Places, a seemingly innocuous feature that brought location-sharing -- and privacy concerns -- to more than 100 million people at once. Companies are also relying on open data sets. Who will be testing the social-location limits next?

Sessions and workshops

Where 2.0 will have sessions and workshops on:

  • HTML5 -- A location app needs a map. The release of the JavaScript mapping frameworks Polymaps and Cartagen proves that modern browsers and HTML5 can duplicate Flash, and that geo apps can be built in the browser (a minimal sketch follows this list). Is this the next killer tech?
  • Data Collections -- Mobile apps are not just about check-ins, they're about users and data. With more users comes more realtime data, and with more data comes the ability to get even more users via better services.
  • Users vs. Features -- Location apps are often so powerful, users don't always understand what's being done with their data. Services have to protect their users. Who is doing it successfully and what are the cautionary tales?
  • Public vs. Private -- Geodata is often a fact. This house is here. This person was there at this time. But just because it's a fact doesn't mean it should be public. Or does it?
  • Ads vs. Subscriptions -- Location apps need to make money and the business model debate continues to rage about the best way to go about it.
  • Interfaces -- Augmented reality is the interface du jour, and there is always a 2D vs. 3D debate.
  • Future of Mapping -- Maps are the heart of Where 2.0. There are new imagery, recording, and collection technologies that will change maps in the coming years.
  • Government & Humanitarian -- Location, mobile, and social technologies are being used to save lives and open governments around the world. Look no further than the use of Ushahidi in Haiti to see a successful example.

April 02 2010

APIs launched at Where 2.0: a pocket guide

Where 2.0 has become a launch pad for new geo products. As a sign of the times, these announcements focus on APIs rather than the usual feature increments or partnership propaganda (we geo folk always prefer the Walk over the Talk). Here's a handy reference list, in no particular order:

Placecast Match API

The free service "simplifies the process of de-duplicating and matching content like business listings, reviews, check-ins and events to their true location"; it is the first of what will certainly be many Local Disambiguation platforms we see that attempt to generate concordance between how Places are called. With this product Placecast appears to be moving out of the ad-world and into the geo platforms space. Details are few as the API is not yet out the door, but you can sign up in advance at the link above.

Skyhook Local Faves

An all-in-one iPhone SDK that brings location to social sharing applications: "in a Local Faves-enabled wine app, users could check-in to a restaurant via the wine they are drinking, giving the app developer the opportunity to broadcast user location" or allow users to "see where around the world their favorite song is playing, or view all songs playing locally". I'm not yet convinced that the overly broad use cases demonstrate anything otherwise unavailable to developers, but this product (and the company's recent launch of SpotRank) shows how Skyhook is positioning itself in the center of the geoweb to become a critical provider of location context. The API comes out 'mid-April'; register in advance here.

Yahoo Concordance

One for the toponymsters: a "geographic Rosetta Stone" that lets you convert between GeoPlanet WOEIDs (Yahoo's open place-based identification namespace) and other geographic naming systems including, most notably, Geonames. A step in the right direction from my geo alma mater, but I would hope that Yahoo is planning to do something very similar with Business Listings if it is to avoid being out-maneuvered entirely in the fast-approaching Social/Local landscape.

SimpleGeo

SimpleGeo is only about a year old but is successfully positioning itself as a dead-easy geo service, and it recently boosted its small team with a few seasoned hires. The nascent API is available via XML and JSON and includes a supported iPhone SDK and PHP PEAR libraries. The functionality is basic, but growing: object-based point storage, proximity lookups, reverse geocoding (in the US), and spatiotemporal headcounts via Skyhook's SpotRank (above). Expect more services to be integrated via their recent deCarta and Quova deals. Use it now for free for up to 1 million calls per month, and then scale up as far as Amazon's S3 will take you.
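
The sketch below shows the store-then-query pattern such a service exposes. The host, endpoint paths, and payload shapes are hypothetical stand-ins, not SimpleGeo's actual API, which requires OAuth-signed requests and defines its own record format.

```typescript
// Hypothetical store-then-query sketch for an object-based point store with proximity lookups.
const BASE = "https://geo-service.example.com"; // placeholder host, not a real endpoint

async function storePoint(layer: string, id: string, lat: number, lon: number): Promise<void> {
  await fetch(`${BASE}/records/${layer}/${id}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ lat, lon, created: Date.now() }),
  });
}

async function nearby(layer: string, lat: number, lon: number, radiusMeters: number): Promise<unknown> {
  const res = await fetch(`${BASE}/records/${layer}/nearby?lat=${lat}&lon=${lon}&radius=${radiusMeters}`);
  return res.json(); // expected: the stored records that fall within the radius
}

// Usage: register a point of interest, then ask for everything within 500 m of a street corner.
async function demo(): Promise<void> {
  await storePoint("cafes", "corner-cafe-1", 37.7825, -122.4064);
  console.log(await nearby("cafes", 37.7828, -122.4058, 500));
}
demo();
```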

Have I missed any? (Let me know.) These products are the litmus test of the geo sector: what we are seeing here is the importance of Business Listings as first-person reference (soon to obviate Yellow Pages entirely), the continued rise of simple-and-deep over shallow-and-feature-rich product approaches, and the "virtuous circle of data," where the use of these products generates value (data about us) that is exposed again through the API. Platform plays like these -- not the next iPhone app -- are what genuinely carry the sector forward.

April 01 2010

Location in the Cloud (Part 1)

I'm a guest blogger this week at the 2010 Where 2.0 conference. I've been working with mobile location services and systems since 2000. In light of the heavy focus on mobile at Where 2.0 this year, Brady Forrest invited me to write a few words and offer insights into two emerging areas of mobile location data access: Wireless Location in the Cloud and Social Location in the Cloud. This post is the first in a two-part series.

Wireless Location in the Cloud 

Access to wireless location data has been a highly coveted network asset since the early days of e9-1-1 in the late 1990s. To support 9-1-1, wireless carriers in the US made large investments to deploy life-saving emergency location infrastructure capable of pinpointing the location of any wireless 9-1-1 call, on any phone, as mandated by the FCC. Today, these systems have been augmented with a separate delivery capability designed for commercial applications and services, and, as with short messaging, those services are now becoming available through cross-carrier aggregators supported by most tier-one wireless carriers, such as AT&T, Sprint, T-Mobile, and Verizon Wireless. Attendees at Where 2.0 today got a glimpse of what's now available.

Cross-Carrier Aggregation is Finally Real 

Veriplace is a new location aggregation service from Wavemarket. The company has a decade-long history of building credibility and trust with North American wireless carriers by supplying them with white-labeled family finder and child locator applications, which have robust privacy management functions built in. With that earned trust and understanding of privacy management in a carrier environment, Wavemarket's new Veriplace service is one of the first location aggregation services to offer proxy access into most tier-one wireless carriers' location systems. Think of Veriplace as a central hub in the cloud with one common web services API that can access the location of every mobile phone or connected machine-to-machine device in the US, supported by an OAuth framework that insulates developers from having to work through each carrier's stringent privacy policies individually.
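
In practice, the pattern is roughly the sketch below: one authorized web call returns a network-derived fix for a phone number, with the OAuth token standing in for the consent the user granted. The host, path, and response shape here are hypothetical; Veriplace's real API defines its own OAuth flow and schema.

```typescript
// Hypothetical aggregator call: exchange a consent token and a phone number for a network fix.
interface NetworkFix {
  latitude: number;
  longitude: number;
  accuracyMeters: number;
  timestamp: string;
}

async function locatePhone(accessToken: string, phoneNumber: string): Promise<NetworkFix> {
  const res = await fetch(
    `https://location-aggregator.example.com/v1/location/${encodeURIComponent(phoneNumber)}`,
    { headers: { Authorization: `Bearer ${accessToken}` } } // token represents the user's consent
  );
  if (!res.ok) {
    // Consent may have been revoked, or the carrier may decline the request.
    throw new Error(`Location request refused: ${res.status}`);
  }
  return res.json();
}
```

The point of the design is that the developer writes this one call and never touches carrier-specific interfaces or privacy paperwork directly.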

Two Concepts Supporting Location beyond the Smartphone 

With their location API, Veriplace introduced two concepts today:
  1. Ubiquity. Veriplace is capable of locating 150M phones of all types on AT&T, Sprint and T-Mobile. Verizon is sure to arrive soon.
  2. Cloud-based. Since the service uses the wireless network, it's possible to locate mobile devices via a web service in the cloud, and there's nothing to install on the devices.
Veriplace wants to encourage developers to think about mobile location data as an accessible ingredient common to all mobile devices, not just high-end Android, BlackBerry, or iPhone smartphones. They also want to help developers think beyond native apps toward mobile services: services that use lowest-common-denominator communication modes such as short and multimedia messaging, interactive voice response, the web, and even WAP on feature phones that lack full HTML browsing support.

The Veriplace Developer Contest 

Specific developer opportunities they cited include Social Networking, Messaging & Alerting, Fraud Detection, Roadside Assistance, Government Services to Citizens, and more. To stimulate work in these areas, the Veriplace Developer Contest was announced, with cash awards and prizes for interested contestants.

March 26 2010

Base Map 2.0: What Does the Head of the US Census Say to Open Street Map?


Ian White, the CEO of Urban Mapping, makes his living collecting and selling geo data. For next week's Where 2.0 he has put together a panel of government mapping agencies (the UK's Ordnance Survey and the US Census Bureau) and community-built mapping projects (OpenStreetMap and Waze). Crowdsourced projects like Waze and OpenStreetMap have forced civic agencies to reconsider their licensing, and they have similarly encouraged larger companies like Google, NAVTEQ, and Tele Atlas to launch their own crowdsourcing platforms (Google Map Maker and Tele Atlas' MapShare). Ian and his panelists will discuss all of this in Base Map 2.0 on Thursday; consider the conversation below as Part 1 of the panel.

Ian gathered the panelists (listed below) for a pre-conference phone call. Here are Ian's descriptions of the panelists:

  • Peter ter Haar - the Head of Products for the Ordnance Survey which is the UK's national mapping agency.
  • Tim Trainor - the Head of Geography at the Census Bureau within the Department of Commerce in the US government. His group is responsible for TIGER, which provides a lot of the line geometry for the US as well as the master address file, with a lot of improvements there for the 2010 Census.
  • Steve Coast - the Founder of OpenStreetMap and also Cofounder of the commercial entity CloudMade whose role is to commercialize the OpenStreetMap data, although I believe he's speaking primarily with the OpenStreetMap hat on today.
  • Noam Bardin - the CEO of Waze, a company which effectively offers a crowd-sourced navigable base map with an emphasis on real-time data.

The edited transcript is after the jump:

Ian White: Okay. Thank you all for joining us. This is a preview podcast or streaming or transcribed version of a session to take place at the O'Reilly Where 2.0 Conference in San Jose on April 1st at 2:00 p.m.. The panel is Base Map 2.0 or Base Map Revisited. My name is Ian White. I am founder of Urban Mapping, moderating the panel. And I'm very excited to have gathered these four very well-regarded individuals together virtually today and then in the flesh a few weeks from now. And we're looking forward to having part two of this in the session on April 1st.

Peter, the Survey is going through quite, let's say, a relatively tumultuous period. There's been political pressure on the Survey to rethink its business model. By any estimation, there are three possible courses of direction that the Survey might take. We have just concluded as of yesterday what is called the consultation period, which is a period of time for the public to provide input. And it seems like there are three general directions the Survey could go. One, stay the course; change nothing. Two, make all Survey maps and digital data available for free reuse under some sort of Creative Commons-style license. Or a third direction could be to make a subset of the OS's digital maps available for free reuse under some sort of Creative Commons-style license. Now the Survey is a bit of an interesting entity in that it is known as a trading fund so it is able to generate revenue from public sector and private sector and has a bit of a reputation for having a stranglehold on development of commercial-based map products in the UK. Peter, so the first question for you is can you comment on the state of affairs relative to the consultation period and what might shake out from that, and how what you say on this recording might differ from what you say on April 1st, when a few more weeks have elapsed?

Peter ter Haar: Yes, because it's good to [inaudible] the question. It's good to understand that we're recording this on the 18th, which is one day after the consultation. And obviously, Survey is not the organization that is running the consultation themselves. It is the UK government that is doing that. And, therefore, I don't think I should anticipate or I should, yeah, anticipate on the outcome of the consultation. There's one thing that I would like to qualify what you said. You stated Survey as a trading fund is able to generate revenue from the base map, from the IP. I would like to qualify that as Survey has to generate revenue from the IP because it is the only source of income that Survey has. So if we would not be generating revenues from the base map, then we would not be able to maintain the base map. So I think that is an important qualification to make. So, therefore, everything that we do also as part of the outcome of the consultation will have to be constructed in such a way that the creation and maintenance of the base mapping of Great Britain is a sustainable activity in some way or form because we do underpin quite significant parts of government but also of private enterprise.

Ian White: Okay. Noam, Waze as a crowd-sourced product is effectively based on the goodwill of the community. We know from the experience of Wikipedia that you have a very small number of users responsible for a large percentage of the contributions. So the wiki approach can effectively give you a lot of hopes and fears all baked into one. Can you comment a little bit on how Waze sees these users? And I understand that there are kind of more active and passive users in terms of the edits and contributions that they're making. And how has this growth evolved over time? And specifically, I'm interested in the US versus Israel, which is your home country, and the rest of the world?

Noam Bardin: All right. So at Waze, we have a free mobile app which the average user just uses in their daily driving. You get free turn-by-turn navigation. And they get real-time traffic and a lot of community and social features. And 99 percent of our users just drive around, the contribution they would do from the client would be their house numbers to mark and problems you can map, things like that. The one percent who are really active are those who actually go online and edit the map and record new roads and help maintain it. But every user who just uses the map and 99 percent are actually contributing, so collecting their data. This is a little different than a Wikipedia model where 99 percent just consume the data and one percent actually create it. In our case, 99 percent are contributing data; one percent edit that data or manage it or help control it. And so in that sense, I think it's a little different. At the same time, having a consumer-facing application really makes it appealing to a much larger subset. And part of our challenge going forward is to engage a larger and larger percentage of them in contributing to the application. So I think when you look at our growth rate where we just passed 650,000 users. We just passed the point where Israel was the leading country. So today, almost 60 percent of our users are not from Israel. We began in Israel in the beginning of '09. And we opened up to the world in November [inaudible] in August. So I think in that sense, when we look at the community challenge here, the community is very involved. But different parts of the community do different things. And allowing each part to do something else really keeps them engaged and doing what is right for them.

Ian White: Okay. Tim, from the perspective of US Census, and just by way of a ten-second overview, US Census produces TIGER, which is essentially the base map of the United States at street-level detail, primarily used for enumeration but also with many derivative uses, for instance forming the basis of other commercial base maps in the case of possibly NAVTEQ or possibly OpenStreetMap, of course much more annotated by these parties. Tim, can you comment, as kind of a public service message in about 30 seconds, on how, with the upcoming census, the master address file and TIGER's line work will be different, or I'd love to say actually improved, and made available to any interested parties, and what rough timeline that will follow?

Tim Trainor: Well, thanks. I guess the first thing if I could just put a plug in and say for all of those in the US that get a form, we're hoping you fill it out and send it back and save us money. And it's good for your communities. Okay. That said, we actually maintain a master address file on an ongoing basis to support our surveys. And we use it to support the taking of the census for the decennial census. Along with that, in addition to needing the addresses to do mail out of questionnaires for surveys and censuses, we also need to maintain the street network so that we can do things like geo coding, so that we can get enumerators to locations where they have to go out and conduct work on the ground. And so in order for us to do that, we've employed the use of newer technologies for the 2010 Census that we've never done before. We're using GPS or we have used GPS technology for the 2010 Census. And in order to use the GPS, it really required that we realign the road network in TIGER so that it was more positionally accurate, so that we could assure that when we collected those GPS locations of housing units that they actually fell within the correct census block which then allowed for the correct allocation of that population within that collection block. So it was the need for from an operational perspective and the use of technology that drove the need to improve the basic road content and the road accuracy.

Ian White: Okay. With the use of GPS and other technologies, do you anticipate the availability of data to be more rapid than in years past or is it simply a matter of maintaining quality and, therefore, that doesn't necessarily change the new release dates?

Tim Trainor: Over the couple of years leading up to 2010, we were actually doing two releases of our TIGER data a year. And that was really to get back to the partners, those who had provided data to us. We wanted to get back data to them so that they could --

Ian White: And this is primarily other government agencies or departments?

Tim Trainor: That's right. That's right. These would be local and state governments, tribal governments and so forth. That's right. Since we're now in the census realm and we're trying to manage the volume of work we have, we're now back to a once a year delivery of the TIGER data. But I think that is largely going to be driven by the need from the use of the surveys as well as the need for the use of the data itself. We can increase that number if we had to.

Ian White: Steve Coast, OpenStreetMap, can you give some, let's call it "advice," on how to manage a community of users in a geographic domain where you're looking to generally support the initiative of the commercial product, but recognizing also, in the case of OpenStreetMap, that part of the success has been because of your hands-off approach?

Steve Coast: Yeah, I mean you just nailed it. It's the more freedom, the better. The more tools you give to the users, the more open your API, the more open the licenses, the more contributions you're going to get; the more contributions you're going to get from unexpected places. And that's true across any crowd source project pretty much. The more you try to constrain your users, the less interest they're going to have, the less data and community and so on you're going to get out of it. I think that's really the key.

Ian White: Noam, could you comment a bit? When we've discussed it previously, you've said that the opportunity to work with some national mapping providers has only emerged with the advent of kind of the gorilla not in the room. So this is Google with its kind of go it alone base map approach and commercial entities or national mapping entities looking to you as a viable alternative. So how has Google helped or hindered in terms of ways of ultimately offering a free navigation-based handheld product?

Noam Bardin: One of the advantages we have because we keep the data under license and control the terms of use of license is that we can partner with commercial entities. So we can preserve their licensing model and we can create revenue opportunities for them; they can create for us. And also, we can update their data. But to make this happen, they need to have an incentive to work with us. And overall, up until the point when Google came out and announced their initiative, we were seen as the enemy by local map providers because we were giving it away for free navigation-wise to consumers and updating it. And they saw us as competitive. When Google announced what they were doing, suddenly we became the ally. And so all over the world, we're getting approached by map providers and owners of base maps on different quality levels who basically have the challenge of how do I keep updating my map fast enough, add real-time layers when the requirement of the navigation center keeps going up in terms of how fast you're updating your map. Well, the cost structure is definitely not going down. And community is really the way to do it and crowd sourcing, too. And so this has allowed us to have many commercial partnerships [inaudible] Latin America. And if you look into our map in Latin America, most countries out there [inaudible] Columbia, Peru, Argentina, Chile, probably have Brazil and Mexico by the end of the year, this commercial-grade maps in there which we've got base maps from our partner there, Location World. And then we update these maps and can both share in the data that's being created as well as in revenue that's being derived from this data. And so this has been a great opportunity for us. And all over the world, we're partnering with the local players. And at the end of the day, maps are an extremely local activity. And we are a global player that provides a slice of the map infrastructure.

Ian White: Any idea why the countries you just mentioned I think most or all of them are Latin/South America? Is it a coincidence or do they share something?

Noam Bardin: Our first partner is Latin America. But we're negotiating similar relationships in Asia, in Europe and really around the world. If you own a base map today, you're basically threatened with how do I lower my cost structure dramatically. And this is sort of what we do come in and help out.

Ian White: Steve, with OpenStreetMap, do you see something similar in other parts of the world where you could help kind of or a national mapping agency could kind of defray or spread their costs across kind of a wiki kind of geo user base to help maintain that?

Steve Coast: Yeah. I mean I wouldn't say that Waze is the only sort of company and entity in the world that's able to partner with commercial entities, right? I mean it's not like OpenStreetMap hasn't done that or CloudMade hasn't done that. And everywhere we do, we see massive increases in the amounts of data, both commercial data but then obviously places like Haiti which is what I think you're trying to get at where straight after an earthquake everybody contributes and uses OpenStreetMap as the central repository of all map data. And it leads to just incredible quality of stuff because we just get out of the way and let people add exactly what they want. And today, you see OpenStreetMap in the disaster areas. It's not only the default map; it's usually also the only map that people can use because there simply isn't map data.

Ian White: Right. So just by way of editorial clarification for those listening or reading, in the wake of the Haiti earthquake and, I think, the Chilean earthquake, weeks back there was a groundswell of support of individuals and NGOs and others contributing base map a la OpenStreetMap as opposed to other map platforms that might be out there and could've supported these initiatives and so Steve was commenting on that. Steve, do you have anything else you could comment on relative to kind of response time, other map platforms? I know Google has Map Maker which has a similar kind of aim as OpenStreetMap. Although, clearly there are terms of service and there is a clear business model behind it, meaning commercial use is quite restrictive. Any perspective there?

Steve Coast: I think people like Google and others, they have a clear company goal of getting out of revenue fees being paid to people like [inaudible]. So their primary goal is that they want to replace those guys. So that means road maps with rich points of interest and a few other things as the first step. So helping out places like Haiti, while nice, is incidental and a bit outside the scope of exactly what they're trying to do with Map Maker. And also slightly outside of the scope because Map Maker is a lot more constrained than OpenStreetMap; it would be hard for people to use it in the same way, both because the tools are very different and also because the license just doesn't allow a lot of people to use the data. So I think there's clear advantages.

Ian White: To continue on the doom and gloom topic, Peter, could you comment on the Survey and the role that it plays in any kind of rapid response? I don't necessarily mean production of a paper map based on data that's maintained, but if there's some kind of incident, let's call it a natural disaster, and I, pardon, can't think of any in the UK in recent memory, meaning the last few months, what role might your organization, other mapping entities, or the UK government play in serving the public interest and helping to respond very quickly in a geographic sense?

Peter ter Haar: Well, one of the examples was obviously when two-and-a-half-years ago some of the rivers decided not to stay within their normal course. In the summer of 2007, in Carlisle, we had the problem that the emergency center of the local authority was housed in the basement in a building in one of the lowest areas of town. Obviously, that flooded and all of the emergency response almost came to a halt because people didn’t have the mapping anymore. It didn't really feel like Haiti, but it was quite grim. We have something called mapping for emergencies. And we have some of our staff then go in and do the analysis on behalf of the local authority and of the emergency services there to make sure that we knew which houses to evacuate; which roads actually were safe to drive on and also to look at what potentially other areas could flood if specific dams would break, et cetera. It's that kind of fairly low-level response that we do quite regularly for natural disasters, but also sometimes for let's say to support security services or the police in all kinds of incidents.

Ian White: Okay. Tim, from the perspective of US Census, or possibly other agencies within government, Katrina was obviously a pretty well-known national incident in terms of government response. Can you discuss briefly the role that your organization was able to play? I'm most interested in the time to respond, and in what was custom as opposed to a traditional months-to-weeks development cycle of turning out data, and in reducing that to much shorter intervals.

Tim Trainor: Yeah. The fact that we have the data sitting here, it's a very valuable resource. So when we got the call from the administration to try to get some information out from a demographic perspective, you know, what's the condition on the ground for the type of population, the age, the sex, where the senior citizens were, the children, those kinds of things. We were able to generate data from TIGER, maps from TIGER to show the characteristics of the population, get them posted to our website very quickly and get them out there. And then that generated more interest. And we then produced more products as a result of those questions coming in. And so it was a very quick, timely response.

Ian White: Okay. Noam, can you comment on -- I don't think this is something that Waze has done, but to what degree do you see something like Waze potentially being used as a tool with potentially hundreds of thousands of probes out in the field?

Noam Bardin: Well, we haven't really had an incident where it made sense to use us. But we've had a similar response on the commercial side, meaning commercial map customers who decided to purchase our map. And one of the compelling features for them was the fact that they could go online whenever they wanted, update things that are important to their customers, and the next day that data is compiled into the map. That's something obviously you can't get from a Tele Atlas or a NAVTEQ or anyone else. And that goes back to the real-time side. Now if you have a customer who has an entrance for their truck into their loading dock, it's critical for them. There's no way they're getting Tele Atlas to add that to their base map. But for us, [inaudible]. They went in online, added information; the next day, we had it. And so I think it's similar in that sense of how fast you actually build the map. So a crisis situation is the same thing, right? It's how fast can you build and share this data? So we haven't had a crisis situation where either a community or data could help. Of course, we would make it available if it could. But we have seen it in commercial entities.

Ian White: Okay. We're coming up on time. I wanted to keep this to 25 minutes, so I do want to conclude.

Peter ter Haar: Absolutely. Go to Mapaction.org and see what you can do to support emergency mapping. That's all I wanted to say, Map Action. Very good.

Ian White: Okay. Thank you, Peter. And actually, that Map Action's not to be confused with the patented Map Action technology that Urban Mapping uses in its printed map product. Final question: Which of the entities in the room would you be interested in partnering with and why, in one sentence?
Peter ter Haar: I would love to be partnering with Waze to understand what we can do, how they can help us keep our mapping up-to-date even more.

Tim Trainor: Well, I'd like to partner with Steve. I think that the time is here where not only have we relied on local governments and other sources for data, but with the volunteer geographic information activity going on, it's a great source for getting what seems to be very high quality data from a large labor source. And I think that's a great opportunity to contribute to a national good.

Steve Coast: I'd reflect exactly the same thing. I mean OpenStreetMap, US Census, USGS and other people should be working together to create these free and open base maps that are eventually going to just do away with every other proprietary map on the planet.

Noam Bardin: I think we have a bingo here because I'm actually in the process of writing an email to Peter about how I think we should cooperate in the UK and so we'll take that offline.

Ian White: Okay. Good. And I'll take only five percent, which is saving two percent off the regular investment banking fees. So thank you all for participating in this "podcast" or "remote cast." Again, joining us today we had Peter ter Haar, Head of Products for the Ordnance Survey, the UK's national mapping agency; Tim Trainor, Head of Geography at the US Census Bureau, responsible for TIGER and the master address file; Steve Coast, Founder of OpenStreetMap; and Noam Bardin, CEO of Waze. My name is Ian White. I am Founder of Urban Mapping. We look forward to having all of you listening, and many more, join us at the O'Reilly Where 2.0 conference in San Jose this April 1st -- this is not a joke -- at 2:00 p.m. Thank you very much. And please do contact us with any other questions or things you'd like to hear at the panel. [End of recording]

January 23 2010

CrisisCamps and the Pattern of Disaster Technology Innovation


Over the past three years I have been working to bridge gaps between the tech community & traditional emergency management organizations.  I've focused on helping technologists adapt technologies to support humanitarian missions, often in response to a disaster.  
After Hurricane Katrina, Mikel Maron and I discovered a pattern for successful innovation during and after disasters.  Understanding this pattern is crucial to "Serving Those that Serve Others".
The pattern for DisasterTech innovation:
  1. Disaster
  2. Ad-Hoc Adaptation
  3. Championship
  4. Iterative Improvement
There is an unprecedented amount of interest and attention in finding ways to help in Haiti and around the world. The CrisisCamp and CrisisCommons projects are coordinating events and helping match organizations with needs to volunteers with skills. I encourage you to participate and volunteer your time, knowledge, and resources.
Serve those that serve others.  You can make a difference now.


November 17 2009

The iPhone: Tricorder Version 1.0?

The iPhone, in addition to revolutionizing how people thought about mobile phone user interfaces, also was one of the first devices to offer a suite of sensors measuring everything from the visual environment to position to acceleration, all in a package that could fit in your shirt pocket.

On December 3rd, O'Reilly will be offering a one-day online edition of the Where 2.0 conference, focusing on the iPhone sensors, and what you can do with them. Alasdair Allan (the University of Exeter and Babilim Light Industries) and Jeffrey Powers (Occipital) will be among the speakers, and I recently spoke with each of them about how the iPhone has evolved as a sensing platform and the new and interesting things being done with the device.

Occipital is probably best known for Red Laser, the iPhone scanning application that lets you point the camera at a UPC code and get shopping information about the product. With recent iPhone OS releases, applications can now overlay data on top of a real time camera display, which has led to the new augmented reality applications. But according to Powers, the ability to process the camera data is still not fully supported, which has left Red Laser in a bit of a limbo state. "What happened with the most recent update is that the APIs for changing the way the camera screen looks were opened up pretty much completely. So you can customize it to make it look any way you want. You can also programmatically engage photo capture, which is something you couldn't do before either. You could only send the UI up and the user would have to use the normal built-in iPhone UI to capture. So you can do this programmatic data capturing, and you can process those images that come in. But as it turns out, at the same time, shortly after 3.1, the method that a lot of people were using to get the raw data while it was streaming in became a blacklisted function for the review team. So we've actually had a lot of trouble as of late getting technology updates through the App Store because the function we're using is now on a blacklist. Whereas it wasn't on a blacklist for the last year."

Powers is hopeful that the next release of the OS will bring official support for the API calls that Red Laser uses, based on the fact that the App Store screeners aren't taking down existing apps that use the banned APIs. Issues with the iPhone camera sensors pose more of a problem for him. "In terms of science, it's definitely a really bad sensor, especially if you look at the older iPhone sensor, because it has what's called a rolling shutter. A rolling shutter means that as you press capture or rather as the camera is capturing video frames or as you capture a frame, the camera then begins to take an image. And it takes a finite number of milliseconds, maybe 50 or so, before it is actually exposed to the entire frame and stored that off into a sensor. Because it's doing something that's more like a serial data transfer instead of this all at once parallel capture of the entire frame, what that causes is weird tearing and odd effects like that. For photography, as long as it's not too dramatic, it's not a huge deal. For vision processing, it's a huge deal because it breaks a lot of assumptions that we typically make about the camera. That has gotten better in the 3GS camera, but it's still not perfect. It is getting better, especially when the camera's turned on the video mode."
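
A back-of-envelope sketch of the distortion Powers describes: with a rolling shutter the last row of the frame is captured tens of milliseconds after the first, so anything moving horizontally appears sheared by roughly its speed multiplied by the readout time. The numbers below are illustrative, not measured iPhone values.

```typescript
// Horizontal shear introduced by a rolling shutter, in pixels from the top row to the bottom row.
function rollingShutterSkewPixels(objectSpeedPxPerSec: number, readoutMs: number): number {
  return objectSpeedPxPerSec * (readoutMs / 1000);
}

// An object sweeping across the frame at 400 px/s with a ~50 ms readout
// ends up skewed by about 20 pixels between the first and last scanned rows.
console.log(rollingShutterSkewPixels(400, 50)); // 20
```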

One thing that has significantly improved with the iPhone 3GS is the actual camera optics. "Most people know that the 3G and the first gen phone don't have autofocus at all. So their optics is just a fixed-focus simple plastic lens that doesn't allow you to focus up close. For anybody trying to do macro imagery, something up close, you're just not going to be able to do it on the 3G or the first gen phone. When we set out to build our application, we specifically had to work around that problem. A lot of why our application was successful was because we did focus on that problem. Then in the 3GS, the autofocus mode was enabled which is actually a motor-based autofocus system that can autofocus not only on the center of the image, but also somewhere that you pick specifically. And one more thing is that the autofocus system doesn't just change the focus, it also changes the exposure, which is something a lot of people don't notice."

Another benefit the 3GS has brought to the table for vision processing is the dramatically increased processor speed. "With the 3GS, it's actually an incredibly powerful device," says Powers. "So we think right now that there's actually a lot of power there that hasn't been exposed. So I mean, there obviously are limits. But I don't think we've seen software that really hits those limits. Honestly, the limits that we're seeing right now are just in the SDK and what you can and can't do. One of the things about the iPhone is, as I was alluding to earlier when I talked about previous problems with the Android which are now being addressed, is that you could code at the lowest level on the iPhone, whereas you could not code at the lowest level on the Android. What that means to the iPhone is that you can actually write on ARM assembly if you want.

Almost everyone who's doing any sort of image processing today on the iPhone isn't taking advantage of that. We are to a very small extent in Red Laser, but there's certainly juice that can be extracted by just spending time optimizing for the platform, which is something that the iPhone lets you do. And the other thing to add to that is there are new instructions enabled by the ARM 7 Instruction Set which is used on the 3GS, which wasn't available previously. And, again, I actually haven't heard of anyone utilizing those functions yet. So there's a lot of power there that is yet to be exposed."

Although the iPhone has been an interesting platform for Powers, he is turning his attention toward the Droid at the moment. "From our perspective, we would love to keep developing our vision software on the iPhone, but because of the fact that the APIs are so restrictive right now and we have no ETA on when that'll be fixed, we're actually looking to the Android now, specifically, the new Droid, as an interesting platform for computer vision and image processing in real-time. Again, if it's not a real-time task, the iPhone's a great platform. If you can just snap an image, process it, you can do anything on the iPhone that has that characteristic. But if you want to process in real-time, Android is really your best bet right now because of the fact that A, the APIs do let you access the video frames and B, you can now actually write on the metal of the device and write things in C and C++ with the new Android OS which, again, you couldn't do before. "

Alasdair Allan is approaching the iPhone from a different direction, using it as a way for astronomers to control their telescopes remotely while "sitting in a pub." While he's seen some primitive scientific applications of the iPhone for things such as distributed earthquake monitoring, he thinks that the real benefit of the iPhone over the next few years will be as a visualization tool using AR.

That isn't to say that he isn't impressed with the wide variety of sensors available on the iPhone. "You have cellular for the phones. All of the devices have wifi. And most of the devices, apart from the first gen iPod Touch have Bluetooth. You, of course, have audio-in and speaker. The audio-in is actually quite interesting because you can hack that around and actually use it for other purposes. You can use the audio-in as an acoustic modem into an external keyboard for the iPhone, I think that's in iPhone Hacks, the book. It's quite interesting. Then on the main sensor side, you've got the accelerometer, the magnetometer, the digital compass. It's got an ambient light sensor, a proximity sensor, a camera, and it's also got the ability to vibrate. "

According to Allan, the iPhone sensor that the fewest people know about is the proximity sensor. "The proximity sensor is an infrared diode. I think it's actually now a pair of infrared diodes in the iPhone 3G. It's the reason why when you put your iPhone to your head, the screen goes blank. It basically just uses this infrared LED near the earpiece to detect reflections from large objects, like your head. If you actually take a picture of the iPhone when it's in call mode, with a normal web cam, you'd actually be able to see right next to the earpiece a sort of glowing red dot which is the proximity sensor. Because, of course, web cam CCDs are sensitive in the infrared so it would actually show up. This was a bit of a scandal early on in the iPhone's life. The original Google Search app used an undocumented SDK call to use this so you could actually speak into the speech search, and Apple and everyone really was very annoyed about this. So they actually enabled it for everyone in the 3.0 SDK."

Unfortunately, Allan doesn't know of anyone who has been able to make practical use of the proximity sensor, partially because it has such a short range. On the other hand, the newly added magnetometer in the 3GS has opened the door to a host of AR applications. But Allan points out that like any magnetic compass, it can be very sensitive to metal and other magnetic interference in the surrounding environment. "It is very susceptible to local changing magnetic fields: monitors, CPUs, TVs, anything like that will affect it quite badly."

Also, he adds, to do any really accurate AR applications, you need to use the sensors in concert. "By default, what you're measuring, of course, is the ambient magnetic field of the Earth. And that's how you can use it as a digital compass, because there are tables that will show you how to do deviations from magnetic north to true north, depending on your latitude and longitude. Which is why to do augmented reality apps, you need both the accelerometer and the magnetometer, so you can get the pitch and roll of the device, and the GPS to get the latitude and longitude so you know the deviation from true north."
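
A minimal sketch of the correction Allan is describing: the magnetometer yields a magnetic heading, and converting it to a true heading requires the local declination, which depends on latitude and longitude. The declination lookup below is a stub with a sample value, not a real geomagnetic model, and the sketch ignores the tilt compensation that the accelerometer provides.

```typescript
// Convert a magnetic heading to a true heading using a declination looked up by position.
function declinationDegrees(_lat: number, _lon: number): number {
  // Placeholder: a real app would consult a geomagnetic model or lookup table.
  // Around San Francisco in 2010 the declination was roughly +14 degrees east.
  return 14;
}

function trueHeading(magneticHeadingDeg: number, lat: number, lon: number): number {
  const corrected = magneticHeadingDeg + declinationDegrees(lat, lon);
  return ((corrected % 360) + 360) % 360; // normalize to [0, 360)
}

// A device reading 350 degrees magnetic near San Francisco points at roughly 4 degrees true.
console.log(trueHeading(350, 37.77, -122.42)); // 4
```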

Allan thinks that although the current sensor suite has limited uses for scientific data capture, things will improve quickly. "I think the science usage is definitely going to grow. When the sensors get slightly more sophisticated than they are today, for instance gyros, or you can imagine slightly better accelerometers or light sensors or other things. You could even put LPG or methane gas sensors in there very easily. They're both sensors that are very small now. You could certainly get going doing environmental monitoring and all of that sort of stuff very easily. And it would quite easily piggyback off sort of social networking ideas as well. I do see the very high-end smartphones contributing to growth in citizen-level science and people in the street getting out to do science and help people build large datasets that can actually be used to predict long-term trends and that sort of thing."

Powers concurs. "It behaves more like a tricorder than a communicator, right, because certainly voice communicating isn't all we're doing anymore. And I think if you take voice communication as a fraction of the utilization of a phone, you're going to see that there's definitely a trend that goes down all the time. I don't think it'll ever go to zero, but it'll certainly go to a smaller fraction. At the same time, the sensors are increasing. I would like to see not necessarily barometric or environment measurement sensors, but things like solid-state gyroscopes on phones and maybe a pair of cameras and maybe even different sensors that can allow us to read credit cards and do transactions on the device. I think there's even some talk of that appearing in the next gen iPhone so you can actually do transactions just by swiping your phone into a register. So I would agree with the assessment that they're becoming more like tricorder."
