
August 23 2011

TOC Debate: Amazon vs Apple

One day in the not-so-distant past, Joe Wikert and I had a semi-heated Skype argument about the publishing topic du jour. I knew I wouldn't win, but as I continued to make my case, Joe suddenly interrupted with the prescient statement: "This would make for a great webcast!"

Maybe it was just a clever way of disarming me, and if so — it worked. But no sooner did we end our Skype chat than I was on Twitter, inquiring of my bookish-techy tweeps, "Would you be interested in such a webcast?" The answer was "yes." Joe jumped into the tweet fray and suggested the topic for what will be the first of the TOC Debates: Amazon vs. Apple.

Joe valiantly (ahem) volunteered to take up the cause of Amazon and act as Team Bezos. But who, pray tell, would be brave enough to stand up for Apple? The most eloquent arguer one could hope for — Ms. Booksquare herself — Kassia Krozser stepped up to the challenge to represent Team Jobs.

So, what began as a disagreement between colleagues has blossomed into what we think might become the most fun you can have fighting at work: the TOC Debates.

Register to attend the free Amazon vs Apple webcast.

As a bit of a preview, I offer a statement from each of the opponents (also available in audio form, above). As we say in Arizona, "Them's fighting words ..."

Joe Wikert (GM and Publisher, O'Reilly Media):

This will be like shooting fish in a barrel. When I said I'd like to see a debate about Amazon vs. Apple's ebook platforms, and said I'd take the Amazon position, I never thought I'd find someone who would represent Apple! Shortly after that, Amazon announced their Kindle Cloud Reader initiative. Wow, yet another point for my case! I have loads of respect for Kassia Krozser, but she's got her work cut out for her. I'm very much looking forward to the debate as well as addressing questions from the audience.

Kassia Krozser (proprietress of Booksquare.com):

Joe may wish he chose the Apple side! I won't deny that Amazon has many strengths, but they also have an amazing weakness: selective vision. The Kindle platform is based on an old, creaky format. Apple gets the web, and it gets the technology, hardware and software, that runs the web ... which, we all know, will expand both our ability to read all kinds of books in digital format and our definition of a "book."

Our first fight/debate takes place live via webcast on Thursday, Sept. 15 at 10am PT. Register for free here. And if you have ideas for publishing debates you'd like to see, please let me know.

TOC Debate: Amazon vs Apple — What began as a disagreement between colleagues has blossomed into the most fun you can have fighting at work: the TOC Debates. Our first debate, "Amazon vs. Apple," features Joe Wikert and Kassia Krozser.

Join us on Thursday, September 15, 2011, at 10 am PT
Register for this free webcast



August 19 2011

Go inside Google+ with Tim O'Reilly and Bradley Horowitz

Google's past data efforts were largely tied to the discovery and categorization of information, but Google+ adds a new component to that mix: people.

This shift raises a host of important questions:

  • How will the social data from Google+ be put to use?
  • Is social data critical to Google's mission of organizing the world's information, or is it more aligned with the company's advertising model?
  • What long-term value does this data bring to users and Google itself?

Tim O'Reilly and Google VP of Product Management Bradley Horowitz will explore these questions and others during a free webcast on Tuesday, August 23 at 10 am PDT / 1 pm EDT.

Register to attend this free online event.

The webcast offers an extended preview of another conversation taking place at O'Reilly's Strata Summit, Sept. 20-21, in New York City. Part of a week-long series devoted to data, Strata Summit offers two days of high-level strategies for thriving in "the harsh light of data," delivered by the business and technology pioneers currently leading the way.

Strata Summit New York 2011, being held Sept. 20-21, is for executives, entrepreneurs, and decision-makers looking to harness data. Hear from the pioneers who are succeeding with data-driven strategies, and discover the data opportunities that lie ahead.

Save 35% on registration with the code STRATA
(Offer expires 8/22)



May 11 2010

What is Gov 2.0? Come find out this afternoon

Tim O'Reilly will host a one-hour webcast this afternoon at 1pm PDT / 4pm EDT. He'll be taking an in-depth look at Gov 2.0 -- what it is, where the opportunities lie, and how you can get involved.

Topics will include government efficiency and transparency, government as a platform, and how technologists can play key roles in this important transformation. There will be time for audience questions as well.

Registration is free, but slots are limited. Sign up here.

November 17 2009

The iPhone: Tricorder Version 1.0?

The iPhone, in addition to revolutionizing how people thought about mobile phone user interfaces, was also one of the first devices to offer a suite of sensors measuring everything from the visual environment to position to acceleration, all in a package that could fit in your shirt pocket.

On December 3rd, O'Reilly will be offering a one-day online edition of the Where 2.0 conference, focusing on the iPhone sensors and what you can do with them. Alasdair Allan (University of Exeter and Babilim Light Industries) and Jeffrey Powers (Occipital) will be among the speakers, and I recently spoke with each of them about how the iPhone has evolved as a sensing platform and the new and interesting things being done with the device.

Occipital is probably best known for Red Laser, the iPhone scanning application that lets you point the camera at a UPC code and get shopping information about the product. With recent iPhone OS releases, applications can now overlay data on top of a real time camera display, which has led to the new augmented reality applications. But according to Powers, the ability to process the camera data is still not fully supported, which has left Red Laser in a bit of a limbo state. "What happened with the most recent update is that the APIs for changing the way the camera screen looks were opened up pretty much completely. So you can customize it to make it look any way you want. You can also programmatically engage photo capture, which is something you couldn't do before either. You could only send the UI up and the user would have to use the normal built-in iPhone UI to capture. So you can do this programmatic data capturing, and you can process those images that come in. But as it turns out, at the same time, shortly after 3.1, the method that a lot of people were using to get the raw data while it was streaming in became a blacklisted function for the review team. So we've actually had a lot of trouble as of late getting technology updates through the App Store because the function we're using is now on a blacklist. Whereas it wasn't on a blacklist for the last year."
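
For readers who want to poke at this themselves, here is a rough sketch of the overlay and programmatic-capture approach Powers describes, written in modern Swift (which postdates this article; RedLaser itself was Objective-C). The class, overlay, and processing names are illustrative, not Occipital's:

    import UIKit

    // Hide the stock camera controls, draw a custom overlay, and trigger capture
    // from code; hand each captured frame to your own image-processing routine.
    final class ScannerViewController: UIViewController,
        UIImagePickerControllerDelegate, UINavigationControllerDelegate {

        private let picker = UIImagePickerController()

        func presentScanner() {
            guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return }
            picker.sourceType = .camera
            picker.showsCameraControls = false          // replace Apple's capture UI...
            picker.cameraOverlayView = makeOverlay()    // ...with your own overlay view
            picker.delegate = self
            present(picker, animated: true) {
                self.picker.takePicture()               // programmatic capture
            }
        }

        func imagePickerController(_ picker: UIImagePickerController,
                                   didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
            if let frame = info[.originalImage] as? UIImage {
                process(frame)                          // your vision code goes here
            }
        }

        private func makeOverlay() -> UIView { UIView() }   // placeholder overlay
        private func process(_ image: UIImage) { }          // placeholder processing
    }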

Powers is hopeful that the next release of the OS will bring official support for the API calls that Red Laser uses, based on the fact that the App Store screeners aren't taking down existing apps that use the banned APIs. Issues with the iPhone camera sensors pose more of a problem for him. "In terms of science, it's definitely a really bad sensor, especially if you look at the older iPhone sensor, because it has what's called a rolling shutter. A rolling shutter means that as you press capture or rather as the camera is capturing video frames or as you capture a frame, the camera then begins to take an image. And it takes a finite number of milliseconds, maybe 50 or so, before it is actually exposed to the entire frame and stored that off into a sensor. Because it's doing something that's more like a serial data transfer instead of this all at once parallel capture of the entire frame, what that causes is weird tearing and odd effects like that. For photography, as long as it's not too dramatic, it's not a huge deal. For vision processing, it's a huge deal because it breaks a lot of assumptions that we typically make about the camera. That has gotten better in the 3GS camera, but it's still not perfect. It is getting better, especially when the camera's turned on the video mode."
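
To make the rolling-shutter problem concrete, here is a back-of-the-envelope Swift calculation using the roughly 50 millisecond readout Powers mentions; the object speed is an illustrative number, not a measurement from the interview:

    // Rows are read out one after another over `readoutSeconds`, so anything moving
    // horizontally during readout lands in a different column on each row.
    func rollingShutterShear(readoutSeconds: Double, horizontalSpeedPxPerSec: Double) -> Double {
        return readoutSeconds * horizontalSpeedPxPerSec   // skew between top and bottom rows
    }

    // A ~50 ms readout and an object crossing the frame at 1,000 px/s shear vertical
    // edges by about 50 pixels: negligible for a snapshot, a big deal for vision code
    // that assumes the whole frame was captured at one instant.
    let shear = rollingShutterShear(readoutSeconds: 0.050, horizontalSpeedPxPerSec: 1_000)
    print("Shear across the frame: \(shear) px")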

One thing that has significantly improved with the iPhone 3GS, Powers says, is the actual camera optics. "Most people know that the 3G and the first gen phone don't have autofocus at all. So their optics is just a fixed-focus simple plastic lens that doesn't allow you to focus up close. For anybody trying to do macro imagery, something up close, you're just not going to be able to do it on the 3G or the first gen phone. When we set out to build our application, we specifically had to work around that problem. A lot of why our application was successful was because we did focus on that problem. Then in the 3GS, the autofocus mode was enabled which is actually a motor-based autofocus system that can autofocus not only on the center of the image, but also somewhere that you pick specifically. And one more thing is that the autofocus system doesn't just change the focus, it also changes the exposure, which is something a lot of people don't notice."
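
For the curious, here is a minimal Swift sketch of tap-to-focus behavior like the 3GS feature described above, using AVFoundation calls from later SDKs rather than anything quoted from Occipital; note how setting the focus point also involves the exposure point:

    import AVFoundation
    import CoreGraphics

    // Focus (and re-expose) the camera on a user-chosen point rather than the frame
    // center. The point is in normalized (0...1) camera coordinates.
    func focusAndExpose(at point: CGPoint) {
        guard let device = AVCaptureDevice.default(for: .video) else { return }
        do {
            try device.lockForConfiguration()
            if device.isFocusPointOfInterestSupported {
                device.focusPointOfInterest = point
                device.focusMode = .autoFocus
            }
            if device.isExposurePointOfInterestSupported {
                device.exposurePointOfInterest = point   // focus and exposure move together
                device.exposureMode = .autoExpose
            }
            device.unlockForConfiguration()
        } catch {
            print("Could not lock camera for configuration: \(error)")
        }
    }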

Another benefit the 3GS has brought to the table for vision processing is the dramatically increased processor speed. "With the 3GS, it's actually an incredibly powerful device," says Powers. "So we think right now that there's actually a lot of power there that hasn't been exposed. So I mean, there obviously are limits. But I don't think we've seen software that really hits those limits. Honestly, the limits that we're seeing right now are just in the SDK and what you can and can't do. One of the things about the iPhone is, as I was alluding to earlier when I talked about previous problems with the Android which are now being addressed, is that you could code at the lowest level on the iPhone, whereas you could not code at the lowest level on the Android. What that means to the iPhone is that you can actually write in ARM assembly if you want.

"Almost everyone who's doing any sort of image processing today on the iPhone isn't taking advantage of that. We are to a very small extent in Red Laser, but there's certainly juice that can be extracted by just spending time optimizing for the platform, which is something that the iPhone lets you do. And the other thing to add to that is there are new instructions enabled by the ARMv7 instruction set which is used on the 3GS, which wasn't available previously. And, again, I actually haven't heard of anyone utilizing those functions yet. So there's a lot of power there that is yet to be exposed."
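
As a hedged illustration of what "optimizing for the platform" can look like without dropping all the way to hand-written assembly, here is a small Swift sketch that leans on Apple's Accelerate framework (vDSP) for a SIMD-accelerated luma conversion. It is a stand-in for the kind of tuned inner loop Powers is talking about, not code from Red Laser:

    import Accelerate

    // Combine separate R, G, B float planes into a single luma plane using
    // vDSP's vectorized multiply and add routines.
    func lumaPlane(r: [Float], g: [Float], b: [Float]) -> [Float] {
        precondition(r.count == g.count && g.count == b.count)
        let n = vDSP_Length(r.count)
        var wr: Float = 0.299, wg: Float = 0.587, wb: Float = 0.114

        var weightedR = [Float](repeating: 0, count: r.count)
        var weightedG = [Float](repeating: 0, count: r.count)
        var weightedB = [Float](repeating: 0, count: r.count)
        vDSP_vsmul(r, 1, &wr, &weightedR, 1, n)   // 0.299 * R
        vDSP_vsmul(g, 1, &wg, &weightedG, 1, n)   // 0.587 * G
        vDSP_vsmul(b, 1, &wb, &weightedB, 1, n)   // 0.114 * B

        var sum = [Float](repeating: 0, count: r.count)
        vDSP_vadd(weightedR, 1, weightedG, 1, &sum, 1, n)
        var luma = [Float](repeating: 0, count: r.count)
        vDSP_vadd(sum, 1, weightedB, 1, &luma, 1, n)
        return luma
    }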

Although the iPhone has been an interesting platform for Powers, he is turning his attention toward the Droid at the moment. "From our perspective, we would love to keep developing our vision software on the iPhone, but because of the fact that the APIs are so restrictive right now and we have no ETA on when that'll be fixed, we're actually looking to the Android now, specifically, the new Droid, as an interesting platform for computer vision and image processing in real-time. Again, if it's not a real-time task, the iPhone's a great platform. If you can just snap an image, process it, you can do anything on the iPhone that has that characteristic. But if you want to process in real-time, Android is really your best bet right now because of the fact that A, the APIs do let you access the video frames and B, you can now actually write on the metal of the device and write things in C and C++ with the new Android OS which, again, you couldn't do before."

Alasdair Allan is approaching the iPhone from a different direction, using it as a way for astronomers to control their telescopes remotely while "sitting in a pub." While he's seen some primitive scientific applications of the iPhone for things such as distributed earthquake monitoring, he thinks that the real benefit of the iPhone over the next few years will be as a visualization tool using AR.

That isn't to say that he isn't impressed with the wide variety of sensors available on the iPhone. "You have cellular for the phones. All of the devices have wifi. And most of the devices, apart from the first gen iPod Touch, have Bluetooth. You, of course, have audio-in and speaker. The audio-in is actually quite interesting because you can hack that around and actually use it for other purposes. You can use the audio-in as an acoustic modem into an external keyboard for the iPhone, I think that's in iPhone Hacks, the book. It's quite interesting. Then on the main sensor side, you've got the accelerometer, the magnetometer, the digital compass. It's got an ambient light sensor, a proximity sensor, a camera, and it's also got the ability to vibrate."

According to Allan, the iPhone sensor that the fewest people know about is the proximity sensor. "The proximity sensor is an infrared diode. I think it's actually now a pair of infrared diodes in the iPhone 3G. It's the reason why when you put your iPhone to your head, the screen goes blank. It basically just uses this infrared LED near the earpiece to detect reflections from large objects, like your head. If you actually take a picture of the iPhone when it's in call mode, with a normal web cam, you'd actually be able to see right next to the earpiece a sort of glowing red dot which is the proximity sensor. Because, of course, web cam CCDs are sensitive in the infrared so it would actually show up. This was a bit of a scandal early on in the iPhone's life. The original Google Search app used an undocumented SDK call to use this so you could actually speak into the speech search, and Apple and everyone really was very annoyed about this. So they actually enabled it for everyone in the 3.0 SDK."
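
The proximity-monitoring API that the 3.0 SDK opened up is tiny; a minimal Swift sketch (modern API names, not the 2009 ones) looks roughly like this:

    import UIKit

    // Enable proximity monitoring and react to state changes. The screen still
    // blanks automatically while something is near the sensor.
    func startProximityMonitoring() {
        let device = UIDevice.current
        device.isProximityMonitoringEnabled = true
        // In real code, keep the returned token so you can remove the observer later.
        _ = NotificationCenter.default.addObserver(
            forName: UIDevice.proximityStateDidChangeNotification,
            object: device,
            queue: .main
        ) { _ in
            print(device.proximityState ? "Something is close to the earpiece"
                                        : "Nothing nearby")
        }
    }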

Unfortunately, Allan doesn't know of anyone who has been able to make practical use of the proximity sensor, partially because it has such a short range. On the other hand, the newly added magnetometer in the 3GS has opened the door to a host of AR applications. But Allan points out that like any magnetic compass, it can be very sensitive to metal and other magnetic interference in the surrounding environment. "It is very susceptible to local changing magnetic fields; monitors, CPUs, TVs, anything like that will affect it quite badly."

Also, he adds, to do any really accurate AR applications, you need to use the sensors in concert. "By default, what you're measuring, of course, is the ambient magnetic field of the Earth. And that's how you can use it as a digital compass, because there are tables that will show you how to do deviations from magnetic north to true north, depending on your latitude and longitude. Which is why to do augmented reality apps, you need both the accelerometer and the magnetometer, so it can get the pitch and roll of the device, and the GPS to get the latitude and longitude so you know the deviation from true north."
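
Here is a hedged Swift sketch of that sensors-in-concert idea: Core Location supplies the heading and, once it has a location fix for the declination, the true-north correction, while Core Motion supplies the pitch and roll an AR view needs. The class name is illustrative:

    import CoreLocation
    import CoreMotion

    final class ARSensors: NSObject, CLLocationManagerDelegate {
        private let location = CLLocationManager()
        private let motion = CMMotionManager()

        func start() {
            location.delegate = self
            location.requestWhenInUseAuthorization()
            location.startUpdatingLocation()   // needed for the magnetic-to-true correction
            location.startUpdatingHeading()

            motion.startDeviceMotionUpdates(to: .main) { deviceMotion, _ in
                guard let attitude = deviceMotion?.attitude else { return }
                // Pitch and roll (radians) tell you where the camera is pointing.
                print("pitch: \(attitude.pitch), roll: \(attitude.roll)")
            }
        }

        func locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
            // headingAccuracy < 0 means the magnetometer reading is invalid, for
            // example because of the local interference Allan mentions.
            guard newHeading.headingAccuracy >= 0 else { return }
            // trueHeading is only valid once a location is known; otherwise fall back.
            let heading = newHeading.trueHeading >= 0 ? newHeading.trueHeading
                                                      : newHeading.magneticHeading
            print("heading: \(heading) degrees")
        }
    }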

Allan thinks that although the current sensor suite has limited uses for scientific data capture, things will improve quickly. "I think the science usage is definitely going to grow. When the sensors get slightly more sophisticated than they are today, for instance gyros, or you can imagine slightly better accelerometers or light sensors or sort of other things. You could even put LPG or methane gas sensors in there very easily. They're both sensors that are very small now. You could certainly get going in science doing environmental monitoring, all of that sort of stuff, very easily. And it would quite easily piggyback off sort of social networking ideas as well. I do see the very high-end smartphones contributing to growth in citizen-level science and people in the street getting out to do science and help people build large datasets that can actually be used to predict long-term trends and that sort of thing."

Powers concurs. "It behaves more like a tricorder than a communicator, right, because certainly voice communicating isn't all we're doing anymore. And I think if you take voice communication as a fraction of the utilization of a phone, you're going to see that there's definitely a trend that goes down all the time. I don't think it'll ever go to zero, but it'll certainly go to a smaller fraction. At the same time, the sensors are increasing. I would like to see not necessarily barometric or environment measurement sensors, but things like solid-state gyroscopes on phones and maybe a pair of cameras and maybe even different sensors that can allow us to read credit cards and do transactions on the device. I think there's even some talk of that appearing in the next gen iPhone so you can actually do transactions just by swiping your phone into a register. So I would agree with the assessment that they're becoming more like a tricorder."
