
July 31 2012

New life for used ebooks

This post originally appeared on Joe Wikert’s Publishing 2020 Blog (“The Used Ebook Opportunity”). This version has been lightly edited.


I’ve got quite a few ebooks in two different accounts that I’ve read and will never read again. I’ll bet you do, too. In the print world, we’d pass those along to friends, resell them or donate them to the local library. Good luck doing any of those things with an ebook.

Once you buy an ebook, you’re pretty much stuck with it. That’s yet another reason why consumers want low ebook prices. Ebooks lack some of the basic features of a print book, so of course they should be priced lower. I realize that’s not the only reason consumers want low ebook prices, but it’s definitely a contributing factor. I’d be willing to pay more for an ebook if I knew I could pass it along to someone else when I’m finished with it.

The opportunity in the used ebook market isn’t about higher prices, though. It’s about expanding the ebook ecosystem.

The used print book market helps with discovery and affordability. The publisher and author already got their share on the initial sale of that book. Although they may feel they’re losing the next sale, I’d argue that the content is reaching an audience that probably wouldn’t have paid for the original work anyway, even if the used book market didn’t exist.

Rather than looking at the used book world as an annoyance, it’s time for publishers to think about the opportunities it could present for ebooks. I’ve written and spoken before about how used ebooks could have more functionality than the original edition. You could take this in the other direction as well: the original ebook could include richer content than the version the customer is able to resell or pass along to a friend. If the used ebook’s recipient wants that rich content added back in, they could come to the publisher and buy it.

As long as we look at the used market through the lens of print products, we’ll never realize all the options it has to offer in the econtent world. That’s why we should be willing to experiment. In fact, I’m certain one or more creative individuals will come up with new ways to think about (and distribute) used ebooks that we’ve never even considered.

Publishers Weekly recently featured an article about ReDigi, a startup that “lets you store, stream, buy and sell pre-owned digital music.” As the article points out, ebooks are next on ReDigi’s priority list. Capitol Records is suing to shut down ReDigi; I suspect the publishing industry will react the same way. Regardless of whether ReDigi is operating within copyright law, I think there’s quite a bit we can learn from their efforts. That’s why I plan to reach out to them this week to see if we can include them in an upcoming TOC event.

By the way, even if ReDigi disappears, you can bet this topic won’t. Amazon makes loads of money in the used book market and Jeff Bezos is a smart man. If there’s an opportunity in the used ebook space, you can bet he’ll be working on it to further reinforce Amazon’s dominant position.

Photo: Used Books by -Tripp-, on Flickr


Discovering science

The discovery of the Higgs boson gave us a window into the way science works. We’re over the hype and the high expectations kindled by last year’s pre-announcement. We’ve seen the moving personal interest story about Peter Higgs, and how this discovery validates predictions he made almost 50 years ago, predictions that weren’t considered “relevant” at the time. Now we have an opportunity to do something more: to take a look at how science works and see what it is made of.

Discovery

First and foremost: Science is about discovery. While the Higgs boson was the last piece in the puzzle for the Standard Model, the search for the Higgs wasn’t ultimately about verifying the Standard Model. The model has predicted a lot of things successfully; nobody would argue that it hasn’t served us well. A couple of years ago, I asked some physicists what would happen if they didn’t find the Higgs, and the answer was uniformly: “That would be the coolest thing ever! We’d have to develop a new understanding of how particle physics works.” At the time, I pointed out that not finding the Higgs might be exciting to physicists, but it would certainly be disastrous for the funding of high-energy physics projects. (“What? We spent all this money to build you a machine to find this particle, and now you say that particle doesn’t even exist?”) But science must move forward, and the desire to rebuild quantum mechanics trumps funding.

Now that we have the Higgs (or something like it), physicists are hoping for a “strange” Higgs: a particle that differs from the Higgs predicted by the Standard model in some ways, a particle that requires a new theory. Indeed, to Nobel laureate Steven Weinberg, a Higgs that is exactly the Higgs predicted by the Standard Model would be a “nightmare.” Discovering something that’s more or less exactly what was predicted isn’t fun, and it isn’t interesting. And furthermore, there are other hints that there’s a lot of work to be done: dark matter and dark energy certainly hint at a physics that doesn’t fit into our current understanding. One irony of the Higgs is that, even if it’s “strange,” it focused too much attention on big, expensive science, to the detriment of valuable, though less dramatic (and less expensive) work.

Science is never so wrong as when it thinks that almost all the questions have been answered. In the late 19th century, scientists thought that physics was just about finished: all that was left were nagging questions about why an oven doesn’t incinerate us with an infinite blast of energy and some weird behavior when you shine ultraviolet light onto electrodes. Solving the pressing problems of black body radiation and the photoelectric effect required the idea of energy quanta, which led to all of 20th century physics. (Planck’s first steps toward quantum mechanics and Einstein’s work on the photoelectric effect earned them Nobel Prizes.) Science is not about agreement on settled fact; it’s about pushing into the unknown and about the intellectual ferment and discussion that takes place when you’re exploring new territory.
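
To see why that “infinite blast of energy” was so fatal to classical physics, consider the black body problem concretely. The classical Rayleigh–Jeans law predicts that a hot cavity radiates without bound at short wavelengths (the “ultraviolet catastrophe,” the oven that incinerates us); Planck’s assumption that energy comes only in quanta of size E = hν removes the divergence. A sketch of the two formulas:

```latex
% Classical (Rayleigh-Jeans) spectral energy density diverges as \lambda -> 0:
u_{\mathrm{RJ}}(\lambda, T) = \frac{8\pi k_B T}{\lambda^{4}}

% Planck's quanta (E = h\nu) suppress the short-wavelength modes:
u_{\mathrm{Planck}}(\lambda, T) = \frac{8\pi h c}{\lambda^{5}}
    \cdot \frac{1}{e^{hc/(\lambda k_B T)} - 1}

% finite at every wavelength, and matching Rayleigh-Jeans in the
% long-wavelength limit.
```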

Approximation, not law

Second: Science is about increasingly accurate approximations to the way nature works. Newton’s laws of motion, force equals mass times acceleration and all that, served us well for hundreds of years, until Einstein developed special relativity. Now, here’s the trick: Newtonian physics is perfectly adequate for anything you or I are likely to do in our lifetimes, unless SpaceX develops some form of interstellar space travel. However, relativistic effects are observable, even on Earth: clocks run slightly slower in airliners and slightly faster on the tops of mountains. These effects aren’t measurable with your wristwatch, but they are measurable (and have been measured, with precisely the results predicted by Einstein) with atomic clocks. So, do we say Newtonian physics is “wrong”? It’s good enough, and any physicist would be shocked at a science curriculum that didn’t include Newtonian physics. But neither can we say that Newtonian physics is “right,” if “right” means anything more than “good enough.” Relativity implies a significantly different conception of how the universe works. I’d argue that it’s not just a better approximation, it’s a different (and more accurate) world view. This shift in world view as we go from Newton to Einstein is, to me, much more important than the slightly more accurate answers we get from relativity.
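
Here’s a quick back-of-the-envelope sketch of the size of the airliner effect (my arithmetic, assuming a cruise speed of about 250 m/s; the published atomic-clock experiments are far more careful than this):

```latex
% Special-relativistic time dilation factor at airliner speed:
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
       \approx 1 + \frac{1}{2}\left(\frac{v}{c}\right)^{2}
       \approx 1 + 3.5 \times 10^{-13}
       \qquad (v \approx 250~\mathrm{m/s})

% Over a ten-hour flight, the moving clock falls behind by roughly
\Delta t \approx 36000~\mathrm{s} \times 3.5 \times 10^{-13}
         \approx 13~\mathrm{ns}
```

Thirteen nanoseconds is hopeless for a wristwatch and routine for an atomic clock. (On a real flight, the gravitational effect of altitude, which runs the other way, is actually the larger of the two.)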

What do “right” and “wrong” mean in this context? Those terms are only marginally useful. And the notion of physical “law” is even less useful. “Laws” are really semantic constructs, and the appearance of the words “physical law” usually signifies that someone has missed the point. I cringe when I hear people talk about the “law of gravity,” because there’s no such thing: Newtonian gravity was replaced by Einsteinian general relativity (both a better approximation and a radically different view of how the universe works), and there are plenty of reasons to believe that general relativity isn’t the end of the story. The bottom line is that we don’t really know what gravity is or how it works; all we really know is that our limited Einsteinian understanding probably doesn’t work for really small things and might not work for really big things. There are very good reasons to believe that gravitational waves exist (and we’re building the LIGO interferometer to detect them), but right now, they’re in the same category the Higgs boson was a decade ago: in theory, they should exist, and the universe will get a whole lot more interesting if we don’t find them. So, the only “law of gravity” we understand now is an approximation, and we have no idea what it approximates. And when we find a better approximation (one that explains dark energy, perhaps, or one that shows how gravity worked at the time of the Big Bang), that approximation will come with a significantly different world view.

Whatever we nominate as physical “law” is only law until we find a better law, a better approximation with its own story. Theories are replaced by better theories, which in turn are replaced by better theories. If we actually found a completely accurate “theory of everything” in any discipline, that might be the ultimate success, but it would also be tragic; it would be the end of science if it didn’t raise any further questions.

Simplicity and aesthetics

Aesthetics is a recurring principle both in the sciences (particularly physics) and in mathematics. It’s a particular kind of minimalist aesthetics: all things being equal, science prefers the explanation that makes the fewest assumptions. Gothic or rococo architecture doesn’t fit in. This principle has long been known as Occam’s Razor, and it’s worth being precise about what it means. We often hear about the “simplest explanation,” but merely being simple doesn’t make an explanation helpful. There are plenty of simple explanations. “The Earth sits on the back of four elephants, which stand on a turtle” is simple, but it makes lots of assumptions: the turtle must stand on something (“it’s turtles all the way down”), and the elephants and turtles need to eat. If we’re to accept the elephant theory, we have to assume that there are answers to these questions; otherwise, they’re merely unexamined assumptions. Occam’s Razor is not about simplicity, but about assumptions. A theory that makes fewer assumptions may not be simpler, but it almost always gives a better picture of reality.

One problem in physics is the number of variables that just have to have the right value to make the universe work. Each of these variables is an assumption, in a sense: they are what they are; we can’t say why any more than we can explain why we live in a three-dimensional universe. Physicists would like to reduce the number to 1 or, even better, 0: the universe would be as irreducible as π and derivable from pure mathematics. I admit that I find this drive a bit perplexing. I would expect the universe to have a large number of constants that just happen to have the right values and can’t be derived either from first principles or from other constants, especially since many modern cosmological theories suggest that universes are being created constantly and only a small number “work.”

However, the driving principle here is that we won’t get anywhere in understanding the universe by shrugging and asking, “what’s the matter with complexity?” In practice, the drive toward simpler, more compelling descriptions has driven scientific progress. The geocentric Ptolemaic model required lots of tinkering, cycles upon epicycles, to fit the observational data about planetary motion. Copernicus’ heliocentric model for the solar system wasn’t actually more accurate; it took Kepler and elliptical orbits to make a heliocentric universe genuinely better.

There are many things about the universe that current theory can’t explain. The positive charge of a proton happens to equal the negative charge of an electron, but there’s no theoretical reason for them to be equal. If they weren’t equal, chemistry would be profoundly different, and life might not be possible. But the anthropic principle (physics is the way it is because we can observe it, and we can’t observe a universe in which we can’t exist) is ultimately unsatisfying; it’s only a clever way of leaving assumptions unchallenged.

Ultimately, the excitement of science has to do with challenging your assumptions about how things work. That challenge lies behind all successful scientific theories: can you call everything into question and see what lies behind the surface? What passes for “common sense” is usually nothing more than unexamined assumptions. To my mind, one of the most radical insights comes from relativity: since it doesn’t matter where you put the origin of your coordinate system, you can put the origin on the Earth if you want. In that sense, the Ptolemaic solar system isn’t “wrong.” The mathematics is more complex, but it all works. So, have we made progress? Counter-intuitive as relativity may seem, Einstein makes nowhere near as many assumptions as Ptolemy and his followers: very little is assumed besides the constancy of the speed of light and the force of gravity. The drive for such radical simplicity, as a way of forcing us to look behind our “common sense,” is at the heart of science.

Verification and five nines

In the search for the Higgs, we’ve often heard about “five nines,” or a chance of roughly 1 in 100,000 that the result is in error. Earlier results were inconclusive because the level of confidence was only “two nines,” or roughly one in 100. What’s the big difference? One in 100 seems like an acceptably small chance of error.

I asked O’Reilly author and astrophysicist Alasdair Allan (@aallan) about this, and he had an illuminating explanation. There is nothing magical about five nines, or two nines for that matter. The significance is that, if a physical phenomenon is real, if something is actually happening, then you ought to be able to collect enough data to get five nines confidence. There’s nothing “wrong” with an experiment that only gives you two nines, but if it’s actually telling you something real, you should be able to push it to five nines (or six, or seven, if you have enough time and data collecting ability). So, we know that the acceleration due to gravity on the surface of the Earth is 32.2 feet per second per second. In a high school physics lab, you can verify this to about two nines (maybe more if high schools have more sophisticated equipment than they did in my day). With more sophisticated equipment, pushing the confidence level to five nines is a trivial exercise. That’s exactly what happened with the Higgs: the initial results had a confidence level of about two nines, but in the past year, scientists were able to collect more data and get the confidence level up to five nines.
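
Here’s a rough sketch of the statistics behind that claim (my illustration, not Allan’s): for a real effect, the significance of a measurement grows roughly like the square root of the amount of data, so pushing from two nines to five nines is mostly a matter of collecting a few times more data. The per-sample signal-to-noise value below is made up for illustration.

```python
import math

def nines(z):
    """Convert a z-score into 'nines' of confidence.

    Uses the two-sided normal tail probability: p = erfc(z / sqrt(2)).
    Two nines ~ p = 0.01; five nines ~ p = 0.00001.
    """
    p = math.erfc(z / math.sqrt(2))
    return -math.log10(p)

# Hypothetical per-sample signal-to-noise ratio; with N independent
# samples, the combined z-score grows like sqrt(N).
SNR_PER_SAMPLE = 0.058

for n in (2_000, 6_000, 20_000):
    z = SNR_PER_SAMPLE * math.sqrt(n)
    print(f"N = {n:>6,}: z = {z:4.1f} sigma, about {nines(z):4.1f} nines")
```

Going from two nines to five nines here takes roughly three times the data. That’s the sense in which a real effect can always be pushed to higher confidence, while a statistical fluke washes out instead.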

Does the Higgs become “real” at that point? Well, if it is real at all, it was real all along. But what this says is that there’s an experimental result that we can have confidence in and that we can use as the foundation for future results. Notice that this result doesn’t definitively say that CERN has found a Higgs boson, just that they’ve definitively found something that could be the Higgs (but that could prove to be something different).

Scientists are typically very careful about the claims they make for their results. Last year’s claims about “faster than light” neutrinos provide a great demonstration of how the scientific process works. The scientists who announced the result didn’t claim that they’d found neutrinos that traveled faster than light; they stated that they had a very strange result indicating that neutrinos traveled faster than light and wanted help from other scientists in understanding whether they had analyzed the data correctly. And even though many scientists were excited by the possibility that relativity would need to be re-thought, a serious effort was made to understand what the result could mean. Ultimately, of course, the researchers discovered that a cable had been attached incorrectly; when that problem was fixed, the anomalous results disappeared. So, we’re safe in a boring world: Neutrinos don’t travel faster than light, and theoretical physicists’ desire to rebuild relativity will have to wait.

While this looks like an embarrassment for science, it’s a great example of what happens when things go right. The scientific community went to work on several fronts: creating alternate theories (which have now all been discarded), exploring possible errors in the calculations (none were found), doing other experiments to measure the speed of neutrinos (no faster-than-light neutrinos were found), and looking for problems with the equipment itself (which they eventually found). Successful science is as much about mistakes and learning from them as it is about successes. And it’s not just neutrinos: Richard Muller, one of the most prominent skeptics on climate change, recently stated that examination of the evidence has convinced him that he was wrong, that “global warming was real … Human activity is almost entirely the cause.” It would be a mistake to view this merely as vindication for the scientists arguing for global warming. Good science needs skeptics; they force you to analyze the evidence carefully and, as in the neutrino case, prevent you from making serious errors. But good scientists also know how to change their minds when the evidence demands it.

If we’re going to understand how to take care of our world in the coming generations, we have to understand how science works. Science is being challenged at every turn: from evolution to climatology to health (Coke’s claim that there’s no connection between soft drinks and obesity, reminiscent of the tobacco industry’s attempts to discredit the link between lung cancer and smoking), we’re seeing a fairly fundamental attack on the basic tools of human understanding. You don’t have to look far to find claims that science is a big conspiracy, funded by whomever you choose to believe funds such conspiracies, or that something doesn’t need to be taken seriously because it’s just a “theory.”

Scientists are rarely in complete agreement, nor do they try to advance some secret agenda. They’re excited by the idea of tearing down their most cherished ideas, whether that’s relativity or the Standard Model. A Nobel Prize rarely awaits someone who confirms what everyone already suspects. But the absence of complete agreement doesn’t mean that there isn’t consensus, and that consensus needs to be taken seriously. Similarly, scientists are always questioning their data: both the data that supports their own conclusions and the data that doesn’t. I was disgusted by a Fox News clip implying that science was untrustworthy because scientists were questioning their theories. Of course they’re questioning their theories. That’s what scientists are supposed to do; that’s how science makes progress. But it doesn’t mean that those theories aren’t the most accurate models we have about how the world, and the universe itself, are put together. If we’re going to understand our world, and our impact on that world, we had better base our understanding on data and use the best models we have.

Higgs boson image via Wikimedia Commons.

November 15 2011

Helping educators find the right stuff

Education innovation will require scalable, national, open, interoperable systems that support data feedback loops. At the recent State Educational Technology Directors Association (SETDA) Leadership Summit, the United States Department of Education launched the Learning Registry, a powerful step toward creating the ecosystem infrastructure that will enable such systems.

The Learning Registry addresses the problem of discoverability of education resources. There are countless repositories of fantastic educational content, from user-generated and curated sites to Open Educational Resources to private-sector publisher sites. Yet, with all this high-quality content available to teachers, it is still nearly impossible to find content to use with a particular lesson plan for a particular grade aligned to particular standards. Regrettably, it is often easier for a teacher to develop his own content than to find just the right thing on the Internet.

Schools, states, individuals, and professional communities have historically addressed this challenge by curating lists of content; rating and reviewing sites; and sharing their finds via websites, Twitter and other social media platforms. With aggregated sites to peruse, a teacher might increase his odds of finding that "just right" content, but it is still often a losing proposition. As an alternative, most educators will resort to Google, but as Secretary of Education Arne Duncan told the SETDA members, "Today's search engines do many things well, but they aren't designed to directly support teaching and learning. The Learning Registry aims to fix this problem." Aneesh Chopra, United States CTO, called the project the flagship open-government initiative for the Department of Education.

The Department of Education and the Department of Defense set out to solve the problem of discoverability, each contributing $1.3 million to the registry project. Steve Midgley, Deputy Director for the Office of Educational Technology, pointed out, "We didn't build another portal — that would not be the proper role of the federal government." Instead, the proper role as Midgley envisioned it was to create infrastructure that would enable all stakeholders to share valuable information and resources in a non-centralized, open way.

In short, the Learning Registry has created open application programming interfaces (APIs) that allow publishers and others to quickly publish metadata and paradata about their content. For instance, the Smithsonian could assert digitally that a certain piece of video is intended for ages 5-7 in natural science, aligned with specific state standards. Software developers could include algorithms in lesson-planning software systems that extract, sign, and send information, such as: "A third grade teacher used this video in a lesson plan on the bridges of Portland." Browser developers could write code to include this data in search results and to increase result relevance based on ratings and reputations from trusted sources. In fact, Midgley showed the SETDA audience a prototype browser plug-in that did just that.
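
To make the API concrete, here is a minimal sketch of what publishing a paradata assertion to a registry node might look like. The node URL is hypothetical and the envelope is simplified; the real Learning Registry spec defines the exact document format, signing, and publishing rules, so treat this as an illustration of the shape of the data rather than working integration code.

```python
import requests  # third-party HTTP library: pip install requests

# Hypothetical Learning Registry node; a real deployment supplies its own URL.
NODE_URL = "https://lr-node.example.org"

# A simplified resource-data envelope describing how a resource was used
# (paradata). Field names follow the general shape of Learning Registry
# envelopes but are abridged for illustration.
envelope = {
    "documents": [
        {
            "doc_type": "resource_data",
            "resource_data_type": "paradata",
            "resource_locator": "http://example.org/videos/bridges-of-portland",
            "payload_placement": "inline",
            "resource_data": {
                "activity": {
                    "actor": {"objectType": "educator",
                              "description": ["third grade teacher"]},
                    "verb": {"action": "used"},
                    "object": "http://example.org/videos/bridges-of-portland",
                    "content": "Used in a lesson plan on the bridges of Portland",
                }
            },
        }
    ]
}

# Publish the envelope; the node reports per-document success or failure.
response = requests.post(f"{NODE_URL}/publish", json=envelope, timeout=30)
response.raise_for_status()
print(response.json())
```

The point isn’t the exact field names; it’s that the registry is an open pipe for assertions like these, and everything interesting happens in the software that reads them.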

The virtue of this system comes from the platform thinking behind its design — an open communication system versus a portal — and from the value it provides to users from the very beginning. In the early days, improved discoverability of relevant content is a boon to both the teacher who discovers it and the content owner who publishes it. The APIs are structured in such a way that well-implemented code will collect valuable information about how the content is used as a side effect of educators, parents, and others simply doing their daily work. Over time, a body of metadata and paradata will emerge that identifies educational content, captures detailed data about how it has been used and interacted with, and records the ratings, reputations, and other information that can feed interesting new analytics, visualizations, and meaningful presentations of information to teachers, parents, researchers, administrators, and developers.

Midgley called for innovative developers and entrepreneurs to take advantage of this enabling system for data collection in the education market. As the simple uses begin to drive use cases that shed increasingly rich data, there will be new opportunities to build businesses based on analytics and the meaningful presentation of rich new data to teachers, parents, students, and others who have an interest in teaching and learning.

I am delighted and intrigued to see the Department of Education leading with infrastructure over point solutions. As Richard Culatta, Education Fellow in Senator Patty Murray's office, said to the audience, "When common frameworks are put in place, it allows smart people to do really creative things."

