
January 13 2012

A study confirms what we've all sensed: Readers are embracing ereading

The recently released Consumer Attitudes Toward E-Book Reading study by the Book Industry Study Group (BISG) showed impressive growth in ereading. From October 2010 to August 2011, the ebook market share more than tripled. Also notable: readers are committing to the technology, with almost 50% of ereading consumers saying they would wait up to three months to read a new ebook from a favorite author rather than reading the same book immediately in print.

In the following interview, BISG's deputy executive director Angela Bole reviews some of the study's data and addresses growing trends in ereading.

Bole will further examine the study's results — including data from the new third volume — at the "Consumer Attitudes Toward E-Book Reading" session at the upcoming Tools of Change for Publishing conference.

Are readers embracing ereading?

Angela Bole: When the first survey in volume two of BISG's "Consumer Attitudes Toward E-Book Reading" was fielded in October 2010, the market share for ebooks was less than 5%. In the latest fielding, conducted in August 2011, the market share was almost 16%. Clearly, readers are embracing ereading. The greatest interest today seems to lie in narrative fiction and nonfiction, with interest in more interactive nonfiction and education taking longer to develop.

How are most readers consuming e-books?

Angela Bole: In the October 2010 and January 2011 survey fieldings, there were two distinct categories of ereaders — tablets like the iPad and dedicated devices like the Kindle — with a wide functionality difference between them. During the May 2011 and August 2011 fieldings, the NOOK Color and many new Android-based tablets were released, and distinctions between device categories began to blur. Even so, dedicated ereaders remain the favorite ebook reading device for book lovers, especially for reading fiction and narrative nonfiction. The Kindle, in particular, remains strong.

A graph illustrating responses to the study question, "What device do you now use most frequently to read e-books?"

What are the most popular genres for ebooks?

Angela Bole: This depends to a degree on whether you're a "tablet person" or a "dedicated ereader person." Data from the Consumer Attitudes survey shows that the Kindle and NOOK are the preferred devices of survey respondents in all fiction categories, while tablets like the iPad hold the edge in nonfiction categories. In these reports, the data have suggested that dedicated ereaders may be better optimized for narrative reading, while the richer media capabilities of tablets may be more appropriate for nonfiction, education, and scientific and professional titles.

A graph illustrating responses to the study question, "What genre(s) do you like to read, overall (in any format)?"

Do people typically buy ebooks on their computers and then transfer them to their devices?

Angela Bole: Until August 2011, our data showed that the computer (desktop or laptop) was the prevailing purchasing platform. Today, however, more and more people are purchasing directly on their dedicated ereaders — 49% of respondents to the August 2011 fielding, up from 36% in May 2011.

Does the research point to digital publishing helping or hurting the publishing industry?

Angela Bole: Consumers who migrate to digital are spending less on physical hardcover and paperback books. The research shows this quite clearly. That said, respondents to the survey actually report increasing their overall dollar spending as they make the transition to ebooks. Almost 70% of the respondents to the August 2011 fielding reported increasing their ebook expenditures, compared with 49% in the October 2010 fielding. Respondents reported increased spending on books in all formats to a greater degree than they reported decreased spending. Assuming the publishing industry can develop the right business models, this is good news.

This interview was edited and condensed.

TOC NY 2012 — O'Reilly's TOC Conference, being held Feb. 13-15, 2012, in New York City, is where the publishing and tech industries converge. Practitioners and executives from both camps will share what they've learned and join together to navigate publishing's ongoing transformation.

Register to attend TOC 2012


April 15 2010

Ebook annotations, links and notes: Must-haves or distractions?

Liza Daly's recent piece in the New York Times inspired a great back-channel discussion among O'Reilly's editors. The subject: pros and cons of ebook links, annotations, and notes. There was a lot of interesting back-and-forth, so I asked participants if we could publicly share a handful of excerpts.

Mike Loukides on the reading path:

... inasmuch as I have lots of questions when I'm reading, I don't think I'd like to have the tools to answer them right at my fingertips. It's too easy, at least for me, to move from Little Dorrit to the entry on the Marshalsea in Wikipedia to a history of debtor's prisons, and sooner or later: what was I reading?

I suppose it depends on the implementation. The "Annotated" series from the 70s was, I think, just annoying. Better to just read the book and go back later for the commentary, rather than shoving it all in the reader's face.

There's an excellent book titled "What Jane Austen Ate and Dickens Knew" that goes into all the nitty-gritty background: how much was rent, how much did bread cost; if someone has an income of 500 pounds, is that a lot or a little? It's a good thing that this information is packaged up in a separate book, not embedded into my copies of Dickens' books. But if someone could figure out the right way to build this kind of reading experience in a way that wasn't intrusive, that would be really good.

Adam Witwer on annotations as an option:

As a formerly serious student of literature (I got better!), I couldn't agree with Mike's sentiments more. The more difficult and rewarding stuff that I've read required all of my focus and attention. The only secondary aid I wanted was a dictionary, which is why the built-in simplicity of the iPad dictionary is such a beautiful thing.

Still, there are some texts for which the annotations are an indispensable part of the experience. I would have found "Ulysses" to be nearly impenetrable in places if I didn't have the annotations handy. To have those annotations somehow built into the ebook so that I could easily flick back and forth between text and annotation sounds very appealing to the grad student in me.

Tim O'Reilly on anticipating a reader's needs:

The Oxford edition of Trollope has amazing footnotes, but they really get in the way of reading the book. If you don't ignore them, you don't get the benefit of the narrative because you're constantly distracted.

But I still think back to my days editing. One of my principles was that you had to anticipate the reader's questions and objections, so that just when they were about to leave, you anticipated their need and filled it. It's what makes a great book compelling. I was so delighted when a reader wrote in about one of our "X" books to say that just as a question was starting to bubble to the top of his mind, Adrian [Nye] answered it. That's an awesome technical book. So even if the reader can go out for more info, it increases the need for thoughtfulness about what the reader really needs to know.

Russell Jones on a toggle solution:

There's a difference between linked information (where links can become obsolete) and embedded information, which is persistent. I'm sure you've all had the frustrating experience of clicking on a link only to find that the information is no longer available. In contrast, footnotes or endnotes in a book are always available. Ebook publishers can use both, as needed. If the information is critical (and small), embed it; otherwise, link to it.

The UI problem of all the ancillary material getting in the way of a clean reading experience can be solved easily, by simply making the links/extra info invisible until the user reveals them. That can be done through a gesture, a Ctrl+Click or some other unused-in-ebook-reading action. The reveal would be a toggle, so users could turn it off equally easily. That lets publishers include as much ancillary information as they wish without interfering with the reading experience.
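The reveal-toggle Russell Jones describes can be sketched in a few lines. This is an illustrative sketch only, not any real e-reader's API: the class name and methods are invented here, and a real app would bind `toggle()` to a gesture or Ctrl+Click and wire `onChange` to the elements that show or hide footnotes and links.

```typescript
// Minimal sketch of the toggle idea: ancillary material (links, notes,
// annotations) stays hidden by default, and one action reveals or hides
// it again. All names are hypothetical.

type Listener = (visible: boolean) => void;

class AnnotationToggle {
  private visible = false;
  private listeners: Listener[] = [];

  // Subscribe UI elements (e.g. footnote popups) to visibility changes.
  onChange(listener: Listener): void {
    this.listeners.push(listener);
  }

  // Flip the state; a reader app would bind this to a gesture,
  // a Ctrl+Click, or some other action unused while reading.
  toggle(): boolean {
    this.visible = !this.visible;
    this.listeners.forEach((l) => l(this.visible));
    return this.visible;
  }

  isVisible(): boolean {
    return this.visible;
  }
}
```

Because the state lives in one place and every annotation widget subscribes to it, publishers could include as much ancillary material as they like without it ever appearing until the reader asks for it.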

And because I can't resist adding my own two cents ...

Ebook discussions sometimes degenerate into binary debates. Digital vs. print. Disconnected vs. connected. Sometimes even good vs. bad (although that's a bit much). But what I found most interesting about this conversation is that everyone approached the topic from a use-case perspective. And use cases vary wildly between people, and even within people. It all depends on the particular need, goal or subject. That's precisely why I find the toggle solution proposed by Russell Jones so compelling. There's no "or" involved. You'd have public and private, disconnected and connected. Just flick a switch for your desired experience.

January 07 2010

Pew Research asks questions about the Internet in 2020

Pew Research, which seems to be interested in just about everything,
conducts a "future of the Internet" survey every few years in which
they throw outrageously open-ended and provocative questions at a
chosen collection of observers in the areas of technology and
society. Pew makes participation fun by finding questions so pointed
that they make you choke a bit. You start by wondering, "Could I
actually answer that?" and then think, "Hey, the whole concept is so
absurd that I could say anything without repercussions!" So I
participated in their 2006 survey
(http://www.pewinternet.org/Reports/2006/The-Future-of-the-Internet-II.aspx)
and did it again this week. The Pew report will
aggregate the yes/no responses from the people they asked to
participate, but I took the exercise as a chance to hammer home my own
choices of issues.

(If you'd like to take the survey, you can currently visit

http://survey.confirmit.com/wix2/p1075078513.aspx

and enter PIN 2000.)

Will Google make us stupid?

This first question is not about a technical or policy issue on the
Internet or even how people use the Internet, but a purported risk to
human intelligence and methods of inquiry. Usually, questions about
how technology affects our learning or practice really concern our
values and how we choose technologies, not the technology itself. And
that's the basis on which I address such questions. I am not saying
technology is neutral, but that it is created, adopted, and developed
over time in a dialog with people's desires.

I respect the questions posed by Nicholas Carr in his Atlantic
article--although it's hard to take such worries seriously when he
suggests that even the typewriter could impoverish writing--and would
like to allay his concerns. The question is all about people's
choices. If we value introspection as a road to insight, if we
believe that long experience with issues contributes to good judgment
on those issues, if we (in short) want knowledge that search engines
don't give us, we'll maintain our depth of thinking and Google will
only enhance it.

There is a trend, of course, toward instant analysis and knee-jerk
responses to events that degrades a lot of writing and discussion. We
can't blame search engines for that. The urge to scoop our contacts
intersects with the starvation of funds for investigative journalism
to reduce the value of the reports we receive about things that are
important for us. Google is not responsible for that either (unless
you blame it for draining advertising revenue from newspapers and
magazines, which I don't). In any case, social and business trends
like these are the immediate influences on our ability to process
information, and searching has nothing to do with them.

What search engines do is provide more information, which we can use
either to become dilettantes (Carr's worry) or to bolster our
knowledge around the edges and do fact-checking while we rely mostly
on information we've gained in more robust ways for our core analyses.
Google frees the time we used to spend pulling together the last 10%
of facts we need to complete our research. I read Carr's article when
The Atlantic first published it, but I used a web search to pull it
back up and review it before writing this response. Google is my
friend.

Will we live in the cloud or the desktop?

Our computer usage will certainly move more and more to an environment
of small devices (probably in our hands rather than on our desks)
communicating with large data sets and applications in the cloud.
This dual trend, bifurcating our computer resources between the tiny
and the truly gargantuan, has many consequences that other people
have explored in depth: privacy concerns, the risk that application
providers will gather enough data to preclude competition, the
consequent slowdown in innovation that could result, questions about
data quality, worries about services becoming unavailable (like
Twitter's fail whale, which I saw as recently as this morning), and
more.

One worry I have is that netbooks, tablets, and cell phones will
become so dominant that meaty desktop systems will rise in cost
until they are within the reach only of institutions and professionals.
That will discourage innovation by the wider populace and reduce us to
software consumers. Innovation has benefited a great deal from the
ability of ordinary computer users to bulk up their computers with a
lot of software and interact with it at high speeds using high quality
keyboards and large monitors. That kind of grassroots innovation may
go away along with the systems that provide those generous resources.

So I suggest that cloud application providers recognize the value of
grassroots innovation--following Eric von Hippel's findings--and
solicit changes in their services from their visitors. Make their code
open source--but even more than that, set up test environments where
visitors can hack on the code without having to download much
software. Then anyone with a comfortable keyboard can become part of
the development team.

We'll know that software services are on a firm foundation for future
success when each one offers a "Develop and share your plugin here"
link.

Will social relations get better?

Like the question about Google, this one is more about our choices
than our technology. I don't worry about people losing touch with
friends and family. I think we'll continue to honor the human needs
that have been hard-wired into us over the millions of years of
evolution. I do think technologies ranging from email to social
networks can help us make new friends and collaborate over long
distances.

I do worry, though, that social norms aren't keeping up with
technology. For instance, it's hard to turn down a "friend" request
on a social network, particularly from someone you know, and even
harder to "unfriend" someone. We've got to learn that these things are
OK to do. And we have to be able to partition our groups of contacts
as we do in real life (work, church, etc.). More sophisticated social
networks will probably evolve to reflect our real relationships more
closely, but people have to take the lead and refuse to let technical
options determine how they conduct their relationships.

Will the state of reading and writing be improved?

Our idea of writing changes over time. The Middle Ages left us lots of
horribly written documents. The few people who learned to read and
write often learned their Latin (or other language for writing) rather
minimally. It took a long time for academies to impose canonical
rules for rhetoric on the population. I doubt that a cover letter and
resume from Shakespeare would meet the writing standards of a human
resources department; he lived in an age before standardization and
followed his ear more than rules.

So I can't talk about "improving" reading and writing without
addressing the question of norms. I'll write a bit about formalities
and then about the more important question of whether we'll be able to
communicate with each other (and enjoy what we read).

In many cultures, writing and speech have diverged so greatly that
they're almost separate languages. And English in Jamaica is very
different from English in the US, although I imagine Jamaicans try
hard to speak and write in US style when they're communicating with
us. In other words, people do recognize norms, but usage depends on
the context.

Increasingly, nowadays, the context for writing is a very short form
utterance, with constant interaction. I worry that people will lose
the ability to state a thesis in unambiguous terms and a clear logical
progression. But because they'll be in instantaneous contact with
their audience, they can restate their ideas as needed until
ambiguities are cleared up and their reasoning is unveiled. And
they'll be learning from others along the way. Making an elegant and
persuasive initial statement won't be so important because that
statement will be only the first step of many.

Let's admit that dialog is emerging as our generation's way to develop
and share knowledge. The notion driving Ibsen's Hedda Gabler--that an
independent philosopher such as Ejlert Løvborg could write a
masterpiece that would in itself change the world--is passé. A
modern Løvborg would release his insights in a series of blogs
to which others would make thoughtful replies. If this eviscerated
Løvborg's originality and prevented him from reaching the
heights of inspiration--well, that would be Løvborg's fault for
giving in to pressure from more conventional thinkers.

If the Romantic ideal of the solitary genius is fading, what model for
information exchange do we have? Check Plato's Symposium. Thinkers
were expected to engage with each other (and to have fun while doing
so). Socrates denigrated reading, because one could not interrogate
the author. To him, dialog was more fertile and more conducive to
truth.

The ancient Jewish scholars also preferred debate to reading. They
certainly had some received texts, but the vast majority of their
teachings were generated through conversation and were not written
down at all until the scholars realized they had to in order to avoid
losing them.

So as far as formal writing goes, I do believe we'll lose the subtle
inflections and wordplay that come from a widespread knowledge of
formal rules. I don't know how many people nowadays can appreciate all
the ways Dickens sculpted language, for instance, but I think there
will be fewer in the future than there were when Dickens rolled out
his novels.

But let's not get stuck on the aesthetics of any one period. Dickens
drew on a writing style that was popular in his day. In the next
century, Toni Morrison, John Updike, and Vladimir Nabokov wrote in a
much less formal manner, but each is considered a beautiful stylist in
his or her own way. Human inventiveness is infinite and language is a
core skill in which we all take pleasure, so we'll find new ways to
play with language that are appropriate to our age.

I believe there will always remain standards for grammar and
expression that will prove valuable in certain contexts, and people
who take the trouble to learn and practice those standards. As an
editor, I encounter lots of authors with wonderful insights and
delightful turns of phrase, but with deficits in vocabulary, grammar,
and other skills and resources that would enable them to write better.
I work with these authors to bring them up to industry-recognized
standards.

Will those in GenY share as much information about themselves as they age?

I really can't offer anything but baseless speculation in answer to
this question, but my guess is that people will continue to share as
much as they do now. After all, once they've put so much about
themselves up on their sites, what good would it do to stop? In for a
penny, in for a pound.

Social norms will evolve to accept more candor. After all, Ronald
Reagan got elected President despite having gone through a divorce,
and Bill Clinton got elected despite having smoked marijuana.
Society's expectations evolve.

Will our relationship to key institutions change?

I'm sure the survey designers picked this question knowing that its
breadth makes it hard to answer, but in consequence it's something of
a joy to explore.

The widespread sharing of information and ideas will definitely change
the relative power relationships of institutions and the masses, but
they could move in two very different directions.

In one scenario offered by many commentators, the ease of
whistleblowing and of promulgating news about institutions will
combine with the ability of individuals to associate over social
networking to create movements for change that hold institutions more
accountable and make them more responsive to the public.

In the other scenario, large institutions exploit high-speed
communications and large data stores to enforce even greater
centralized control, and use surveillance to crush opposition.

I don't know which way things will go. Experts continually urge
governments and businesses to open up and accept public input, and
those institutions resist doing so despite all the benefits. So I have
to admit that in this area I tend toward pessimism.

Will online anonymity still be prevalent?

Yes, I believe people have many reasons to participate in groups and
look for information without revealing who they are. Luckily, most new
systems (such as U.S. government forums) are evolving in ways that
build in privacy and anonymity. Businesses are more eager to attach
our online behavior to our identities for marketing purposes, but
perhaps we can find a compromise where someone can maintain a
pseudonym associated with marketing information but not have it
attached to his or her person.

Unfortunately, most people don't appreciate the dangers of being
identified. But those who do can take steps to be anonymous or
pseudonymous. As for state repression, there is something of an
escalating war between individuals doing illegal things and
institutions who want to uncover those individuals. So far, anonymity
seems to be holding on, thanks to a lot of effort by those who care.

Will the Semantic Web have an impact?

As organizations and news sites put more and more information online,
they're learning the value of organizing and cross-linking
information. I think the Semantic Web is taking off in a small way on
site after site: a better breakdown of terms on one medical site, a
taxonomy on a Drupal-powered blog, etc.

But Berners-Lee had a much grander vision of the Semantic Web than
better information retrieval on individual sites. He's gunning for
content providers and Web designers the world around to pull together
and provide easy navigation from one site to another, despite wide
differences in their contributors, topics, styles, and viewpoints.

This may happen someday, just as artificial intelligence is looking
more feasible than it was ten years ago, but the chasm between the
present and the future is enormous. To make the big vision work, we'll
all have to use the same (or overlapping) ontologies, with standards
for extending and varying the ontologies. We'll need to disambiguate
things like webbed feet from the World Wide Web. I'm sure tools to
help us do this will get smarter, but they need to get a whole lot
smarter.

Even with tools and protocols in place, it will be hard to get
billions of web sites to join the project. Here the cloud may be of
help. If Google can perform the statistical analysis and create the
relevant links, I don't have to do it on my own site. But I bet
results would be much better if I had input.

Are the next takeoff technologies evident now?

Yes, I don't believe there's much doubt about the technologies that
companies will commercialize and make widespread over the next five
years. Many people have listed these technologies: more powerful
mobile devices, ever-cheaper netbooks, virtualization and cloud
computing, reputation systems for social networking and group
collaboration, sensors and other small systems reporting limited
amounts of information, do-it-yourself embedded systems, robots,
sophisticated algorithms for slurping up data and performing
statistical analysis, visualization tools to report the results of
that analysis, affective technologies, personalized and location-aware
services, excellent facial and voice recognition, electronic paper,
anomaly-based security monitoring, self-healing systems--that's a
reasonable list to get started with.

Beyond five years, everything is wide open. One thing I'd like to see
is a really good visual programming language, or something along those
lines that is more closely matched to human strengths than our current
languages. An easy high-level programming language would immensely
increase productivity, reduce errors (and security flaws), and bring
in more people to create a better Internet.

Will the internet still be dominated by the end-to-end principle?

I'll pick up here on the paragraph in my answer about takeoff
technologies. The end-to-end principle is central to the Internet. I
think everybody would like to change some things about the current
essential Internet protocols, but they don't agree on what those things
should be. So I have no expectation of a top-to-bottom redesign of the
Internet at any point in our viewfinder. Furthermore, the inertia
created by millions of systems running current protocols would be hard
to overcome. So the end-to-end principle is enshrined for the
foreseeable future.

Mobile firms and ISPs may put up barriers, but anyone in an area of
modern technology who tries to shut the spigot on outside
contributions eventually becomes last year's big splash. So unless
there's a coordinated assault by central institutions like
governments, the inertia of current systems will combine with the
momentum of innovation and public demand for new services to keep
chokepoints from being serious problems.
