
June 13 2012

For many publishers, direct sales is an untapped opportunity

This post is part of the TOC podcast series. You can also subscribe to the free TOC podcast through iTunes.


Our June TOC theme is retailing, and there's never been a more important time for publishers to build a direct sales channel for their customers. Far too many publishers still ignore this opportunity, claiming their existing retail partners already do a good job and they don't want to compete with them. That should put a smile on the faces of the biggest book retailers who are only too happy to compete with publishers by creating and distributing their own content.

OR Books isn't like these other publishers. I recently spoke with company co-founder John Oakes (@johnghoakes) about the importance of a direct-to-consumer channel and why OR Books has made it a priority.

Key points from the full video interview (below) include:

  • Start by scrapping distribution & production — It's a classic example of how a startup isn't weighed down by The Innovator's Dilemma; OR Books is an alternative publisher in many ways. [Discussed at the 1:00 mark.]
  • Requires title-specific marketing — Because OR's books cover many different genres, they have to develop unique marketing campaigns for each rather than just trying to get a bunch of copies stocked at a retailer. [Discussed at 2:07.]
  • Another DRM-free advocate — Despite the fact that all OR Books products are sold DRM-free, John points out that piracy has never been an issue for them. [Discussed at 4:07.]
  • Non-returnable, prepaid basis — Those are the terms OR Books has with print book retailers. It probably means they don't get huge placement, but it also eliminates the pain and expense of returns. [Discussed at 6:30.]
  • Less gambling, more hand-selling — John feels there's still an important role for brick-and-mortar retailers, but they need to change their purchase and selling models. [Discussed at 10:55.]

You can view the entire interview in the following video.



Four short links: 13 June 2012

  1. Warren Buffett Lessons -- nice anthology of quotes, reordered into almost a narrative on different topics. (via Rowan Simpson)
  2. Silent Circle -- Phil Zimmermann's new startup, encrypting phone calls for iPhone and Android for $20/month. "I'm not going to apologize for the cost," Zimmermann told CNET, adding that the final price has not been set. "This is not Facebook. Our customers are customers. They're not products. They're not part of the inventory." (via CNET)
  3. New HTTP Code for "Legally Restricted" -- it's status code 451.
  4. PeerJ -- changing the business model for academic publishing: "instead of charging you each time you publish, we ask for a single one-off payment, giving you the lifetime right to publish articles with us, and to make those articles freely available." Lifetime plans start at just $99. O'Reilly is a happy investor.
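
The "Legally Restricted" status in link 3 was later standardized as RFC 7725, and the code is now exposed in Python's standard library. A minimal sketch of what a server would emit (the helper function is mine, for illustration only):

```python
from http import HTTPStatus

# 451 Unavailable For Legal Reasons, standardized in RFC 7725, ships in
# Python's standard-library status enum.
LEGAL_BLOCK = HTTPStatus.UNAVAILABLE_FOR_LEGAL_REASONS

def status_line(status: HTTPStatus) -> str:
    """Render the HTTP/1.1 status line a server would send for `status`."""
    return f"HTTP/1.1 {status.value} {status.phrase}"
```

A server refusing a takedown-encumbered page would send `status_line(LEGAL_BLOCK)` followed by its headers, telling clients the content is blocked for legal rather than technical reasons.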

June 12 2012

Four short links: 12 June 2012

  1. Amazon's Insanely Crap Royalties (Andrew Hyde) -- Amazon offers a high royalty rate, but that's before a grim hidden "delivery fee." Check out Andrew's graph of the different pay rates to the author from each medium.
  2. SparkFun Education -- learn electronics from the good folks at SparkFun.
  3. TaskRabbit -- connects you with friendly, reliable people right in your neighborhood who can help you get the items on your To-Do list done. Lots of people and projects sniffing around this space of outsourced small tasks, distributed to people via a web site.
  4. Henry Ford on Bootstrapping (Amy Hoy) -- Amy has unearthed a fascinating rant by Henry Ford against speculative investment and finance: "I determined absolutely that never would I join a company in which finance came before the work or in which bankers or financiers had a part. And further that, if there were no way to get started in the kind of business that I thought could be managed in the interest of the public, then I simply would not get started at all. For my own short experience, together with what I saw going on around me, was quite enough proof that business as a mere money-making game was not worth giving much thought to and was distinctly no place for a man who wanted to accomplish anything. Also it did not seem to me to be the way to make money. I have yet to have it demonstrated that it is the way. For the only foundation of real business is service."

June 08 2012

Publishing News: Wattpad raises $17.3 million in series B funding

Here are a few stories that caught my eye this week in the publishing space.

Wattpad raises $17.3 million for its storytelling community

BookExpo America (BEA) took place this week in New York City. One of the big announcements made at the show was Wattpad's newly raised $17.3 million in financing in a series B funding round led by Khosla Ventures. Wattpad is a social ereading and storytelling platform that connects writers with readers, and according to a story at GigaOm, the company vision is to establish the platform as the YouTube of writing. Andrew Chung, a partner at Khosla Ventures and a new board member at Wattpad, told GigaOm in an interview:

"You're able to upload a story chapter by chapter, folks are able to comment on that chapter, and they can provide encouragement to the writer and actually signal where they'd like the story to go, which creates a type of engagement that's impossible in an offline context. There’s a very strong parallel to the way that YouTube was able to do that for amateur or user-generated video content."

Liz Gannes at All Things Digital took a look at Wattpad's explosive growth, reporting that the platform now hosts five million stories and has about 500,000 added each month. Gannes also highlights the popularity of the site with readers, noting that "a book by teen author Jordan Lynde (a.k.a. XxSkater2Girl16xX on Wattpad) about a relationship between a teacher and a student, has been read nearly 20 million times."

The future of publishing has a busy schedule.
Stay up to date with Tools of Change for Publishing events, publications, research and resources. Visit us at oreilly.com/toc.


Publishers are reducing themselves to book packagers

The BEA show this week also inspired some sharp analysis. Brett Sandusky examined the notions swirling around digital publishing. He argues that publishers have been fooling themselves with the idea that "digital is free and easy," that you can take your existing content, simply change the format and rake in the money. He writes:

"While our processes have arguably improved and modernized, the fact remains: viable digital business models require more than afterthought. Simply converting content and making it available for sale is a recipe for disaster. This prevailing free and easy digital model is actually harmful to our businesses."

Sandusky says the real money in digital is in distribution. He called for a curated distribution experience and emphasized the importance of owning the customer experience:

"Right now, we take so much time to polish our content and our products, and then we just throw them away. All this content curation we're doing (or at the very least talking about) makes no sense at all if we simply hand over the UX ownership to retailers and their locked devices. In fact, not owning the whole customer experience with regards to digital has basically reduced us to little more than book packagers for our retail partners. And, we're not even getting paid for it."

The real take-away from BEA, he says, is that it's time to start focusing on the customer, to "[pay] attention to every touch point, every interaction, every experience and make sure we own it." His post is a must-read this week.

A look at the state we are in

Jeremy Greenfield over at the Wall Street Journal's MarketWatch put together a sort of State of the Publishing Industry post this week, looking at how ebooks are effecting change. He offers a nice roundup of the DOJ lawsuit, the B&N venture with Microsoft, trends in venture capital and important startup entrants to the publishing space, and a look at how children are responding to ebooks (PDF). Greenfield also talks about Pottermore and how J.K. Rowling's move to set up her own store and sell the Harry Potter ebooks directly to consumers — without DRM — is affecting the industry. He highlights two important points:

  1. "In the first month, she sold $5 million worth of e-books through her own store, Pottermore. ... Pottermore's success has renewed speculation that it's possible for publishers to develop direct-sales channels."
  2. "When Pottermore opened, it sold its e-books without digital rights management (DRM) software that is meant to prevent piracy. This move ran counter to what most book publishers currently do. ... When Pottermore launched, piracy initially spiked, said [Pottermore Chief Executive Charlie Redmayne]. But a backlash from anti-DRM advocates as well as appreciative fans resulted in an overall 25% drop in piracy of Harry Potter e-books."

You can read Greenfield's entire roundup here.



June 06 2012

Four short links: 6 June 2012

  1. Why Latency Lags Bandwidth (PDF) -- across disk, memory, and networking we see bandwidth growing faster than latency comes down. This paper covers why and what we can do about it. (via Ryan Dahl)
  2. Michael Lewis's Princeton Commencement Speech -- a subtle variation on "work on stuff that matters" that I simply love. Commencement speeches fly around this time of the year, but this one is actually worth reading.
  3. The Amazon Effect (The Nation) -- Readers of e-books are especially drawn to escapist and overtly commercial genres (romance, mysteries and thrillers, science fiction), and in these categories e-book sales have bulked up to as large as 60 percent. [...] Amazon swiftly struck an alliance with Houghton Mifflin Harcourt to handle placing its books in physical stores. In a transparent subterfuge aimed at protecting its tax-avoidance strategies, Amazon intends to publish many of its books under a subsidiary imprint of Houghton’s called New Harvest, thus keeping alive the increasingly threadbare fiction that it has no physical presence in states where it does business online. I did not know these things. (via Jim Stogdill)
  4. Learn by Doing (Slate) -- Dale Dougherty's excellent call to arms to turn away from zombie-producing standardised test classes to learning by making real things. The empty campus on test day horrified me.

June 01 2012

Publishing News: HTML5 may be winning the war against apps

Here are a few stories that caught my attention in the publishing space this week:

The shortest link between content and revenue may be HTML5

A couple of weeks ago, MIT Technology Review editor in chief and publisher Jason Pontin wrote a piece about killing the magazine's app and optimizing its website for all devices with HTML5. That same week, Lonely Planet's Jani Patokallio predicted that HTML5 would nudge out the various ebook formats. This week, Wired publisher Howard Mittman shot back in an interview with Jeff John Roberts at PaidContent, insisting that apps, not HTML5, are the future.

Roberts reports that "[Mittman] believes that HTML5 will just be part of a 'larger app experience' in which an app is a storefront or gateway for readers to have deeper interactions with publishing brands." I'm not sure, however, that readers need yet another gateway (read: obstacle) to their content, and recent movements in the publishing industry suggest HTML5 may be the more likely way forward.

This week, Inkling founder and CEO Matt MacInnis announced the launch of Inkling for Web, an HTML5-based web client that brings Inkling's iPad app features to any device with a browser. The app and HTML5 technology in this case are intertwined — all content previously owned in the app can now also be accessed via the web, and activity will sync between the app and the web, so notes made on the web will appear in the iPad app and vice versa. MacInnis says in the announcement that the launch is a big part of the company's overall vision to provide service to anyone on any device they choose, one of the major benefits of choosing HTML5 technology.

Also this week, OverDrive announced plans to launch OverDrive Read, an open standard HTML5/EPUB browser-based ebook platform that will allow users to read ebooks online or offline, without having to install software or download an app. Dianna Dilworth at GalleyCat reports on additional benefits for publishers: "Using the platform, publishers can create a URL for each title. This link can include book previews and review copies, as well as browsing capabilities and sample chapters."

In the end, it will all come down to what it always comes down to: money. Roger McNamee's latest piece, "HTML 5: The Next Big Thing for Content," takes a very thorough look at HTML5 in general and specifically in relation to content publishing (this week's must-read). As to money, this excerpt stood out:

"The beauty of these new [HTML5] 'app' models is that each can [be] monetized, in most cases at rates better than the current web standard. Imagine you are reading David Pogue's technology product review column in the New York Times. Today, the advertising on that page is pretty random. In HTML 5, it will be possible for ads to search the page they are on for relevant content. This would allow the Times to auction the ad space to companies that sell consumer electronics, whose ads could then look at the page, identify the products and then offer them in the ad."

As it becomes more and more likely that ads will be incorporated as a revenue stream in ebooks, publishers will embrace whatever technology draws the shortest line and the most avenues between content and revenue, which at this point is looking more and more like HTML5.


MIT students present news reporting solutions

MIT Media Lab students were busy this week presenting final projects for their "News in the Age of Participatory Media" class. Andrew Phelps at Nieman Journalism Lab highlighted a few of the interesting projects, which were required to address a new tool, technique, or technology for reporting the news. One student proposed modernizing the hyperlink by attaching semantic meaning to it; another suggested a Wiki-like idea for correlations to put impossibly big numbers — the $15 trillion U.S. national debt, for instance — into context for readers.

The growing importance of data journalism makes another student's suite of tools called DBTruck particularly interesting. As Phelps explains, users can "[e]nter the URL of a CSV file, JSON data, or an HTML table and DBTruck will clean up the data and import it to a local database." The tools also let you compare arbitrary data to provoke deeper insights — in testing, the student discovered a correlation between low birth weights and New York state communities with high teen pregnancy rates, a connection that might not have been otherwise discovered.
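
DBTruck's own code isn't shown in the article, but the core move Phelps describes (take delimited data, clean it up, land it in a local database) can be sketched in a few lines. The function names below are mine, not DBTruck's:

```python
import csv
import io
import sqlite3
import urllib.request

def load_csv(stream, table, conn):
    """Load CSV rows from a text stream into a SQLite table; return row count."""
    reader = csv.reader(stream)
    header = next(reader)
    # Column names come straight from the CSV header (quoted; a real tool
    # would sanitize them and infer types).
    cols = ", ".join(f'"{name.strip()}"' for name in header)
    marks = ", ".join("?" for _ in header)
    conn.execute(f'CREATE TABLE IF NOT EXISTS "{table}" ({cols})')
    rows = [r for r in reader if len(r) == len(header)]  # drop ragged rows
    conn.executemany(f'INSERT INTO "{table}" VALUES ({marks})', rows)
    conn.commit()
    return len(rows)

def import_csv_url(url, table, conn):
    """DBTruck-style entry point: fetch a CSV by URL and load it."""
    with urllib.request.urlopen(url) as resp:
        return load_csv(io.TextIOWrapper(resp, encoding="utf-8"), table, conn)
```

With two such tables loaded, cross-dataset comparisons like the birth-weight/teen-pregnancy correlation reduce to SQL joins on a shared geography column.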

Penguin and Macmillan deny participation in an illegal conspiracy

Publishers Penguin and Macmillan responded this week to the Department of Justice's (DOJ) antitrust lawsuit filed earlier this year against the two publishers and Apple (Apple responded to the lawsuit last week).

The New York Times reports that in Penguin's 74-page response (PDF), it "called Amazon 'predatory' and a 'monopolist' that treats books as 'widgets.' It asserted that Amazon, not Penguin, was the company engaging in anticompetitive behavior, to the detriment of the industry."

Laura Hazard Owen called Macmillan's 26-page response (PDF) "shorter and more fiery" than Penguin's. She reports:

"'Macmillan did not participate in any illegal conspiracy,' Macmillan's filing says, and 'the lack of direct evidence of conspiracy cited in the Government's Complaint is telling…[it is] necessarily based entirely on the little circumstantial evidence it was able to locate during its extensive investigation, on which it piles innuendo on top of innuendo, stretches facts and implies actions that did not occur and which Macmillan denies unequivocally.'"


May 29 2012

Amazon, ebooks and advertising

This post originally appeared on Joe Wikert's Publishing 2020 Blog ("Why Advertising Could Become Amazon's Knockout Punch"). This version has been lightly edited.

It all started harmlessly enough with Amazon's Kindle with Special Offers. That's the cheaper Kindle that displays ads when the device is in sleep mode or at the bottom of the screen when paging through the owner's catalog of books. The advertising is unobtrusive and, because it lowered the price of the device, has made that Kindle extremely popular.

Now there are rumors that Amazon is selling ad space on the Kindle Fire's welcome screen. That sounds pretty reasonable, too, as it's a simple way for Amazon to drive a bit of additional income that's pure profit for them.

Given that Amazon's goal is to offer customers the lowest prices on everything, what's the next logical step? How about even lower prices on ebooks where Amazon starts making money on in-book ads? Think Google AdWords, built right into the book. Of course, Amazon won't want to use Google's platform. They'll use their own so they keep 100% of the revenue.

The changes the DOJ is requiring to the agency model mean a retailer can't sell ebooks at a loss, but it can still sell them at break-even. In other words, the 30% the retailer would keep on an agency ebook sale can be passed along to the customer as a 30% discount on the list price, but that's as deep a discount as that retailer can offer.

The rules are different with the wholesale model. Amazon already loses money on sales of many wholesale-model ebooks. Let's talk about a hypothetical wholesale model title with a digital list price of $25. Amazon is required to pay the publisher roughly half that price, or about $12.50 for every copy sold, but that ebook might be one of the many that are listed at $9.99 for the Kindle. So every time Amazon sells a copy, they lose $2.51 ($12.50 minus $9.99). Amazon has deep enough pockets to continue doing this, though, so they're quite comfortable losing money and building market share.

So, what's preventing Amazon from taking an even bigger loss and selling that ebook for $4.99 or $0.99 instead? In the wholesale model world, the answer to that question is: "nothing is preventing them from doing that." And if selling ebooks at a loss for $9.99 makes sense, especially when it comes to building market share, why doesn't it also make sense to sell them at $4.99, $0.99 or even free for some period of time? It probably depends on how much pain Amazon wants to inflict on other retailers and how much attention they're willing to call to themselves for predatory pricing.
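
The agency and wholesale arithmetic above is easy to check. In this sketch, the $25 list price and the roughly 50% wholesale share are the post's hypothetical numbers, not actual contract terms:

```python
LIST_PRICE = 25.00       # hypothetical digital list price from the post
WHOLESALE_SHARE = 0.50   # publisher receives ~half of list under wholesale

def wholesale_margin(consumer_price):
    """Retailer profit per copy under the wholesale model (negative = loss);
    the publisher's $12.50 cut is owed no matter what the retailer charges."""
    return round(consumer_price - LIST_PRICE * WHOLESALE_SHARE, 2)

def agency_floor(list_price):
    """Lowest permissible agency price after the DOJ changes: the retailer
    may pass its entire 30% commission to the customer, but no more."""
    return round(list_price * 0.70, 2)
```

At $9.99 the retailer eats $2.51 per copy, at $4.99 it eats $7.51, and at free it eats the full $12.50; under the wholesale model, only the retailer's appetite for losses sets the floor.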

Make no mistake about the fact that Amazon would love to see ebook pricing approach zero. That's right. Zero. That might seem outlandish, but isn't that exactly what they're doing with their Kindle Owner's Lending Library program? Now you can read ebooks for free as part of your Prime membership. The cost of Prime didn't go up, so they've essentially made the consumer price of those ebooks zero.

Why wouldn't they take the same approach with in-book advertising?

At some point in the not-too-distant future, I believe we'll see ebooks on Amazon at fire-sale prices. I'm not just talking about self-published titles or books nobody wants. I'll bet this happens with some bestsellers and midlist titles. Amazon will make a big deal out of it and note how these cheaper prices are only available through Amazon's in-book advertising program. Maybe they'll still offer the ad-free editions at the higher prices, but you can bet they'll make the ad-subsidized editions irresistible.

Remember that they can only do this for books in the wholesale model. But quite a few publishers use the wholesale model, so the list opportunities are enormous. And as Amazon builds momentum with this, they'll also build a very strong advertising platform. One that could conceivably compete with Google AdWords outside of ebooks, too.

Publishers and authors won't suffer as long as Amazon still has to pay the full wholesale discount price. Other ebook retailers will, though. Imagine B&N trying to compete if a large portion of Amazon's ebook list drops from $9.99 to $4.99 or less. Even with Microsoft's cash injection, B&N simply doesn't have deep enough pockets to compete on losses like this, at least not for very long.

At the same time, Amazon will likely tell publishers the only way they can compete is by significantly lowering their ebook list prices. They'll have the data to show how sales went up dramatically when consumer prices dropped to $4.99 or less. I wouldn't be surprised if Amazon would give preferential treatment to publishers who agree to lower their list prices (e.g., more promotions, better visibility, etc.).

By the time all that happens, Amazon will probably have more than 90% of the ebook market and a nice chunk of their ebook list that no longer has to be sold at a loss. And oh, let's not forget about the wonderful in-book advertising platform they'll have built by then. That's an advertising revenue stream that Amazon would not have to share with publishers or authors. That might be the most important point of all.

What do you think? Why wouldn't Amazon follow this strategy, especially since it helps eliminate competitors, leads to market dominance and fixes the loss-leader problem they currently have with many ebook sales?

Photo: Your Ad Here by KarenLizzie, on Flickr


May 25 2012

Publishing News: Kindle Fire and "your ad here"

Here's what caught my attention this week in the publishing space:

Kindle Fire home screen may be for sale

Rumors flew this week that Amazon plans to launch an ad campaign in which it will sell ads on its Kindle Fire home screen. Jason Del Rey at AdAge reports:

"Amazon is pitching ads on the device's welcome screen, according to an executive at an agency that Amazon has pitched. The company has been telling ad agency execs that they must spend about $600,000 for any package that includes such an ad.

"The ad campaigns would run for two months and also include inventory from Amazon's 'Special Offers' product. For $1 million, advertisers would get more ad inventory and be included in Amazon's public-relations push, according to this executive and an exec at another ad agency.'"

Del Rey says that "[b]oth agency executives have so far declined to participate, citing several concerns. For one, Amazon isn't guaranteeing the number of devices that the welcome-screen ads will reach, telling agencies that it hasn't decided whether the ads will start popping up on devices that have already been purchased or just on new devices."

O'Reilly GM and publisher Joe Wikert assessed the situation on his Publishing 2020 blog. He says this is just the beginning and that other ebook retailers are going to suffer:

"Given that Amazon's goal is to offer customers the lowest prices on everything, what's the next logical step? How about even lower prices on ebooks where Amazon starts making money on in-book ads? Think Google AdWords, built right into the book ... At some point in the not too distant future I believe we'll see ebooks on Amazon at fire sale prices. I'm not just talking about self-published titles or books nobody wants. I'll bet this happens with some bestsellers and midlist titles too. Amazon will make a big deal out of it and note how these cheaper prices are only available thru Amazon's in-book advertising program. ... Imagine B&N trying to compete if a large portion of Amazon's ebook list drops from $9.99 to $4.99 or less. Even with Microsoft's cash injection, B&N simply doesn't have deep enough pockets to compete on losses like this, at least not for very long."

Wikert concludes by asking: "Why wouldn't Amazon follow this strategy, especially since it helps eliminate competitors, leads to market dominance and fixes the loss leader problem they currently have with many ebook sales?"


Apple calls foul on the DOJ

Apple this week filed a reply to the Department of Justice's antitrust lawsuit that was filed in April against Apple and five major publishers. PCWorld reports:

"Apple's reply to the court is in line with a statement issued by Apple in April after the DOJ filed its case, in which it said that 'the launch of the iBookstore in 2010 fostered innovation and competition, breaking Amazon's monopolistic grip on the publishing industry.' The company added: 'Just as we've allowed developers to set prices on the App Store, publishers set prices on the iBookstore.'"

The filing, entitled "APPLE INC.'S ANSWER," opens:

The Government's Complaint against Apple is fundamentally flawed as a matter of fact and law. Apple has not 'conspired' with anyone, was not aware of any alleged 'conspiracy' by others, and never 'fixed prices.' ... The Government sides with monopoly, rather than competition, in bringing this case. The Government starts from the false premise that an eBooks 'market' was characterized by 'robust price competition' prior to Apple's entry. This ignores a simple and incontrovertible fact: Before 2010, there was no real competition, there was only Amazon.

Reuters reports that in the filing, "Apple also denied that the government 'accurately characterized' the comment attributed to [Steve] Jobs." The DOJ's complaint (PDF) states:

"77. Apple understood that the final Apple Agency Agreements ensured that the Publisher Defendants would raise their retail e-book prices to the ostensible limits set by the Apple price tiers not only in Apple's forthcoming iBookstore, but on Amazon.com and all other consumer sites as well. When asked by a Wall Street Journal reporter at the January 27, 2010 iPad unveiling event, 'Why should she buy a book for ... $14.99 from your device when she could buy one for $9.99 from Amazon on the Kindle or from Barnes & Noble on the Nook?' Apple CEO Steve Jobs responded, 'that won't be the case .... the prices will be the same.'"

Apple's filing responds:

Apple denies the allegations of paragraph 77. The Government mischaracterizes on its face the alleged statement of Steve Jobs to the press on January 27, 2010, which simply conveyed that a publisher would not have a particular eBook title priced at $9.99 through one distributor and $14.99 through another. Apple's MFN provision would allow it to require the publisher to lower the price to $9.99 on the iBookstore. Apple had no contractual rights to require a publisher to require that it, or any distributor of its products, charge more for eBooks than it chose in a competitive market. [Reference link added.]

You can read Apple's reply in its entirety at Scribd.

It's time to hack digital covers

Craig Mod (@craigmod) mused on book covers recently in a piece on his website called "Hack the Cover," which is also available as a Kindle Single. He says the way we search for and discover books has changed:

"The covers ... on Amazon.com are tiny on the search results page. Minuscule on new books page. And they're all but lost in the datum slush of the individual item pages. Great covers like Mendelsund's design for The Information disappear entirely.

"Why? Because — What do we now hunt when buying books? Data.

"The cover image may help quickly ground us, but our eyes are drawn by habit to number and quality of reviews. We’re looking for metrics other than images — real metrics — not artificial marketing signifiers. Blurbs from humans. Perhaps even humans we know! And within the jumble of the Amazon.com interface, the cover feels all but an afterthought."

Mod argues that since readers can approach a book from any number of entry points, the entire book should be viewed and treated like a "cover":

"The covers for our digital editions need not yell. Need not sell. Heck, they may very well never been seen. The reality is, entire books need to be treated as covers. Entry points into digital editions aren't strictly defined and they're only getting fuzzier. Internet readers don't casually stumble upon books set atop tables. They're exposed through digital chance: a friend tweeting about a particular passage — and linking, directly, into that chapter ... To treat an entire book as a cover means to fold the typographic and design love usually reserved for covers into everything. Type choices. Illustration styles. Margins and page balance."

Mod's piece is a must-read this week.


May 24 2012

Knight Foundation grants $2 million for data journalism research

Every day, the public hears more about technology and media entrepreneurs, from their starts in garages and dorm rooms all the way up to when they go public, get acquired or go spectacularly bust. The way the world mourned the passing of Steve Jobs last year, and the way young people now look to Mark Zuckerberg as a model for what's possible, offer some insight into that dynamic.

For those who want to follow in their footsteps, the most interesting elements of those stories will be the muddy details: who came up with the idea, who wrote the first lines of code, who funded them, how they were mentored, and how the startup then executed on its idea.

Today, foundations and institutions alike are getting involved in the startup ecosystem, but with a different hook than the venture capitalists on Sand Hill Road in California or Y Combinator: They're looking for smart, ambitious social entrepreneurs who want to start civic startups and increase the social capital of the world. From the Code for America Civic Accelerator to the Omidyar Foundation to Google.org to the Knight Foundation's News Challenge, there's more access to seed capital than ever before.

There are many reasons to watch what the Knight Foundation is doing, in particular, as it shifts how it funds digital journalism projects. The foundation's grants are going toward supporting many elements of the broader open government movement, from civic media to government transparency projects to data journalism platforms.

Many of these projects — or elements and code from them — have a chance at becoming part of the plumbing of digital democracy in the 21st century, although we're still on the first steps of the long road of that development.

This model for catalyzing civic innovation in the public interest is, in the broader sweep of history, still relatively new. (Then again, so is the medium you're reading this post on.) One barrier the Internet has helped lower is the cost of discovering and selecting good ideas to fund while letting bad ideas fall by the wayside. Another is changing how ideas are capitalized, whether through microfunding approaches or by distributing opportunities to participate in bringing products or services to market through crowdfunding platforms like Kickstarter.

When the Pebble smartwatch received $10 million through Kickstarter this year, it offered a notable data point into how this model could work. We'll see how others follow.

These models could contribute to the development of small pieces of civic architecture around the world, loosely joining networks in civil society with mobile technology, lightweight programming languages and open data.

After years of watching how the winners of the Knight News Challenges have — or have not — contributed to this potential future, its architects are looking at big questions: How should resources be allocated in newsrooms? What should be measured? Are governments more transparent and accountable due to the use of public data by journalists? What data is available? What isn't? What's useful and relevant to the lives of citizens? How can data visualization, news applications and interactive maps inform and engage readers?

In the context of these questions, the fact that the next Knight News Challenge will focus on data will create important new opportunities to augment the practice of journalism and accelerate the pace of open government. John Bracken (@jsb), the Knight Foundation's program director for journalism and media innovation, offered an explanation for this focus on the foundation's blog:

"Knight News Challenge: Data is a call for making sense of this onslaught of information. 'As data sits teetering between opportunity and crisis, we need people who can shift the scales and transform data into real assets,' wrote Roger Ehrenberg earlier this year.

"Or, as danah boyd has put it, 'Data is cheap, but making sense of it is not.'

"The CIA, the NBA's Houston Rockets, startups like BrightTag and Personal ('every detail of your life is data') — they're all trying to make sense out of data. We hope that this News Challenge will uncover similar innovators discovering ways for applying data towards informing citizens and communities."

Regardless of what happens with this News Challenge, some of those big data questions stand a much better chance of being answered because of the Knight Foundation's $2 million grant to Columbia University to research and distribute best practices for digital reporting, data visualizations and measuring impact.

Earlier this spring, I spoke with Emily Bell, the director of the Tow Center for Digital Journalism, about how this data journalism research at Columbia will close the data science "skills gap" in newsrooms. Bell is now entrusted with creating the architecture for learning that will teach the next generation of data journalists at Columbia University.

In search of the reasoning behind the grant, I talked to Michael Maness (@MichaelManess), vice president of journalism and media innovations at the Knight Foundation. Our interview, lightly edited for content and clarity, follows.

The last time I checked, you're in charge of funding ideas that will make the world better through journalism and technology. Is that about right?

Michael Maness: That's the hope. What we're trying to do is make sure that we're accelerating innovation in the journalism and media space that continues to help inform and engage communities. We think that's vital for democracy. What I do is work on those issues and fund ideas around that to not only make it easier for journalists to do their work, but citizens to engage in that same practice.

The Knight News Challenge has changed a bit over the last couple of years. How has the new process been going?

Michael Maness: I've been in the job a little more than a year. I came in at the tail end of 2011 and the News Challenge of 2011. We had some great winners, but we noticed that the amount of time from when you applied to the News Challenge to when you were funded could be up to 10 months by the time everything was done, and certainly eight months in terms of the process. So we reduced that to about 10 weeks. It's intense for the judges to do that, but we wanted to move more quickly, recognizing the speed of disruption and the energy of innovation and how fast it's moving.

We've also switched to a thematic approach. We're going to do three [themes] this year. The point is to fund, as fast as possible, those ideas that we think are interesting and will have a big impact.

This last round was around networks. The first reason we focused on networks is the apparent rise of network power. The second is that we get people, for example, who say, "This is the new Twitter for X" or "This is the new Facebook for journalists." Our point is that you should actually be using and leveraging existing networks for that.

We found, when we looked back at the last five years of the News Challenge, that people who came in with networks, or built networks around what they were doing, had a higher and faster scaling rate. We want to start targeting areas to do that, too.

We hear a lot about entrepreneurs, young people and the technology itself, but schools and libraries seem really important to me. How will existing institutions be part of the future that you're funding and building?

Michael Maness: One of the things that we're doing is moving into more "prototyping" types of grants and then finding ways of scaling those out: helping get ideas into a proof-of-concept phase so users can kick the tires, then looking for ways to scale afterward.

In terms of the institutions, one of the things that we've seen that's been a bit of a frustration point is making sure that when we have innovations, [we're] finding the best ways to parlay those into absorption in these kinds of institutions.

A really good standout for that, from a couple years ago as a News Challenge winner, is DocumentCloud, which has been adopted by a lot of the larger legacy media institutions. From a university standpoint, we know one of the things that is key is getting involvement with students as practitioners. They're trying these things out and they're doing the two kinds of modeling that we're talking about. They're using the newest tools in the curriculum.

That's one of the reasons we made the grant [to Columbia]. They have a good track record. The other reason is that you have a real practitioner there in Emily Bell, who did all of her digital work at The Guardian and really knows how to implement new ways of reporting. She's been vital. We see her as someone who has lived in an actual newsroom, pulling in those digital projects and finding new ways for journalists to implement them.

The other aspect is that there are just a lot of unknowns in this space. As we move forward, using these new tools for data visualization, for database reporting, what are the things that work? What are the things that are hard to do? What are the ideas that make the most impact? What efficiencies can we find to help newsrooms do it? We didn't really have a great body of knowledge around that, and that's one of the things that's really exciting about the project at Columbia.

How will you make sure the results of the research go beyond Columbia's ivy-covered walls?

Michael Maness: That was a big thing that we talked about, too, because it's not in us to do a lot of white papers around something like this. It doesn't really disseminate. A lot of this grant is around making sure that there are convocations.

We talk a lot about the creation of content objects. If you're studying data visualization, we should be making sure that we're producing that as well. This will be something that's ongoing and emerging. Definitely, a part of it is that some of these resources will go to hold gatherings, to send people out from Columbia to disseminate [research] and also to produce findings in a way that can be moved very easily around a digital ecosystem.

We want to make sure that you're running into this work a lot. This is something that we've baked into the grant, and we're going to be experimenting with, I think, as it moves forward. But I hear you, that if we did all of this — and it got captured behind ivy walls — it's not beneficial to the industry.

Related:

May 22 2012

A gaming revolution, minus the hype

In the following interview, "Playful Design" author John Ferrara (@PlayfulDesign) explains what he sees as the real gaming revolution — not "gamification," or the application of gaming characteristics to existing applications and processes, but how games themselves can and will be a "force of cultural transformation." Ferrara also reveals five universal principles of good game design.

Our interview follows.

How are mobile and social technologies affecting game design and the evolution of gaming technology?

John Ferrara: One of the really surprising things about modern smartphones and tablets is that they've turned out to be such credible gaming platforms. They open doors to new ways of experiencing games by giving designers access to touchscreens, accelerometers, cameras, microphones, GPS, and Internet connectivity through a single device. They also allow games to be experienced in new contexts, enjoyed on the train to work, in the minutes between meetings, and while you're out with friends. The traditional gaming model, where players sit passively in one place in the home and stare at a fixed screen, seems stodgy and limiting by comparison.

The funny thing about social technology is that before we had video games, gaming was almost always a social activity. You needed to have multiple people to play most board games, card games, and sports — in fact, the game was often just a pretense for people to get together. But then video games made solitary experiences more of the norm. Now social technology is bringing gaming back to its multiplayer roots, but it's also going beyond what was ever possible before by enabling hyper-social experiences where you're playing with dozens of friends and family at once. Even though you may be separated from these people in space and time, you have an intimate sense of shared presence and community when you're playing. That's revolutionary.

How do you see the social media aspects of gaming seeping into day-to-day life?

John Ferrara: Games certainly can transform the workplace, though I want to caution that it's very easy to make the mistake of dressing up everyday work activities as games by just tacking on some points and badges. That's not game design, and people will recognize that it's not. In the process of failing, approaches like this generate cynicism toward the effort. Games need to be designed to be games first and foremost. They must be intrinsically rewarding, enjoyed for their own sake.

That said, I absolutely believe that games can work at work. As you suggest, for example, they have great strengths for training. Games create a safe space for people to test out their mastery of a set of skills in ways that aren't possible or practical in the real world. They can also help people figure out how best to handle different situations. Say, for example, that you created a game to develop management skills. You might allow players to assign values to their in-game avatars like "nurturing," "autocratic," or "optimistic," which lead to different behavior paths. Players could then examine how these traits play out in a situation filled with characters who have different values like "dependability," "autonomy," and "efficiency." A structure like this could not only impart insight about management styles, but also invite introspection about how an individual's own personality traits may lead to success and failure in the real world.

In your book's introduction, you say, "I hope to start moving toward a post-hype discussion of how games can most effectively achieve great things in the real world." Who is leading the way — or at least moving in the right direction — and what are they doing?

John Ferrara: You know, there's so much really inventive work being done right now. Recently, I've been playing a lot of "Zombies, Run!," and I think it's great. This is a game for smartphones that overlays a narrative about survivors in a zombie apocalypse onto your daily run. As you're out getting your exercise, you're listening to the game events as they unfold, and you can hear the zombies closing in. It's a great use of fantasy, and it plays as a true game with meaningful choices and conflict.

There's also a great group at the University of Wisconsin-Madison that's developed a smartphone app called ARIS, which builds game scenarios into physical locations, and they've developed dozens of applications for it. One of them is being developed as a museum tour for the Minnesota Historical Center, giving people quests to complete by scanning objects in the exhibit and then using them to complete objectives in a story line. The museum is actually changing the way the exhibit is laid out to better accommodate the gameplay, moving away from the traditional snaking path to more of an open layout that allows players to move more freely between the interactive displays to solve the game's challenges.

Some of the thought leaders who I really admire include Eric Klopfer and Scot Osterweil at MIT, Ian Bogost at the Georgia Institute of Technology, and Jane McGonigal. A common current among these thinkers is their emphasis on games themselves as a force of cultural transformation, rather than simplistic "gamification" of software applications that lead to little or no meaningful change.

What about engineering games like "Foldit" — with improved UX, could this type of crowdsourced gaming become a viable research tool?

John Ferrara: This is what's been called "human computation," where a group of people work together to solve some complex problem as a by-product of some other action, like playing a game. Luis von Ahn at Carnegie Mellon describes games as algorithms that are executed by people rather than machines, and I think that's a really fascinating idea. Foldit is a great example. This is a puzzle game where players try to figure out how to fold chains of proteins. This is a problem that's very well suited to human computation because it requires a type of intuitive reasoning that's very difficult for actual computers. Foldit made big news last fall when the people playing it decoded the structure of a protein related to a virus that causes AIDS in monkeys, which had eluded researchers for years.

This is a wonderful demonstration of how this type of game can be really valuable to researchers. At the same time, I'm very critical of Foldit because I think its gameplay experience is kind of awful. It's very difficult to figure out which actions lead to the results you see on-screen — like why you're awarded points the way you are — and there's not a strong sense of objectives or conflict. These design issues place limits on the appeal of Foldit, and that's a big problem because human computation works better the more people you have playing. If the gameplay were really compelling and fun, then the sky would be the limit.

How do you see the collection and use of gaming data evolving?

John Ferrara: Games can produce enormous volumes of data because it's really simple to gather every little interaction the player has in the game and report it all back to a central server. This has immediate applications for game design itself. Zynga, for example, uses data to determine which design choices create greater tendencies for players to stay engaged longer, involve more friends, or pay to enhance the game experience. I expect this kind of data collection and analysis to become the norm because companies will be more successful the better they can do it.

I would suggest that financial services could be one of the biggest secondary beneficiaries of such data because there's so much to learn about how people make financial decisions under different circumstances. Staying with the Zynga theme, suppose players have the option of investing in any of a variety of different farm crops, each of which has different strengths and vulnerabilities to environmental conditions. How do players choose which ones they should purchase? How do they appraise risk and reward? Which presentations of information lead to a better understanding of a crop's attributes? Which lead people to make more appropriate choices for their goals? All of these questions can be examined quantitatively through games and can lead to greater insights into the innate qualities of human psychology that drive investor behavior and decision making.
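The kind of analysis Ferrara describes, logging every in-game choice centrally and asking how players appraise risk and reward, can be sketched in a few lines of Python. The event records, crop names, and success flags below are invented for illustration; a real pipeline would read them from a game server's event stream:

```python
from collections import Counter

# Hypothetical in-game event log: each record notes which crop a player
# bought and whether the purchase paid off in the game.
events = [
    {"player": "p1", "crop": "wheat",    "succeeded": True},
    {"player": "p1", "crop": "tomatoes", "succeeded": False},
    {"player": "p2", "crop": "wheat",    "succeeded": True},
    {"player": "p3", "crop": "tomatoes", "succeeded": True},
    {"player": "p3", "crop": "wheat",    "succeeded": False},
]

# How often is each crop chosen?
crop_counts = Counter(e["crop"] for e in events)

# Success rate per crop: a crude proxy for how well players
# appraise risk and reward when making that choice.
def success_rate(crop):
    outcomes = [e["succeeded"] for e in events if e["crop"] == crop]
    return sum(outcomes) / len(outcomes)

rates = {crop: success_rate(crop) for crop in crop_counts}
```

Questions like "which presentations of information lead to better choices" would then become A/B comparisons over aggregates like `rates`, computed separately for each presentation shown to players.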

What are some emerging best practices for game technology?

John Ferrara: Best practices vary widely depending on the game and the type of player motivations to which it appeals. For example, games meant to promote a sense of immersion like "Red Dead Redemption" remove as many user interface elements from immediate view as possible. Data-intensive games like "Tiny Tower" benefit by compressing as much information and as many functional controls as they can into the smallest possible space.

With that in mind, there are some clear universal principles for the design of all games:

  • Skip the manual and embed as much instruction into the gameplay as you can.
  • Fit the game into the player's lifestyle so that he or she can play when and where it's convenient.
  • Don't cheat — people recognize when a game unfairly stacks the odds against them and they resent it.
  • Make sure players always have a clear sense of cause and effect, and that they understand what actions are available to them.
  • Above all, playtest, playtest, playtest. It's impossible to fully anticipate how people will react to a game short of actually watching them play it.

In the book, you argue that games should be used as instruments of persuasion. Why is this?

John Ferrara: To be clear, it's not that all games should be persuasive but that people who want to persuade should look at games very seriously; I believe they present an ideal way to convince people to adopt a particular point of view or to move them to action in the real world. Ian Bogost describes games as a form of "procedural rhetoric," meaning that they communicate messages through participation in the experience. This creates a lot of advantages for persuasion. For example, it allows a kind of self-directed discovery where people adopt the designer's message as a working hypothesis and then test its truthfulness through the gameplay. That's a really powerful way to get your point across. Furthermore, it builds a sense of personal ownership of the insight the player has uncovered.

Are there ethical concerns related to persuasion in gaming environments?

John Ferrara: As there are for any medium, certainly. Film, television, books, billboards, oratory, and posters have all been appropriated for less-than-above-board purposes. Whether it's propaganda, demagoguery, misleading advertising, or dirty politics, you'd expect that games would be subject to the same kinds of unethical practices. It's especially important to be aware of this in the case of games, considering how compelling a procedural rhetoric can be. Rather than casting a negative light on games, however, I think that speaks to their power to effect meaningful change in the real world. I believe that games can achieve great things, and I expect that over the next decade we'll see them doing a lot of good.

This interview was edited and condensed.

Related:

Data journalism research at Columbia aims to close data science skills gap

Successfully applying data science to the practice of journalism requires more than providing context and finding clarity in vast amounts of unstructured data: it will require media organizations to think differently about how they work and who they venerate. It will mean evolving toward a multidisciplinary approach to delivering stories, where reporters, videographers, news application developers, interactive designers, editors and community moderators collaborate on storytelling, instead of being segregated by departments or buildings.

The role models for this emerging practice of data journalism won't be found on broadcast television or on lists of the top journalists of the past century. They're drawn from the growing pool of people who are building new breeds of newsrooms and extending the practice of computational journalism. They see the reporting that provisions their journalism as data: a body of work that can itself be collected, analyzed, shared and used to create longitudinal insights about the ways that society, industry or government are changing. (Or not, as the case may be.)

In a recent interview, Emily Bell (@EmilyBell), director of the Tow Center for Digital Journalism at the Columbia University School of Journalism, offered her perspective on what's needed to train the data journalists of the future and the changes that still need to occur in media organizations to maximize their potential. In this context, while the roles of institutions and journalism education are themselves evolving, both will still fundamentally matter for "what's next," as practitioners adapt to changing newsonomics.

Our discussion took place in the context of a notable investment in the future of data journalism: a $2 million research grant to Columbia University from the Knight Foundation to research and distribute best practices for digital reportage, data visualizations and measuring impact. On the Knight Foundation's blog, Bell explained how the research effort will help newsrooms determine what's next:

The knowledge gap that exists between the cutting edge of data science, how information spreads, its effects on people who consume information and the average newsroom is wide. We want to encourage those with the skills in these fields and an interest and knowledge in journalism to produce research projects and ideas that will both help explain this world and also provide guidance for journalism in the tricky area of ‘what next’. It is an aim to produce work which is widely accessible and immediately relevant to both those producing journalism and also those learning the skills of journalism.

We are focusing on funding research projects which relate to the transparency of public information and its intersection with journalism, research into what might broadly be termed data journalism, and the third area of ‘impact’ or, more simply put, what works and what doesn’t.

Our interview, lightly edited for content and clarity, follows.

What did you do before you became director of the Tow Center for Digital Journalism?

I spent ten years as editor-in-chief of The Guardian website. During the last four of those years, I was also overall director of digital content for all The Guardian properties. That included things like mobile applications, et cetera, but from the editorial side.

Over the course of that decade, you saw one or two things change online, in terms of what journalists could do, the tools available to them and the news consumption habits of people. You also saw the media industry change, in terms of the business models and institutions that support journalism as we think of it. What are the biggest challenges and opportunities for the future of journalism?

For newspapers, there was an early warning system: newspaper circulation has not really consistently risen since the early 1980s. We had a long trajectory of increased production and, actually, an overall systemic decline, which was masked by a very, very healthy advertising market. That market went on an incredible bull run while the product stayed largely static (the strategy was just "widen the pipe"), which I think fooled a lot of journalism outlets and publishers into thinking that that was the real disruption.

And, of course, it wasn’t.

The real disruption was the ability of anybody anywhere to upload multimedia content and share it with anybody else who was on a connected device. That was the thing that really hit hard, when you look at 2004 onwards.

What journalism has to do is reinvent its processes, its business models and its skillsets to function in a world where human capital does not scale well, in terms of sifting, presenting and explaining all of this information. That’s really the key to it.

The skills that journalists need to do that -- including identifying a story, knowing why something is important and putting it in context -- are incredibly important. But how you do that, and which particular elements you now use to tell that story, are changing.

Those now include the skills of understanding the platform that you’re operating on and the technologies which are shaping your audiences’ behaviors and the world of data.

By data, I don’t just mean large caches of numbers you might be given or might be released by institutions: I mean that the data thrown off by all of our activity, all the time, is simply transforming the speed and the scope of what can be explained and reported on and identified as stories at a really astonishing speed. If you don’t have the fundamental tools to understand why that change is important and you don’t have the tools to help you interpret and get those stories out to a wide public, then you’re going to struggle to be a sustainable journalist.

The challenge for sustainable journalism going forward is not so different from what exists in other industries: there's a skills gap. Data scientists and data journalists use almost the exact same tools. What are the tools and skills that are needed to make sense of all of this data that you talked about? What will you do to catalog and educate students about them?

It's interesting when you say that the skills of these two groups are very similar, which is absolutely right. First of all, you need a basic level of numeracy, and maybe not just a basic level but a more sophisticated understanding of statistical analysis. That’s not something which is routinely taught in journalism schools, but I think it increasingly will have to be.

The second thing is having some coding skills or some computer science understanding to help with identifying the best, most efficient tools and the various ways that data is manipulated.

The third thing is that when you’re talking about 'data scientists,' it’s really a combination of those skills. Adding data doesn’t remove the need for the other journalism skills, which do not change: understanding context, understanding what the story might be, and knowing how to derive that from the data you’re given or the data that exists. Put simply: how do you collect it? How do you analyze it? How do you interpret it and present it?
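The collect/analyze/present loop Bell describes can be illustrated with nothing more than Python's standard library. The dataset here, monthly response times for a city service, is invented; the point is the basic numeracy step of summarizing released data before looking for the story:

```python
import statistics

# Hypothetical released dataset: monthly response times (in minutes)
# for a city service, as a reporter might receive them in a records request.
response_times = [12.5, 14.0, 9.8, 45.2, 13.1, 12.9, 14.4]

mean = statistics.mean(response_times)
median = statistics.median(response_times)
stdev = statistics.stdev(response_times)

# The mean exceeds the median here because of one outlier month
# (45.2 minutes) -- often the month worth a follow-up call.
outliers = [t for t in response_times if abs(t - median) > 2 * stdev]
```

Comparing the mean against the median, and flagging values far from the median, is a first-pass way to spot the anomalies that turn a data dump into a lead.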

It’s easy to say, but it’s difficult to do. It’s particularly difficult to reorient the skillsets of an industry which has very much resided around the idea of the written story and an ability with editing. Even in the places where I would say there’s sophisticated use of data in journalism, it’s still a minority sport.

I’ve talked to several heads of data in large news organizations and they’ve said, “We have this huge skills gap because we can find plenty of people who can do the math; we can find plenty of people who are data scientists; we can’t find enough people who have those skills but also have a passion or an interest in telling stories in a journalistic context and making those relatable.”

You need a mindset which is about putting this in the context of the story and spotting stories, as well as having creative and interesting ideas about how you can actually collect this material for your own stories. It’s not a passive kind of processing function if you’re a data journalist: it’s an active seeking, inquiring and discovery process. I think that’s something which is actually available to all journalists.

Think about just local information and how local reporters go out and speak to people every day on the beat, collect information, et cetera. At the moment, most don’t structure the information they get from those entities in a way that will help them find patterns and build new stories in the future.

This is not just about an amazing graphic that the New York Times does with census data over the past 150 years. This is about almost every story. Almost every story has some component of reusability or a component where you can collect the data in a way that helps your reporting in the future.

To do that requires a level of knowledge about the tools you’re using, like coding, Google Refine or Fusion Tables. There are lots of freely available tools out there that are making this easier. But if you don’t have the mindset that understands why this is going to help you and make you a better reporter, then it’s sometimes hard to motivate journalists to see why they might want to grab on.
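As a minimal sketch of what structuring beat information might look like (the notes, sources, and topics below are hypothetical), a reporter could keep each reported fact as a row in a small database and let queries, rather than memory, surface recurring patterns:

```python
import sqlite3

# An in-memory database standing in for a reporter's structured notebook.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE beat_notes (
    date TEXT, source TEXT, topic TEXT, detail TEXT)""")

# Hypothetical facts collected on the beat over several months.
rows = [
    ("2012-03-01", "city council", "zoning", "variance granted on Elm St"),
    ("2012-04-12", "city council", "zoning", "variance granted on Oak St"),
    ("2012-05-02", "school board", "budget", "arts funding cut 10%"),
]
conn.executemany("INSERT INTO beat_notes VALUES (?, ?, ?, ?)", rows)

# Which topics keep coming up? A recurring topic is a candidate story.
recurring = conn.execute(
    "SELECT topic, COUNT(*) AS n FROM beat_notes "
    "GROUP BY topic ORDER BY n DESC").fetchall()
```

The same rows that answer today's question remain queryable months later, which is the reusability Bell points to in the next answer: almost every story can feed future reporting if its facts are captured in structured form.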

The other thing to say, which is really important, is there is currently a lack of both jobs and role models for people to point to and say, “I want to be that person.”

I think the final thing I would say to the industry is that we’re getting a lot of smart journalists now. We are one of the schools where all of the digital concentrations for students this year include a basic grounding in data journalism. Every single one of them. We have an advanced course in data visualization taught by Susan McGregor. But we’re producing people from the school now who are being hired to do these jobs, and the people who are hiring them are saying, “Write your own job description, because we know we want you to do something; we just don’t quite know what it is. Can you tell us?”

You can’t cookie-cutter these people out of schools and drop them into existing roles in newsrooms because those roles are still developing. What we’re seeing are some very smart reporters with data-centric mindsets and the ability to do these stories -- but they want to be out reporting. They don’t want to be confined to a desk and a spreadsheet. Editors often find that very hard to picture: “Well, what does that job look like?”

I think this is where, working with the industry, we can start to figure some of these things out, produce some experimental work or stories, and do some of the thinking in the classroom that helps people figure out what this whole new world is going to look like.

What do journalism schools need to do to close this 'skills gap'? How do they need to respond to changing business models? What combination of education, training and hands-on experience must they provide?

One of the first things they need to do is identify the problem clearly and be honest about it. I like to think that we’ve done that at Columbia, although I’m not a data journalist. I don’t have a background in it. I’m a writer. I am, if you like, completely the old school.

But one of the things I did do at The Guardian was helped people who early on said to me, “Some of this transformation means that we have to think about data as being a core part of what we do.” Because of the political context and the position I was in, I was able to recognize that that was an important thing that they were saying and we could push through changes and adoption in those areas of the newsroom.

That’s how The Guardian became interested in data. It’s the same in journalism school. One of the early things that we talked about [at Columbia] was how we needed to shift some of what the school did on its axis and acknowledge that this was going to be a key part of what we do in the future. Once we acknowledged that that was something we had to work towards, [we hired] Susan McGregor from the Wall Street Journal’s Interactive Team. She’s an expert in data journalism and has an MA in technology in education.

If you say to me, “Well, what’s the grand vision here?” I would say the same thing I would say to anybody: over time, and hopefully not too long a course of time, we want to attract a type of student who is interested and capable in this approach. That means getting out, motivating and talking to people. It means producing attractive examples that high school children and undergraduate programs think about [in their studies]. It means talking to the CS [computer science] programs -- and, in fact, talking more to those programs and math majors than to the liberal arts professors or the historians or the lawyers or the people who have traditionally been involved.

I think that has an effect: it starts to show people who are oriented toward storytelling, but have capabilities which align more with data science skill sets, that there’s a real path for them. We can’t message that early enough as an industry. We can’t message it early enough as educators to get people into those tracks. We have to make sure that the teaching is high quality and that we’re not just carried away with the idea of the new thing; we need to think pretty deeply about how we get those skills.

What sort of basic sort of statistical teaching do you need? What are the skills you need for data visualization? How do you need to introduce design as well as computer science skills into the classroom, in a way which makes sense for stories? How do you tier that understanding?

You're always going to produce superstars. Hopefully, we’ll be producing superstars in this arena soon as well.

We need to take the mission seriously. Then we need to build resources around it. And that’s difficult for educational organizations because it takes time to introduce new courses. It takes time to signal that this is something you think is important.

I think we’ve done a reasonable job of that so far at Columbia, but we’ve got a lot further to go. It's important that institutions like Columbia do take the lead and demonstrate that we think this is something that has to be a core curriculum component.

That’s hard, because journalism schools are known for producing writers. They’re known for different types of narratives. They are not necessarily lauded for producing math or computer science majors. That has to change.

May 21 2012

Social reading should focus on common interests rather than friend status

This post is part of the TOC podcast series. You can also subscribe to the free TOC podcast through iTunes.


Social reading is gaining momentum. There are quite a few startups involved in this space, and most of them simply assume your Facebook friends share the same reading interests you do. ReadSocial is different. In this TOC interview, we hear from ReadSocial co-founder Travis Alber (@screenkapture) on why they're building their platform without tying it to your social graph.

Key points from the full video interview (below) include:

  • Adding conversations into your content — The reading experience needs to flow smoothly, but the reader should have the opportunity to dive into deeper discussions with others along the way without leaving the book environment. [Discussed at 00:39.]
  • Publishers play a role, too — Note that Travis talks about publishers as well as readers here. You can't just have a "build it and they will come" mentality with social reading. Publishers need to take the initiative and add value by inserting comments, managing groups, etc. [Discussed at 2:00.]
  • An open source platform — Open systems are always better than closed ones, and it's great to see that ReadSocial is an open source product. [Discussed at 3:47.]
  • Analytics built in — As publishers we want to learn more about our customers and their reading habits, what they liked in the book, what they skipped over, etc. ReadSocial provides those insights. [Discussed at 4:00.]
  • Hashtags determine what groups you're part of — This functionality gives ReadSocial the flexibility not found in other platforms. It also allows you to be part of just one or many different groups reading the same book. The emphasis here is on common interests rather than a friend status within Facebook, for example. [Discussed at 8:37.]
  • ReadSocial offers API access as well — The entire ReadSocial platform is accessible via APIs, which could lead to all sorts of new and innovative applications. [Discussed at 17:00.]

You can view the entire interview in the following video.

The future of publishing has a busy schedule.
Stay up to date with Tools of Change for Publishing events, publications, research and resources. Visit us at oreilly.com/toc.


May 18 2012

Publishing News: No dismissal for Apple, Macmillan and Penguin

This week brought a couple of important updates in the lawsuits against Apple, Macmillan and Penguin. First, the antitrust lawsuit filed by 16 States' Attorneys General saw 17 more states jump in, and several new details came to light as previously redacted content was made public in the amended complaint. Laura Hazard Owen takes a look at the highlights over at PaidContent, including how the five agency publishers pressured the holdout, Random House, to get on board:

"E-mails to Barnes & Noble: Once five publishers and Apple had enacted agency pricing, the complaint says the five publishers 'worked together to force' Random House to adopt it as well. On March 4, 2010, in an exchange also identified in the DOJ's filing, Penguin CEO David Shanks sent Barnes & Noble's then-CEO Steve Riggio an e-mail reading in part, 'Random House has chosen to stay on their current model and will allow retailers to sell at whatever price they wish ... I would hope that [Barnes & Noble] would be equally brutal to Publishers who have thrown in with your competition with obvious disdain for your welfare ... I hope you make Random House hurt like Amazon is doing to people who are looking out for the overall welfare of the publishing industry.'"

Jane Litte over at Dear Author has a thorough analysis of the amended complaint as well, and also covers the second important lawsuit update of the week: U.S. District Judge Denise Cote denied Apple, Penguin, and Macmillan's motion to dismiss the civil class action lawsuit. Litte offers highlights and analysis of both the amended complaint in the states' lawsuit and Judge Cote's opinion. She says the emphasis on "windowing" — holding back ebook versions of hardcover books in order to sell more of the higher priced editions — is "genius of the DOJ/States' Attorneys General to argue because it sets a pattern of concerted behavior regarding price controls." Litte concludes:

"I think that the defendants (Apple, Penguin and Macmillan) have two options here. Settle now or take their slim chances to jury where I am convinced they will lose and hope that the 2nd Circuit slaps down Judge Cote's per se finding on appeal."

Litte's post is a must-read this week. She also will talk more about the DOJ/States' Attorneys General lawsuits with Kat Meyer on today's Follow the Reader discussion at 4 p.m. eastern on Twitter. You can join in at #followreader.


The anti-piracy holy grail?

What if piracy on the Internet could be shut down? That's what Russian-based startup Pirate Pay is aiming to accomplish. The company, which was partially funded by a $100,000 investment from the Microsoft Seed Financing Fund, is targeting its technology at file sharing on BitTorrent. TorrentFreak reports:

"[Pirate Pay] has developed a technology [that] allows them to attack existing BitTorrent swarms, making it impossible for people to share files ... The company doesn't reveal how it works, but they appear to be flooding clients with fake information, masquerading as legitimate peers."

Company CEO Andrei Klimenko talked a bit more in-depth in an interview at Russia Beyond the Headlines:

"It was not so hard to do from inside an I.S.P.'s network. But to turn the technology into global service, we had to convince all I.S.P.s to acquire our solution. This is, what someone could call, mission impossible. So to create a global service, we had to find the way to do it from the cloud. So we needed money for development."

That's where Microsoft came in. In the interview, Klimenko describes the success of the group's first project, protecting the film "Vysotsky. Thank God I'm Alive" after its release in December:

"We used a number of servers to make a connection to each and every p2p client that distributed this film. Then Pirate Pay sent specific traffic to confuse these clients about the real I.P. addresses of other clients and to make them disconnect from each other. Not all the goals were reached. But nearly 50,000 users did not complete their downloads."

Whether or not the technology will continue to work in the long term is questionable. The BBC reports: "[University of Cambridge security researcher Richard Clayton], who blogs about such issues, said peer-to-peer networks would eventually adapt, sharing information about 'bogus' peers such as those reportedly utilised by companies like Pirate Pay."

"News you read is different than news you say you read"

In a post at AdAge Digital, Steve Rubel mused this week on digital media, social sharing and news consumption. Inspired after an executive briefing at Fairfax Media's headquarters in Sydney, he writes:

"'News you read is different than news you say you read,' said Darren Burden, general manager-news and digital publishing for Fairfax, one of Australia's largest companies. The former is driven by what you want or need to know, and the latter by what you want your friends to think.

"Just like that, Burden nailed the psychology that drives subconscious and routine behaviors in the digital age. The media get it. They know that as social networks become a primary pathway to content, news that's crafted to find you must indeed be different from news that's intended for you to find.

"Few companies can execute both styles equally well, however, and the result is a stylistic continental divide as newsrooms tilt toward one or the other."

Rubel's analysis of how various brands are wrestling with the issue is an interesting read. He concludes that content producers are going to need to be "adept in both styles to create the resonance required to stand out in an age with too much content and not enough time."

Why I haven't caught ereader fever

O'Reilly GM and publisher Joe Wikert (@jwikert) wrote recently about how he can't shake his ereader. I read his story with interest, as I can't seem to justify buying one. I was gifted a second-generation Kindle a while back, and it lived down to all my low expectations. The limitations were primarily the clumsy navigation and single-purpose functionality. I loaned it to a friend; she fell in love, so my Kindle found a new home.

At this point, I do all my ereading on my iPad 2: books, textbooks, magazines, news, short form, long form ... all of it. I will admit, I found the new Nook Simple Touch with GlowLight that Wikert acquired somewhat tempting. The technology is much improved over the second generation Kindle, and though I haven't yet played with one in the store, I bet the execution is much more enjoyable. Still, my original hang-ups prevail.

First, I don't want to be locked in to one retailer. On my iPad, I have apps that allow me to read books bought from anywhere I choose. I can buy books from Amazon, Barnes & Noble, Apple and other smaller retailers, and they will all work on my iPad. True, this spreads my library around in a less-than-ideal organization, but the ability to buy books from anywhere is more important to me.

Also, I'm not so sure ebooks and ereaders will have a place down the road, making the value proposition of the investment that much less appealing. Much like the music journey from records to MP3s, digital reading technology is advancing, and perhaps at a much faster pace than its music counterpart. Jani Patokallio, publishing platform architect at Lonely Planet, recently predicted the obsolescence of ebooks and ereaders within five years, suggesting the web and HTML5 will become the global format for content delivery and consumption. And publications such as the Financial Times and MIT's Technology Review are already dropping their iOS and Android apps in favor of the web and HTML5.

I doubt my iPad will become obsolete any time soon. I look forward to the day books are URLs (or something similar) and we can read them anywhere on any device — and that day may not be too far off. I think I'm so attached to the iPad experience because it simulates this freedom to the best of its ability.

Ereader shortcomings also are likely to present a rich content hindrance, even before a shift to a web/HTML5 format gets underway. In a separate blog post, Wikert talked about a baseball book that missed its opportunity by not curating video links. He wrote: "The video links I'm talking about would have been useless on either device [his Kindle or Nook], but if they were integrated with the ebook I would have gladly read it with the Kindle app on my tablet." As publishers start realizing content opportunities afforded by digital, I think my iPad will serve me better than a single-purpose ereader.

Another hang-up I have, and this is likely to do with my general aversion to change, is the form factor. Most ereaders are somewhere around mass-market-paperback size, and the Nook Simple Touch and Simple Touch with GlowLight are nearly square. I prefer hardcover or trade paperback size — about the size and shape of my iPad. I might be able to get past this particular issue, but given the others I've mentioned, I just can't justify trying.

I will have to surrender to Wikert on the battery life and weight points — the one thing I really liked about the Kindle was its feather-light weight and the fact that during its short stay with me, I never had to charge the battery. I expect the surrender to be temporary, however. I have faith in our engineering friends — two years ago, a research team at MIT was using carbon nanotubes to improve the battery-power-to-weight ratio ... I can't imagine it will be too much longer before life catches up to research. In the meantime, I expect to remain happily connected at the hip to my iPad.



May 10 2012

The reinvention of the bookseller

This post originally appeared on Publishers Weekly.

If you're a brick-and-mortar bookseller, does your blood pressure rise when you think about e-retailers and their deep discounts? Do you look at ebooks as a threat or an opportunity? Depending on how you answered those questions, you might need to ask yourself another one: What business are you really in?

If you're simply in the business of "selling books," I believe you're thinking too narrowly. Think of the story of the successful tools salesman who explained why he was able to sell so many drills: "My competitors sell the drill while I focus on selling the hole." In other words, he emphasizes the benefits while others are busy trying to sell a bunch of meaningless features.

What are the benefits you've successfully provided in the past? When I think of my local bookstore, some of the key benefits I see are personalized service and community. If I want to know more about a book I'm considering, I'd rather talk with a real person than simply trust a bunch of reviews on a website, especially if some of those reviews might be planted by the author or publisher. The main advantage a physical bookstore has over an online one is the in-person advice and support the former can offer.

A lesson from Apple

Despite the sluggish economy of the last few years, some brick-and-mortar retailers have found ways to grow their businesses. Apple is a terrific example. Regardless of whether you're an Apple fan, there's always something new and interesting to discover in an Apple store. I can't tell you the last time I felt that way about a bookstore. I'm not talking about eye candy or glitzy merchandising; when you enter an Apple store you know you're in for a treat.

Wouldn't it be awesome if customers entering your bookstore had that same feeling? I realize Apple can invest a lot in its store experience because it's selling higher-priced items, but maybe that means you need to look beyond simply selling $20 or $30 books. I'm not talking about adding stationery and toys, like some bookstores have done over the years. It's time to think much bigger.

These days most bookstores have some sort of coffee shop or snack bar. Years ago it was a brilliant move to add that dimension, as it helped turn bookstores into a hangout rather than just an in-and-out retail destination. If in-store coffee shops were the game-changing idea of the '90s, what's the new one for the current decade? Here's one possibility: an in-store self-publishing resource. Self-publishing is red-hot and still gaining momentum. But what's sorely lacking in the self-publishing world is a reliable place to go to ask all the questions. How do I get started? What's the best platform? How do I create a marketing campaign? Self-publishing enthusiasts are left with a slew of questionable online options and a few in-person events. Why not create an in-person self-publishing resource within your store?

Take a page out of Apple's playbook and create a Genius Bar service for customers interested in self-publishing. Establish your location as the place to go for help in navigating the self-publishing waters. Remember, too, that most of the income earned in self-publishing is tied to services, e.g., editing, cover design, proofreading, and not necessarily sales of the finished product. Consider partnering with an established expert in these areas or build your own network of providers. The critical point is to evolve your business into something more than just selling books.

This doesn't mean you need to invest in self-publishing equipment to enter the field, but it's interesting to hear from someone who has. I spoke about this with Chris Morrow, co-owner of Northshire Bookstore in Vermont, which has had an Espresso Book Machine for a number of years. According to Morrow:

"The Espresso machine has allowed us to create a self-publishing business and more. It has changed how customers view the bookstore. The self-publishing business is a complementary business that takes advantage of technological developments while being true to our mission."

If my self-publishing suggestion isn't the best option for your store, don't simply give up and assume you'll always have a future selling print books. It's clear to me that the number of brick-and-mortar bookstores will continue to decline; more specifically, the number of brick-and-mortar bookstores that mostly rely on selling print books will continue to decline. Bookstores have always been a source of inspiration and an important community resource for their customers. Think about your own store's unique attributes and how they could be extended as print sales decline. If you go about it the right way, the digital reading revolution won't be a threat but rather a once-in-a-lifetime opportunity to reconceive your business.

Photo: "Books Etc Victoria" by markhillary, on Flickr

May 08 2012

Think of it like a political campaign: Baratunde Thurston's book marketing

Since its release in late January, Baratunde Thurston's book, "How To Be Black," has sold more than 15,000 copies, hitting the New York Times bestseller list out of the gate. Thurston, The Onion's former director of digital, and Craig Cannon, his campaign manager, have employed a slew of creative tactics for selling the book. In a recent interview, Thurston talked with me about what's worked, what hasn't, and the secret sauce for their campaign.

Before you dive in, I'll note that Thurston — in addition to having written a terrific book — has a gift for making people feel like they want to be part of his world. Although I'd read excerpts of the book early on, as he included them in his email newsletter, and although I was given both an electronic and a print copy of the book, I still bought it, just to support him. How can you make that magic happen for your book? Read on.

Any sales numbers we can share for context?

Baratunde Thurston: We went into this with a goal of significant pre-sales to hit the New York Times bestseller list. How many does it take to do that? Anywhere from 1,000 to 100,000 in sales, and it depends on what else has come out that week. The pre-sales all accrue to one week, so you can stack the deck. We had 20,000 pre-sales as a goal. That was insane. We wound up with several hundred pre-sales, which was helpful, but not a juggernaut. We hit the list at No. 21. Mostly, that was useful because The Times had me do a joint interview with Charles Murray that ran in print and online during Black History Month. And that drew some attention.

What we learned from this is that people do not buy books. They like to talk about books. They like to talk about buying them. But they do not buy them.

Also for context, how important are book sales for you?

Baratunde Thurston: Sales are important. I want people to read the book. I want them to spend money. This wasn't a vanity publishing project, but it wasn't a get-rich scheme, either. It was a way for me as a creative person to point to something solid. I speak, I tweet; it's gone. I publish in very forgettable platforms. A book has some staying power. It's a cultural object, a physical object on which you can focus some attention.

What elements of the campaign worked?

Baratunde Thurston: We decided to treat this like a political campaign — more about the issues than the politician. We asked ourselves: Can we create a sense of movement that has other people seeing themselves in a book about me?

There was a process to arrive at the plan, and it equaled me coming up with the marketing. I knew I had to get on it when I was on a trip somewhere, and I got an email from Harper [Collins Publishers]: "Do you think you'll tweet about the book?"

There was a big research phase, talking to people I know or was introduced to, like Gary Vaynerchuk, Deanna Zandt, Eric Ries, Amber Rae at the Domino Project, and Tim Ferriss. There were a lot of conversations had and articles read. There's no excuse to make things up completely or rely on hope.

We went with a content-oriented promotion strategy — check out this video or tweet or interview. So, for example, we wound up with 50 short videos that we could build into the book and the campaign.

The book's website was the heart. We posted a daily question every day in February, seeding it with the video I had already shot.

For speaking gigs I'd already booked, we asked if we could add book sales.

We had field ops — the Street Team. They were the ideal beta group: 115 people, half active and half of those really dedicated. We thought each street team member would equate to sales, but it's turned out to be more important as a group that lets us test ideas.

We also identified the high-value donors — people who are going to deliver a bunch of votes or cash. I went through all my contacts manually, about 4,500 people, and scrubbed that down to about 1,800 real people. I tagged them lightly, looking at them in terms of relevance. And then I started reaching out to them one by one.

"Fresh Air" with Terry Gross worked. MSNBC appearances worked.

How did the Street Team work out?

Baratunde Thurston: We tried to build a very loyal, very intense community. People had to apply. We asked them to participate in web video chats. It was like they made it through basic training. And that was kind of the goal: to have a group of advocates you can deploy in different ways. At launch parties across the country, they help out. Craig crashes on their sofas. They provide a support network; they're the volunteer fire department.

They also became an early-warning system for how the public would interpret the book. They weren't biased the way the other people close to me were. For instance, during Street Team video chats, they asked questions the public would ask. So I'd go to launch parties and interviews really prepared with answers.

This notion of showing the book cover in the hands of people as an image of value — they helped create that. Somebody Photoshopped Michael Phelps holding it, and that was one of the first we saw. We seeded that idea with the Street Team, and they ran with it. The Photoshopping became redundant because actual people were holding the book and people were taking their pictures. It turned into a photomeme as people began to post them [to Twitter and the "How To Be Black" website].

We had a roadmap of things we had to do, and one thing we didn't miss was the Amazon reviews. We wanted to get them up within hours of the book's availability to set the trend for five-star reviews. We had a video chat with the Street Team right before the Amazon release. Within hours, we had 10 five-star reviews. That signaled to the Amazon buying market that it was a worthwhile book, and the Street Team provided the initial traction. And it's not just the number of five-star reviews, it's also how many reviews were helpful or not. We basically created our own Amazon Vine program.

What didn't work the way you expected?

Baratunde Thurston: The goal of 20,000 pre-sales didn't work. Every weekday in February, I should have been doing something for Black History Month. That didn't quite work, because the lead time for booking events is six months to a year, and we weren't on top of it early enough. As I mentioned, having the Street Team directly account for a certain number of units distributed didn't quite work.

What role did Craig Cannon play?

Baratunde Thurston: I knew Craig loosely at the Onion [where he was graphics editor]. He invited me to lunch to talk about something he was working on, a project with Skillshare. About five or six months before the book launched, we did a class on how to be black. That was a good test for our relationship.

We had a huge Google doc with everything laid out. Craig set up the Tumblr, the Facebook page, a private group for the Street Team, the tour support, the admin support. He's running the merchandise business. The black card — he just went off and built it.

I would have been able to do a lot of that worse. Even the two of us are only hitting 60% capacity. We should have had merch ready at launch. At some of our book events, we didn't have books.

For people who don't have a Craig, the most important thing is the personal one-on-one outreach. Look at the market of people interested in your topic, interested in you. Start with your inner circle. I had an epiphany with Gary Vaynerchuk. I asked: "Did I ever ask you to buy my book?" He said, "Yeah, I bought it yesterday." I talked about his book, but cash on the table — it didn't happen. He wished he had identified everyone he knows, sending a personal note explaining: "A) buy the book; B) this means a lot to me. You owe me or I will owe you. Here's some things you can do to help: If you have speaking opportunities, let me know. For instance, I would love to speak at schools." Make it easy for people who want to help you. Everything else is bonus. If you haven't already converted the inner circle, you've skipped a critical step.

What specific marketing technique would you recommend to other authors?

Baratunde Thurston: You can make everything easier by figuring out what value to attach your book to. We've been working under the over-arching theme of identity. If you blog every week about why your book is so awesome, nobody cares. If you're producing relevant, interesting content, they get attached to you in context. That leads to sales. It's a good model.

Once you've actually articulated what that value is, make everything else consistent with that. For us, it was comfort with yourself and your identity — everybody has an outsider identity. That provides a roadmap for interviews and events. It establishes the brand and reinforces it. This approach requires time and consideration, but not cash. It's not just reactive. For instance, this book is about DIY culture that makes the world a better place. With that approach, somebody like my friend Nora Abousteit can get involved, even though race, per se, isn't her issue.

Anything else you want to add?

Baratunde Thurston: There was a very important tactical layer, the secret sauce: Knod.es [Note: this is launching to the public soon]. Ron Williams, Knod.es founder, has been an essential shadow. The types of services Knod.es provides — pre-qualified leads — are going to be important for everything. We were sending targeted blasts around and used Knod.es to augment that. The results have been incredible.

For example, we wanted people to submit more content to the How To Be Black Tumblr. After launch, it had faded. We recruited 18 people [some from the Street Team] to push a message through Facebook and email. We had a 50% conversion rate on those messages, and got in nine stories without trying that hard. In the same way you approach your network of friends, you can do the same with social networks where you don't know them as well but they still want to help. You still have to make it easy for people to help you, but finding the value in your existing relationships — that's incredibly valuable. "The Today Show" isn't available to everyone.

This interview was edited and condensed.

May 07 2012

A brief history of data journalism

The following is an excerpt from "The Data Journalism Handbook," a collection of essays and resources covering the growing field of data journalism.


In August 2010 some colleagues and I organised what we believe was one of the first international "data journalism" conferences, which took place in Amsterdam. At this time there wasn't a great deal of discussion around this topic and there were only a couple of organizations that were widely known for their work in this area.

The way that media organizations like the Guardian and the New York Times handled the large amounts of data released by Wikileaks is one of the major steps that brought the term into prominence. Around that time the term started to enter into more widespread usage, alongside "computer-assisted reporting," to describe how journalists were using data to improve their coverage and to augment in-depth investigations into a given topic.

Speaking to experienced data journalists and journalism scholars on Twitter it seems that one of the earliest formulations of what we now recognise as data journalism was in 2006 by Adrian Holovaty, founder of EveryBlock — an information service which enables users to find out what has been happening in their area, on their block. In his short essay "A fundamental way newspaper sites need to change," he argues that journalists should publish structured, machine-readable data, alongside the traditional "big blob of text":

"For example, say a newspaper has written a story about a local fire. Being able to read that story on a cell phone is fine and dandy. Hooray, technology! But what I really want to be able to do is explore the raw facts of that story, one by one, with layers of attribution, and an infrastructure for comparing the details of the fire — date, time, place, victims, fire station number, distance from fire department, names and years experience of firemen on the scene, time it took for firemen to arrive — with the details of previous fires. And subsequent fires, whenever they happen."
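Holovaty's point is easy to picture in code. As a rough sketch (the field names below are illustrative, not taken from his essay), the same fire story published as structured, machine-readable data rather than a "big blob of text" might look like this:

```python
# Hypothetical sketch of a fire story as structured data.
# Field names are illustrative only, loosely following Holovaty's list.
fire_story = {
    "date": "2006-09-05",
    "time": "14:32",
    "place": "123 Elm St.",
    "victims": 0,
    "fire_station_number": 7,
    "firefighters": [
        {"name": "J. Smith", "years_experience": 12},
        {"name": "R. Jones", "years_experience": 4},
    ],
    "response_time_minutes": 6,
}

# Once stories are records instead of prose, the comparisons Holovaty
# describes become trivial queries -- e.g., average response time
# across this fire and a later one:
stories = [fire_story, {**fire_story, "response_time_minutes": 10}]
avg = sum(s["response_time_minutes"] for s in stories) / len(stories)
print(avg)  # 8.0
```

The structure is the point: each fact gets a named field, so "this fire versus previous fires" is a computation rather than a re-reporting job.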

But what makes this distinctive from other forms of journalism which use databases or computers? How — and to what extent — is data journalism different from other forms of journalism from the past?

"Computer-Assisted Reporting" and "Precision Journalism"

Using data to improve reportage and delivering structured (if not machine readable) information to the public has a long history. Perhaps most immediately relevant to what we now call data journalism is "computer-assisted reporting" or "CAR," which was the first organised, systematic approach to using computers to collect and analyze data to improve the news.

CAR was first used in 1952 by CBS to predict the result of the presidential election. Since the 1960s, (mainly investigative, mainly U.S.-based) journalists have sought to independently monitor power by analyzing databases of public records with scientific methods. Also known as "public service journalism," advocates of these computer-assisted techniques have sought to reveal trends, debunk popular knowledge and reveal injustices perpetrated by public authorities and private corporations. For example, Philip Meyer tried to debunk received readings of the 1967 riots in Detroit — to show that it was not just less-educated Southerners who were participating. Bill Dedman's "The Color of Money" stories in the 1980s revealed systemic racial bias in the lending policies of major financial institutions. In his "What Went Wrong," Steve Doig sought to analyze the damage patterns from Hurricane Andrew in the early 1990s, to understand the effect of flawed urban development policies and practices. Data-driven reporting has brought valuable public service, and has won journalists famous prizes.

In the early 1970s the term "precision journalism" was coined to describe this type of news-gathering: "the application of social and behavioral science research methods to the practice of journalism." Precision journalism was envisioned to be practiced in mainstream media institutions by professionals trained in journalism and social sciences. It was born in response to "new journalism," a form of journalism in which fiction techniques were applied to reporting. Meyer suggests that scientific techniques of data collection and analysis rather than literary techniques are what is needed for journalism to accomplish its search for objectivity and truth.

Precision journalism can be understood as a reaction to some of journalism's commonly cited inadequacies and weaknesses: dependence on press releases (later described as "churnalism"), bias towards authoritative sources, and so on. These are seen by Meyer as stemming from a lack of application of information science techniques and scientific methods such as polls and public records. As practiced in the 1960s, precision journalism was used to represent marginal groups and their stories. According to Meyer:

"Precision journalism was a way to expand the tool kit of the reporter to make topics that were previously inaccessible, or only crudely accessible, subject to journalistic scrutiny. It was especially useful in giving a hearing to minority and dissident groups that were struggling for representation."

An influential article published in the 1980s about the relationship between journalism and social science echoes current discourse around data journalism. The authors, two U.S. journalism professors, suggest that in the 1970s and 1980s the public's understanding of what news is broadened from a narrower conception of "news events" to "situational reporting," or reporting on social trends. By using databases of, for example, census or survey data, journalists are able to "move beyond the reporting of specific, isolated events to providing a context which gives them meaning."

As we might expect, the practice of using data to improve reportage goes back as far as "data" itself. As Simon Rogers points out, the first example of data journalism at the Guardian dates from 1821: a leaked table of schools in Manchester listing the number of students attending each school and its costs. According to Rogers, this showed for the first time the real number of students receiving free education, which was much higher than official figures suggested.

Data Journalism in the Guardian in 1821 (The Guardian)

Another early example in Europe is Florence Nightingale and her key report, "Mortality of the British Army," published in 1858. In her report to Parliament, she used graphics to advocate improvements in health services for the British army. The most famous is her "coxcomb," a spiral of sections, each representing deaths per month, which highlighted that the vast majority of deaths were from preventable diseases rather than from bullets.

Mortality of the British Army by Florence Nightingale (Image from Wikipedia)

Data journalism and Computer-Assisted Reporting

At the moment there is a "continuity and change" debate going on around the label "data journalism" and its relationship with these previous journalistic practices which employ computational techniques to analyze datasets.

Some argue that there is a difference between CAR and data journalism. They say that CAR is a technique for gathering and analyzing data as a way of enhancing (usually investigative) reportage, whereas data journalism pays attention to the way that data sits within the whole journalistic workflow. In this sense data journalism pays as much — and sometimes more — attention to the data itself, rather than using data simply as a means to find or enhance stories. Hence we find the Guardian Datablog or the Texas Tribune publishing datasets alongside stories, or even just datasets by themselves for people to analyze and explore.

Another difference is that in the past, investigative reporters would suffer from a poverty of information relating to a question they were trying to answer or an issue they were trying to address. While this is of course still the case, there is now also an overwhelming abundance of information that journalists don't necessarily know what to do with; they don't know how to get value out of data. A recent example is the Combined Online Information System, the U.K.'s biggest database of spending information, which was long sought after by transparency advocates but which baffled many journalists upon its release. As Philip Meyer recently wrote to me: "When information was scarce, most of our efforts were devoted to hunting and gathering. Now that information is abundant, processing is more important."

On the other hand, some argue that there is no meaningful difference between data journalism and computer-assisted reporting. It is by now common sense that even the most recent media practices have histories, as well as something new in them. Rather than debating whether or not data journalism is completely novel, a more fruitful position would be to consider it part of a longer tradition that is responding to new circumstances and conditions. Even if there might not be a difference in goals and techniques, the emergence of the label "data journalism" at the beginning of the century indicates a new phase wherein the sheer volume of data freely available online, combined with sophisticated user-centric, self-publishing, and crowdsourcing tools, enables more people to work with more data more easily than ever before.

Data journalism is about mass data literacy

Digital technologies and the web are fundamentally changing the way information is published. Data journalism is one part in the ecosystem of tools and practices that have sprung up around data sites and services. Quoting and sharing source materials is in the nature of the hyperlink structure of the web and the way we are accustomed to navigate information today. Going further back, the principle that sits at the foundation of the hyperlinked structure of the web is the citation principle used in academic works. Quoting and sharing the source materials and the data behind the story is one of the basic ways in which data journalism can improve journalism, what Wikileaks founder Julian Assange calls "scientific journalism."

By enabling anyone to drill down into data sources and find information that is relevant to them, as well as to verify assertions and challenge commonly received assumptions, data journalism effectively represents the mass democratization of resources, tools, techniques, and methodologies that were previously used by specialists: investigative reporters, social scientists, statisticians, analysts, and other experts. While quoting and linking to data sources is currently particular to data journalism, we are moving towards a world in which data is seamlessly integrated into the fabric of media. Data journalists have an important role in helping to lower the barriers to understanding and interrogating data, and in increasing the data literacy of their readers on a mass scale.

At the moment the nascent community of people who call themselves data journalists is largely distinct from the more mature CAR community. Hopefully in the future we will see stronger ties between these two communities, in much the same way that new NGOs and citizen media organizations like ProPublica and the Bureau of Investigative Journalism now work hand in hand with traditional news media on investigations. While the data journalism community may have more innovative ways of delivering data and presenting stories, the deeply analytical and critical approach of the CAR community is something data journalism can certainly learn from.

This excerpt was lightly edited. Links were added for EveryBlock, the Guardian Datablog, Texas Tribune datasets, the Combined Online Information System, and Julian Assange's reference to "scientific journalism."

The Data Journalism Handbook (Early Release) — This collaborative book aims to answer questions like: Where can I find data? What tools can I use? How can I find stories in data? (The digital Early Release edition includes raw and unedited content. You'll receive updates when significant changes are made, as well as the final ebook version.)


May 04 2012

Publishing News: Nook gets Microsoft, and soon NFC

Here are a few stories from the publishing space that caught my eye this week.

Microsoft enters the battle of the publishing tech giants

After hinting in January that something might be in the works for the Nook, a deal between Microsoft and Barnes & Noble was announced this week. Reuters reports:

"Microsoft Corp will invest $300 million in Barnes & Noble Inc's digital and college businesses ... Microsoft will get a 17.6 percent stake in the new unit, while Barnes & Noble will own about 82.4 percent ... The business, whose name has not yet been decided, will have an ongoing relationship with Barnes & Noble's retail stores."

The deal has prompted a flurry of discussion.

Felix Salmon has an interesting analysis at Wired, writing that "the news does mean that Barnes & Noble won't need to constantly find enormous amounts of money to keep up in the arms race with Amazon. That's largely Microsoft's job, now." He also points out that the real winners here are readers: "... we finally have a real three-way fight on our hands in the e-book space, between three giants of tech: Apple, Amazon, and Microsoft. And that can only be good for consumers."

Publisher Thad McIlroy offers an initial analysis of the deal, likening the "marriage" to "two losers stumbling to the altar without bridesmaids or witnesses," and a subsequent in-depth look at just what the $300 million exchange means to both sides:

"I know that Microsoft gained in part because the press release states that the two companies 'settled their patent litigation.' To merely settle patent litigation gives you no idea of who the winner is; the settlement can take myriad forms.

However, the sentence in the press release continues, 'moving forward, Barnes & Noble and Newco will have a royalty-bearing license under Microsoft's patents.' That means Barnes & Noble has agreed to pay Microsoft for some or all of its previously disputed patents via this new company (currently called 'Newco'). And that means Microsoft managed to gain the upper hand in these negotiations." [Link added.]

Microsoft analyst Mary Jo Foley over at ZDNet took a look at what the partnership could mean for future devices: a Windows-powered e-reader, perhaps? She reports that during a press/analyst call, "[Microsoft President Andy Lees] mentioned a few times that Microsoft is positioning Windows as key to the future of reading."

O'Reilly GM and publisher Joe Wikert argues this isn't about ebooks at all, suggesting that "Microsoft should instead use this as an opportunity to create an end-to-end consumer experience that rivals Apple's and has the advertising income potential to make Google jealous." He also wonders if Microsoft might influence B&N to deeply discount Nook prices with a two-year content purchase requirement, similar to what the company just did with the Xbox.

In any case, it looks like Wikert's wish for an end-to-end UX might already be in the works. In an interview about the Microsoft deal at CNN Fortune, Barnes & Noble CEO William Lynch says plans are underway to improve offline-online integration to bring a richer experience to customers:

"We're going to start embedding NFC chips into our Nooks. We can work with the publishers so they would ship a copy of each hardcover with an NFC chip embedded with all the editorial reviews they can get on BN.com. And if you had your Nook, you can walk up to any of our pictures, any our aisles, any of our bestseller lists, and just touch the book, and get information on that physical book on your Nook and have some frictionless purchase experience. That's coming, and we could lead in that area."

In response to whether NFC functionality will roll out this year, Lynch said, "Maybe ..."


Amazon loses shelf space

Target decided this week that it would cease carrying the Kindle and its accessories. The Verge reports that "the company is going to stop carrying the line of products due to a 'conflict of interest'" and that "[c]ertain accessories will remain in stock, but shipments of Kindles themselves will cease as of May 13th." Exactly why the decision was made remains unclear, though speculation abounds.

The LA Times quotes a Target spokeswoman with the official benign company line: "Target continually evaluates its product assortment to deliver the best quality and prices for our guests," but then points to a New York Times story with a much more telling tidbit:

"'What we aren't willing to do is let online-only retailers use our brick-and-mortar stores as a showroom for their products and undercut our prices,' Target executives wrote in a letter to vendors, asking them to think of new pricing and inventory strategies, according to a note that Deborah Weinswig, a Citi analyst, sent to clients."

Laura Hazard Owen at GigaOm covers a couple of possible reasons for the Kindle's eviction. Given the note quoted in the New York Times, the most likely is that Amazon tried to negotiate new terms Target couldn't accept, or vice versa. Owen notes two other important points to consider: Target will continue carrying other brands of ereaders and accessories, including the Nook, and Apple is set to begin a mini-store test program with Target.

Also notable: Thus far, I haven't seen Amazon comment on the situation.

Is the end of ereaders and ebooks nigh?

The battle for King of the Ereaders may soon come to an end, not because one of the tech giants wins the war, but because ebook formats lose out to the web and HTML5. So argues Jani Patokallio, publishing platform architect at Lonely Planet, in a blog post.

He says it all boils down to publishing rights and publishers opting "to circle wagons, stick their fingers in their ears and pretend digital is print." He argues that "in the print publishing industry, publishing rights for different countries and languages are both standard practice and a big deal," but these same agreements don't make sense for digital publishing. They are, in fact, hindering customers' ability to purchase and read books:

"Customers today are expected to buy into a format that locks down their content into a silo, limits their purchasing choices based on where their credit card happens to have been registered, is designed to work best on devices that are rapidly becoming obsolete, and support only a tiny subset of the functionality available on any modern website. Nonetheless, publishers are seeing their e-book sales skyrocket and congratulating themselves on a job well done."

Patokallio says that "[o]n the Web, the very idea that the right to read a website would vary from country to country seems patently absurd," and that ebooks have an obvious replacement:

"The same medium that already killed off the encyclopedia, the telephone directory and the atlas: the Web. For your regular linear fiction novel, or even readable tomes of non-fiction, a no-frills PDF does the job just fine and Lonely Planet has been selling its travel guidebooks and phrasebooks a chapter at a time, no DRM or other silliness, as PDFs for years now. For more complicated, interactive, Web-like stuff, throw away the artificial shackles of ePub and embrace the full scope of HTML5, already supported by all major browsers and usable right now by several billion people."

Patokallio's post is a must-read, and there were a couple of indications this week that he might be on to something. First, "[t]he Financial Times is preparing to kill off its iPad and iPhone app for good, signalling its final conversion from executable-app to web-app publishing." Second, in a post at Wired regarding the Microsoft deal with B&N, Felix Salmon says: "... over the long term, we're not going to be buying Kindles or Nooks to read books. Just as people stopped buying cameras because they're now just part of their phones, eventually people will just read books on their mobile device, whether it's running Windows or iOS or something else."
