February 11 2013

J. C. R. Licklider: Libraries of the Future (1965) (at Monoskop) - via worldedness.tumblr | offene Ablage: nothing to hide 2013-02-11

In this book J. C. R. Licklider discussed how information could be stored and retrieved electronically. Although he had not read Vannevar Bush’s “As We May Think,” he realized that Bush’s ideas had diffused through the computing community widely enough to provide a base for his own. His theoretical information network, which he called a “procognitive system,” sounds remarkably similar to Tim Berners-Lee’s World Wide Web: “the concept of a ‘desk’ may have changed from passive to active: a desk may be primarily a display-and-control station in a telecommunication-telecomputation system-and its most vital part may be the cable (‘umbilical cord’) that connects it, via a wall socket, into the procognitive utility net”. This system could be used to connect the user to “everyday business, industrial, government, and professional information, and perhaps, also to news, entertainment, and education.” (source)

Based on a study sponsored by the Council on Library Resources, Inc., and conducted by Bolt, Beranek, and Newman, Inc., between November 1961 and November 1963.

Publisher MIT Press, 1965
ISBN 026212016X, 9780262120166
219 pages


via Archive.org (no longer available there)

Google Books

Download




November 04 2011

The problem with Amazon's Kindle Owners' Lending Library

It's no secret that I'm a big Amazon fan. In fact, I recently took the Amazon side in a debate about platform superiority. (I won that debate, by the way ...) That's why a lot of people are surprised that I'm such an outspoken critic of Amazon's new Kindle Owners' Lending Library. The program is great for Amazon and maybe even for consumers, assuming they're willing to live with the many restrictions, but it's awful for publishers and authors.

Why? As Amazon stated in its press release, "For the vast majority of titles, Amazon has reached agreement with publishers to include titles for a fixed fee." So no matter how popular (or unpopular) a publisher's titles are, the publisher gets one flat fee for participating in the library. I strongly believe this type of program needs to compensate publishers and authors on a usage basis, not with a flat fee. The more a title is borrowed, the higher the fee to the publisher and author. Period.

Even if a flat fee made sense, how could a publisher estimate a fair amount? One key factor is the number of Amazon Prime subscribers. There are a variety of estimates on this figure, but they only apply to this specific point in time. How can you possibly know the number of Prime subscribers Amazon will have in six months? In 12 months? Don't assume you can simply extrapolate this number from historical trends. When the Kindle Fire comes out later this month it will include a 30-day Prime trial, and I expect the Fire's availability and the upcoming holidays to create an enormous surge in Prime subscribers. When will Prime membership double from today's level? It's impossible to say, which means there's no way to estimate how many times a book might get loaned out. That also means it's impossible to come up with a reasonable estimate for a flat fee on a publisher's list.
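To make that instability concrete, here is a minimal sketch of the two payment models. Every number in it (subscriber counts, borrow rate, fees) is hypothetical, not drawn from Amazon or from this post:

def flat_fee_payout(fee):
    # Publisher revenue under a flat participation fee: fixed, whatever happens.
    return fee

def per_use_payout(borrows, fee_per_borrow):
    # Publisher revenue under a pay-for-performance model: scales with lending.
    return borrows * fee_per_borrow

# If Prime membership (and with it borrowing) doubles, the flat fee stays
# put while the usage-based payment doubles alongside it.
for subscribers in (5_000_000, 10_000_000):
    borrows = subscribers * 0.02  # assume 2% of members borrow from this list
    print(subscribers, flat_fee_payout(100_000), per_use_payout(borrows, 1.50))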

I hope Amazon reconsiders and switches this program to a pay-for-performance model. That's the only way I'll ever support it as a publisher.

What's your take? Please weigh in through the comments.

TOC NY 2012 — O'Reilly's TOC Conference, being held Feb. 13-15, 2012 in New York City, is where the publishing and tech industries converge. Practitioners and executives from both camps will share what they've learned and join together to navigate publishing's ongoing transformation.

Register to attend TOC 2012


September 30 2011

[...]


On my visit to the library this morning, I was supplied with a full history of the institution by its appointed caretaker, Betsy Fagin. A few days ago, Betsy, a trained librarian who lives in Brooklyn, came to the protest for the first time and found a short stack of books lying on the ground where everyone was camped out. She decided to go to one of the organizational meetings for the protests and ask if anyone else thought it would be a good idea to start a proper library. People did. So Betsy posted a sign asking for book, newspaper, and poetry donations. She says that since then protesters, bystanders, and even Wall Streeters have been stopping by the park with stacks of books. Because she doesn’t keep track of who checks out what—the library is on the honor system—she has no idea what the most popular book is. She has seen a couple of people reading Allen Ginsberg’s “Howl” and Walt Whitman’s “Leaves of Grass,” and has noticed that manga is flying off the shelves.


[...]

-------------------------

oAnth:

This entry is part of the OccupyWallStreet compilation 2011-09/10, here.


June 02 2011

How the Library of Congress is building the Twitter archive

In April 2010, Twitter announced it was donating its entire archive of public tweets to the Library of Congress. Every tweet since Twitter's inception in 2006 would be preserved. The donation may have been in part a symbolic act, a recognition of the cultural significance of Twitter. Several important historical moments had already been captured on Twitter when the announcement was made last year (the first tweet from space, Barack Obama's first tweet as President, news of Michael Jackson's death), and our awareness of the significance of the communication channel has only grown since then.

That's led to a flood of inquiries to the Library of Congress about how and when researchers will be able to gain access to the Twitter archive. These research requests were perhaps heightened by some of the changes that Twitter has made to its API and firehose access.

But creating a Twitter archive is a major undertaking for the Library of Congress, and the process isn't as simple as merely cracking open a file for researchers to peruse. I spoke with Martha Anderson, the head of the library's National Digital Information Infrastructure and Preservation Program (NDIIP), and Leslie Johnston, the manager of the NDIIP's Technical Architecture Initiatives, about the challenges and opportunities of archiving digital data of this kind.

It's important to note that the Library of Congress is quite adept with the preservation of digital materials, as it's been handling these types of projects for more than a decade. The library has been archiving congressional and presidential campaign websites since 2000, for example, and it currently has more than 200 terabytes of web archives. It also has hundreds of terabytes of digitized newspapers, and petabytes of data from other sources, such as film archives and materials from the Folklife Center. So the Twitter archives fall within the purview of these sorts of digital preservation efforts, and in terms of the size of the archive, it is actually not too unwieldy.

Even with its long experience archiving "born digital" content, Anderson says the Library of Congress "felt pretty brave about taking on Twitter."

OSCON Data 2011, being held July 25-27 in Portland, Ore., is a gathering for developers who are hands-on, doing the systems work and evolving architectures and tools to manage data. (This event is co-located with OSCON.)

Save 20% on registration with the code OS11RAD

What makes the endeavor challenging, if not the size of the archive, is its composition: billions and billions and billions of tweets. When the donation was announced last year, users were creating about 50 million tweets per day. As of Twitter's fifth anniversary several months ago, that number has increased to about 140 million tweets per day. The data keeps coming too, and the Library of Congress has access to the Twitter stream via Gnip for both real-time and historical tweet data.

Each tweet is a JSON file, containing an immense amount of metadata in addition to the contents of the tweet itself: date and time, number of followers, account creation date, geodata, and so on. To add another layer of complexity, many tweets contain shortened URLs, and the Library of Congress is in discussions with many of these providers as well as with the Internet Archive and its 301works project to help resolve and map the links.
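As a rough illustration, a single tweet's payload looks something like the sketch below. The field names follow Twitter's public API of that era (created_at, user, entities, and so on), but this is a trimmed, invented example, not the exact schema the library receives via Gnip:

import json

# A trimmed, illustrative tweet payload; all values are invented.
raw = '''{
  "created_at": "Wed Apr 14 16:00:00 +0000 2010",
  "id": 12172073471,
  "text": "Library of Congress will archive public tweets http://t.co/example",
  "user": {"screen_name": "example_user",
           "followers_count": 1042,
           "created_at": "Tue Mar 21 20:50:14 +0000 2006"},
  "geo": null,
  "entities": {"urls": [{"url": "http://t.co/example",
                         "expanded_url": "http://loc.gov/example"}]}
}'''

tweet = json.loads(raw)
print(tweet["user"]["screen_name"], tweet["created_at"])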

As it stands, Anderson and Johnston say they won't be crawling all these external sites and end-points, although Anderson says that in her "grand vision of the future" all of this data — not just from the Library of Congress but from all these different technological and cultural heritage institutions — would be linked. In the meantime, the Library of Congress won't be creating a catalog of all these tweets and all this data, but they do want to be able to index the material so researchers can effectively search it.

This requires a significant technological undertaking on the part of the library in order to build the infrastructure necessary to handle inquiries, and specifically to handle the sorts of inquiries that researchers are clamoring for. Anderson and Johnston say that a cross-departmental team has been assembled at the library, and they're actively taking input from researchers to find out exactly what their needs for the material may be. Expectations also need to be set about exactly what the search parameters will be — this is a high-bandwidth, high-computing-power undertaking after all.

The project is still very much under construction, and the team is weighing a number of different open source technologies in order to build out the storage, management and querying of the Twitter archive. While the decision hasn't been made yet on which tools to use, the library is testing the following in various combinations: Hive, ElasticSearch, Pig, Elephant-bird, HBase, and Hadoop.
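The article doesn't describe the eventual pipeline, but the shape of the work is easy to sketch. Here is a toy, single-machine version of the kind of tweets-per-day aggregation a Hive or Pig job would run across the cluster, assuming one JSON tweet per line as in the sample above:

import json
from collections import Counter

def tweets_per_day(path):
    # Count tweets per calendar day from a file of JSON tweets, one per line.
    counts = Counter()
    with open(path) as f:
        for line in f:
            tweet = json.loads(line)
            # "Wed Apr 14 16:00:00 +0000 2010" -> year, month, day
            _, month, day, _, _, year = tweet["created_at"].split()
            counts[(year, month, day)] += 1
    return counts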

A pilot workshop is slated to run this summer with researchers who can help guide the Library of Congress in building out the archive and its accessibility. Anderson and Johnston say they expect an initial offering to be made available in four or five months. But even then, access to the Twitter archive will be restricted to "known researchers" who will need to go through the Library of Congress approval process to gain access to the data. Based on the sheer number of research requests, there are going to be plenty of scholars lined up to have a closer examination of this important cultural and technological archive.

Photo: Library of Congress Reading Room 1 by maveric2003, on Flickr







May 27 2011

Publishing News: Curation for the Kindle

Here are some of this week's highlights from the publishing world. (Note: These stories were published here on Radar throughout the week.)

Web content curation welcomed a new platform — your Kindle

A new project called Delivereads curates interesting content from around the web and delivers it to your Kindle, via your Kindle email address. At the time of this writing, Delivereads was sending out selections like GQ's "Out on the Ice," The Atlantic's "The Lazarus File," Washington Monthly's "The Information Sage," and Time's "Zach Galifianakis Hates to Be Loved."

In an email interview, Delivereads founder Dave Pell (@davepell) talked about the project's origin:

Everyone who worked on the product, including designer Brian Moco and developer Alex King of Crowd Favorite, did so for free because they were excited about the idea and are subscribing to it themselves.

  • This story continues here.

Pete Meyers on how "Welcome to Pine Point" creates a truly digital reading experience, one neither mired in nor based on print

I've been writing about and helping create digital books for about 15 years now and I don't think I've seen anything as innovative, as well executed, and as plain lovely to look at as "Welcome to Pine Point." No disrespect to the great work done by teams at Push Pop (Our Choice), Touch Press (The Elements), or Potion (NYPL Biblion), but all those projects take the print page as the starting point and ask: how can we best recreate that reading experience onscreen?

"Pine Point," instead, is an example of something that couldn't exist in any other medium. Its creators describe it as "part book, part film, part website," which sounds about right; it mixes audio, video, still photos, prose, and movable images to tell the story of a Canadian town that was abandoned, and then demolished, in the late 1980s. But as most people reading this blog know: that multimedia stew's been cooked before.

[Image: Title page for Welcome to Pine Point]

So why is "Pine Point" such a success?

Quality, for starters. The team behind this project — Paul Shoebridge and Michael Simons, aka The Goggles — have sweated the details on how to integrate all those various media elements in a viewer-friendly way, one that immerses the audience in the story. A story that, not incidentally, touches on themes (abandonment, aging, environmentalism) moving enough to reward the time it takes — about 30 minutes — to watch it.

  • This story continues here.



George Oates on how a minimum viable record could improve library catalogs and information systems


At first blush, bibliographic data seems like it would be a fairly straightforward thing: author, title, publisher, publication date. But that's really just the beginning of the sorts of data tracked in library catalogs. There's also a variety of metadata standards and information classification systems that need to be addressed.

  • This story continues here.
Webcast: SneakPeek at Publishing Startups — SneakPeeks are a TOC webcast series featuring a behind-the-scenes look at publishing startups and their products. The inaugural SneakPeek webcast includes presentations from 24symbols, Valobox, Appitude, Active Reader and OnSwipe.

Join us on Tuesday, May 31, 2011, at 10 am PT
Register for this free webcast





May 24 2011

The search for a minimum viable record

At first blush, bibliographic data seems like it would be a fairly straightforward thing: author, title, publisher, publication date. But that's really just the beginning of the sorts of data tracked in library catalogs. There's also a variety of metadata standards and information classification systems that need to be addressed.

The Open Library has run into these complexities and challenges as it seeks to create "one web page for every book ever published."

George Oates, Open Library lead, recently gave a presentation in which she surveyed audience members, asking them to list the five fields they thought necessary to adequately describe a book. In other words, what constitutes a "minimum viable record"? Akin to the idea of the "minimum viable product" for getting a web project coded and deployed quickly, the minimum viable record (MVR) could be a way to facilitate an easier exchange of information between library catalogs and information systems.

In the interview below, Oates explains the issues and opportunities attached to categorization and MVRs.

What are some of the challenges that libraries and archives face when compiling and comparing records?

George Oates: I think the challenges for compilation and comparison of records rest in different styles, and the innate human need to collect, organize, and describe the things around us. As Barbara Tillett noted in a 2004 paper: "Once you have a collection of over say 2,000 items, a human being can no longer remember every item and needs a system to help find things."

I was struck by an article I saw on a site called Apartment Therapy, about "10 Tiny Gardens," where the author surveyed extremely different decorations and outputs within remarkable constraints. That same concept can be dropped into cataloging, where even in the old days, when librarians described books within the boundaries of a physical index card, great variation still occurred. Trying to describe a book on a 3x5 card is oddly reductionist.

It's precisely this practice that's produced this "diabolical rationality" of library metadata that Karen Coyle describes [slide No. 38]. We're not designed to be rational like this, all marching to the same descriptive drum, even though these mythical levels of control and uniformity are still claimed. It seems to be a human imperative to stretch ontological boundaries and strive for greater levels of detail.

Some specific categorization challenges are found in the way people's names are cataloged. There's the very simple difference between "Lastname, Firstname" and "Firstname Lastname" or the myriad "disambiguators" that can help tell two authors with the same name apart — like a middle initial, a birthdate, title, common name, etc.

There are also challenges attached to the normal evolution of language, and a particular classification's ability to keep up. An example is the recent introduction of the word "cooking" as an official Library of Congress Subject Heading. "Cooking" supersedes "Cookery," so now you have to make sure all the records you have in your catalog that previously referred to "Cookery" now know about this newfangled "Cooking" word. This process is something of an ouroboros, although it's certainly made easier now that mass updates are possible with software.
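That kind of mass update is simple to express in software. A minimal sketch of the heading swap Oates describes, with the record structure assumed purely for illustration:

def update_heading(records, old="Cookery", new="Cooking"):
    # Replace a superseded subject heading across a catalog of dict records.
    changed = 0
    for record in records:
        subjects = record.get("subjects", [])
        if old in subjects:
            record["subjects"] = [new if s == old else s for s in subjects]
            changed += 1
    return changed

catalog = [{"title": "Joy of Cooking", "subjects": ["Cookery", "American"]}]
print(update_heading(catalog), catalog)  # 1 record updated, heading now "Cooking"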

A useful contrast to all this is the way tagging on Flickr was never controlled (even though several Flickr members crusaded for various patterns). Now, even from this chaos, order emerges. On Flickr it's now possible to find photos of red graffiti on walls in Brooklyn, all through tags. Using metadata "native" to a digital photograph, like the date it was taken, and various camera details, you can focus even deeper, to find photos taken with a Nikon in the winter of 2008. Even though that's awesome, I'm sure it rankles professionals since Flickr also has a bunch of photos that have no tags at all.

OSCON Data 2011, being held July 25-27 in Portland, Ore., is a gathering for developers who are hands-on, doing the systems work and evolving architectures and tools to manage data. (This event is co-located with OSCON.)

Save 20% on registration with the code OS11RAD

In a blog post, you wrote about "a metastasized level of complexity." How does that connect to our need for minimum viable records?

George Oates: What I'm trying to get at is a sense that cataloging is a bit like case law: special cataloging rules apply in even the most specific of situations. Just take a quick glance at some of the documentation on cataloging rules for a sense of that. It's incredible. As a librarian friend of mine once said, "Some catalogers like it hard."

At Open Library, we're trying to ingest catalogs from all over the place, but we're constantly tripped up by fields we don't recognize, or things in fields that probably shouldn't be there. Trying to write an importing program that's filled with special treatments and exceptions doesn't seem practical since it would need constant tweaking to keep up with new styles or standards.

The desire to simplify this sort of thing isn't new. The Dublin Core (DC) initiative came out of a meeting hosted by OCLC in 1995. There are now 15 base DC fields that can describe pretty much anything, and DC is widely used as an approachable standard for all sorts of exchanges of data today. All in all, it's really successful.

Interestingly, after 16 years, DC now has an incorporated organization, loads of contributors, and documentation that seems much more complex than "just use these 15 fields for everything." As every good archivist would tell you, it's better to archive something than nothing, and to get as much information as you can from your source. The temptation for us is to keep trying to handle any kind of metadata at all times, which is super hard.
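For reference, the 15 base elements are title, creator, subject, description, publisher, contributor, date, type, format, identifier, source, language, relation, coverage, and rights. Here they are filled in for a book that happens to appear earlier on this page; the publisher, date, extent, and ISBN come from that listing, while the remaining values are illustrative guesses:

# The 15 simple Dublin Core elements as a flat record; None marks
# elements with nothing to say, which DC happily allows.
dc_record = {
    "title":       "Libraries of the Future",
    "creator":     "Licklider, J. C. R.",
    "subject":     "Libraries -- Automation",   # illustrative heading
    "description": "Study of electronic information storage and retrieval.",
    "publisher":   "MIT Press",
    "contributor": None,
    "date":        "1965",
    "type":        "Text",
    "format":      "219 p.",
    "identifier":  "ISBN 026212016X",
    "source":      None,
    "language":    "en",
    "relation":    None,
    "coverage":    None,
    "rights":      None,
}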

How do you see computers and electronic formats helping with minimum viable records?

George Oates: MVR might be an opportunity to create a simpler exchange of records. One computer says "Let me send you my MVR for an initial match." If the receiving computer can interpret it, then the systems can talk and ask each other for more.
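A minimal sketch of that handshake; the MVR field set and the matching heuristic here are assumptions for illustration, not anything Open Library has settled on:

MVR_FIELDS = ("title", "creator", "date", "identifier")

def to_mvr(record):
    # Project a full catalog record down to the minimum viable fields.
    return {f: record.get(f) for f in MVR_FIELDS}

def is_initial_match(a, b):
    # Tentative match: identifiers agree, or title and date both agree.
    if a["identifier"] and a["identifier"] == b["identifier"]:
        return True
    return (a["title"], a["date"]) == (b["title"], b["date"])

The point is less the particular heuristic than that a tiny, predictable field set is cheap for the receiving system to interpret; the full, messy records only need to be exchanged once an initial match is made.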

The tricky part about digital humanities is that its lifeblood is in the details. For example, this section from the Tillett paper I mentioned earlier looked at the relationship between precision and recall:

Studies ... looked at precision and recall, demonstrating that the two are inversely related — greater recall means poorer precision and greater precision means poorer recall — high recall being the ability to retrieve everything that relates to a search request from the database searched, while precision is retrieving only those relevant to a user.

It's a huge step to sacrifice detail (hence, precision) in favor of recall. But, perhaps that's the step we need, as long as recall can elicit precision, if asked. Certainly in the case of computers, the less fiddly the special cases, the more straightforward it is to make a match.
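For reference, the standard definitions behind the passage quoted above, writing R for the set of relevant records and S for the set retrieved:

\[ \mathrm{precision} = \frac{|R \cap S|}{|S|}, \qquad \mathrm{recall} = \frac{|R \cap S|}{|R|} \]

Retrieving everything makes recall perfect and precision dreadful, which is exactly why a recall-first matching step only works if precision can be recovered by asking for more detail afterward.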

Photos: Catalogue by Helga's Lobster Stew, on Flickr; profile photo by Derek Powazek

This interview was edited and condensed.



Related:


  • The Library of the Commons: Rise of the Infodex

  • Rethinking museums and libraries as living structures

  • The quiet rise of machine learning


    May 18 2011

    Four short links: 18 May 2011

    1. The Future of the Library (Seth Godin) -- We need librarians more than we ever did. What we don't need are mere clerks who guard dead paper. Librarians are too important to be a dwindling voice in our culture. For the right librarian, this is the chance of a lifetime. Passionate railing against a straw man. The library profession is diverse, but huge numbers of them are grappling with the new identity of the library in a digital age. This kind of facile outside-in "get with the Internet times" message almost laughably displays ignorance of actual librarians, as much as "the book is dead!" displays ignorance of books and literacy. Libraries are already much more than book caves, and already see themselves as navigators to a world of knowledge for people who need that navigation help. They disproportionately serve the under-privileged, they are public spaces, and they are brave and constant battlers at the front line of freedom to access information. This kind of patronising "wake up and smell the digital roses!" wank is exactly what gives technologists a bad name in other professions. Go back to your tribes of purple cows, Seth, and leave librarians to get on with helping people find, access, and use information.
    2. An Old Word for a New World (PDF) -- paper on how "innovation," once a pejorative, came to be laudable. (via Evgeny Morozov)
    3. AlchemyAPI -- free (as in beer) entity extraction API. (via Andy Baio)
    4. Referrals by LinkedIn -- the thing with social software is that outsiders can have strong visibility into the success of your software, in a way that antisocial software can't.

    April 04 2011

    Four short links: 4 April 2011

    1. Find The Future -- New York Public Library big game, by Jane McGonigal. (via Imran Ali)
    2. Enable Certificate Checking on Mac OS X -- how to get your browser to catch attempts to trick you with revoked certificates (more of a worry since security problems at certificate authorities came to light). (via Peter Biddle)
    3. Clever Algorithms -- Nature-Inspired Programming Recipes from AI, examples in Ruby. I hadn't realized there were Artificial Immune Systems. Cool! (via Avi Bryant)
    4. Rethinking Evaluation Metrics in Light of Flickr Commons -- conference paper from Museums and the Web. As you move from "we are publishing, you are reading" to a read-write web, you must change your metrics. Rather than import comments and tags directly into the Library's catalog, we verify the information then use it to expand the records of photos that had little description when acquired by the Library. [...] The symbolic 2,500th record, a photo from the Bain collection of Captain Charles Polack, is illustrative of the updates taking place based on community input. The new information in the record of this photo now includes his full name, death date, employer, and the occasion for taking the photo, the 100th Atlantic crossing as ocean liner captain. An additional note added to the record points the Library visitor to the Flickr conversation and more of the story with references to gold shipments during WWI. Qualitative measurements, like level of engagement, are a challenge to gauge and convey. While resources expended are sometimes viewed as a cost, in this case they indicate benefit. If you don't measure the right thing, you'll view success as a failure. (via Seb Chan)

    March 08 2011


    manystuff.org — Graphic Design daily selection » Blog Archive » Open Books – Perpetual Proposal

    Perpetual Proposal
    A space and classification system by Fay Nicolson and Oliver Smith for Openbooks

    Perpetual Proposal is a system to initiate and record the flow of books within an exhibition as reading room. The books (selected by Sophie Demay and Charlotte Cheetham) all deal with the form and content of the book as a subject. Interested in mathematician and librarian S. R. Ranganathan’s Five Laws of Library Science*, Fay and Oliver encourage visitors to the exhibition to select books from the peg board display and browse through them at their leisure. Investigating the idea of a reader as both a consumer and (re)producer of knowledge, Fay and Oliver ask visitors to select a page using a book mark for our librarian to copy. The rationale behind page selection is left in the hands of each reader.

    Throughout his/her shift the librarian will copy and index selected pages, binding them into a compendium that will enter the selection of exhibited books each day. These new additions both document the use of, and add to, the collection of exhibited books.

    * Ranganathan’s Five Laws of Library Science are:
    1. BOOKS ARE FOR USE.
    2. EVERY READER HIS (OR HER) BOOK.
    3. EVERY BOOK ITS READER.
    4. SAVE THE TIME OF THE USER.
    5. THE LIBRARY IS A GROWING ORGANISM.

    [Diagram: the proposed flow of books around the space]


    March 01 2011

    HarperCollins' digital lending cap sparks lively discussion

    OverDrive CEO Steve Potash sent a notice to libraries about new restrictions and changes in OverDrive's territory policies. The notice, which sparked an outcry from libraries, library patrons and even some authors, announced that a major U.S. publishing house (later identified as HarperCollins) would be placing a 26-time lending cap on its titles.

    In a lively #followreader discussion Friday on Twitter, Peter Brantley, director of the Bookserver Project at the Internet Archive, suggested libraries respond with a touch of aggression to make a point:

    [Embedded tweet from Peter Brantley]

    Ron Hogan, author and host of Beatrice.com responded to Brantley, taking it a bit further:

    [Embedded tweet from Ron Hogan]

    Neil Gaiman, a HarperCollins author, responded to the situation on Twitter as well:

    [Embedded tweet from Neil Gaiman]

    Gaiman's suggestion of implementing PLR — public lending rights, a process used in the UK — in the U.S. was discussed in the #followreader session. You can see the entire session here or by searching Twitter for the hashtag #followreader.

    Brantley also pointed to a second concerning item in the OverDrive notice:

    [Embedded tweet from Peter Brantley]

    The section of the note he's referring to states (via Librarian by Day):

    In addition, our publishing partners have expressed concerns regarding the card issuance policies and qualification of patrons who have access to OverDrive supplied digital content. Addressing these concerns will require OverDrive and our library partners to cooperate to honor geographic and territorial rights for digital book lending, as well as to review and audit policies regarding an eBook borrower's relationship to the library (i.e. customer lives, works, attends school in service area, etc.). I can assure you OverDrive is not interested in managing or having any say in your library policies and issues. Select publisher terms and conditions require us to work toward their comfort that the library eBook lending is in compliance with publisher requirements on these topics.

    Brantley responded:

    [Embedded tweet from Peter Brantley]

    Heather McCormick, managing editor at Library Journal, commented on the HarperCollins situation in an e-mail interview. She highlighted the issue of trust:

    The most obvious short-term consequence is what appears to be a mass obliteration of trust. Don't get me wrong — there was ample discontent with the ebook loaning model as it stood. DRM and Overdrive's interface have long been pointed to as stumbling blocks in providing easy access to information, and yet the model was largely tolerated. With the announcement of HarperCollins' loaning cap, however, a sizable contingent of librarians have had it. See "The Ebook User Bill of Rights" issued this week by a set of advocates led by Andy Woodworth and Sarah Houghton-Jan, and the charge to boycott HarperCollins content by librarians Brett Bonfield and Gabriel Farrell.

    No trust means little to no communication, of course, and that's what scares me the most. Librarians have long been shut out of conversations about the ebook loaning model, so from where I'm standing, it doesn't make much sense for librarians to formally sever their commercial relationships with HarperCollins. Cap or no cap, they have a responsibility to provide access to information, and in order to fulfill their mission, they need to do the exact opposite of a boycott. They need to join forces on an unprecedented scale to lobby for value for their communities and collections: paging the blogosphere; paging ALA; paging the publishers who do support libraries; paging the sea of patrons who are ultimately affected.

    As for long-term effects, I hope HarperCollins' move isn't replicated in its exact terms, but I do hope it will set a precedent of publishers and librarians engaging much more directly about new loaning models. A good, old-fashioned dustup can pave the way for clearer communication and progress.

    It's important to note that a couple of the major U.S. publishing houses — Simon & Schuster and Macmillan — don't make their titles available for digital lending at all.

    For more on these moves, check out related links posted at Librarian by Day, and commentary by Jane Litte at Dear Author and Martyn Daniels at Brave New World.

    February 01 2011

    Open question: Do libraries help or hurt publishing?

    I might not know who Nancy Drew is if it weren't for libraries. Granted, I ended up buying most of the series — or rather, my parents did — but the library was the discovery zone. It still works like that for me today; I now own three Richard Russo books because of the library.

    Libraries have been a part of most of our lives in one way or another, yet they are in a constant struggle for funding. Jerry Brown, the governor of California, is proposing a budget that would pull back all state funding for libraries. Some libraries, such as the Butler Public Library in Indiana, are thinking out of the box to raise funding (see the banner at the top of their site). And the struggle isn't only in the United States.

    With libraries around the world in such financial jeopardy, a couple of questions come to mind:

    • What purpose (if any) has a library served for you?
    • If libraries ceased to exist, what would the ramifications be?
    • Do libraries help or hurt publishing?

    Please share your thoughts in the comments area. To continue the discussion, check out the TOC panel Solving the Digital Loan Problem: Can Library Lending of eBooks be a Win-win for Publishers AND Libraries? February 16 at TOC 2011.

    TOC 2011, being held Feb. 14-16, 2011 in New York City, will explore "publishing without boundaries" through a variety of workshops, keynotes and panel sessions.

    Save 15% off registration with the code TOC11RAD

    November 11 2010

    The financial shortfall caused by the tightened budget freeze now deprives the library of the means to adequately fulfill its statutory mandate. It forces sweeping cancellations of journal subscriptions and cuts in book acquisitions, as well as the almost complete abandonment of the bindings that are indispensable for the long-term usability of its holdings. This in turn threatens the existence of bookbinding workshops. The acquisition of manuscripts and early printed books, the library's core mandate in safeguarding the cultural heritage of the Free State, must be suspended almost entirely.
    Bayerische Staatsbibliothek, 2010-10-28: austerity measures force the Staatsbibliothek into drastic cuts ("Spardiktat zwingt Staatsbibliothek zu drastischen Einschnitten")