January 27 2014

Four short links: 27 January 2014

  1. Druid — open source clustered data store (not key-value store) for real-time exploratory analytics on large datasets.
  2. It’s Time to Engineer Some Filter Failure (Jon Udell) — Our filters have become so successful that we fail to notice: we don’t control them, they have agendas, and they distort our connections to people and ideas. The idea that algorithms have agendas is worth emphasising. Reality doesn’t have an agenda, but the deployer of a similarity metric has decided what features to look for, what metric they’re optimising, and what to do with the similarity data. These are all choices with an agenda. (A toy illustration follows the list.)
  3. Capstone — open source multi-architecture disassembly engine.
  4. The Future of Employment (PDF) — We note that this prediction implies a truncation in the current trend towards labour market polarization, with growing employment in high and low-wage occupations, accompanied by a hollowing-out of middle-income jobs. Rather than reducing the demand for middle-income occupations, which has been the pattern over the past decades, our model predicts that computerisation will mainly substitute for low-skill and low-wage jobs in the near future. By contrast, high-skill and high-wage occupations are the least susceptible to computer capital. (via The Atlantic)
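
To make the agenda point in item 2 concrete, here is a minimal, purely hypothetical Python sketch: the same catalogue and the same cosine metric, but two different choices of features produce two different "similar items" rankings. All data and feature names are invented.

    from math import sqrt

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    # invented catalogue: (clicks, shares, minutes_read) per article
    articles = {
        "listicle":  (900, 50, 1),
        "longread":  (40, 45, 22),
        "explainer": (120, 60, 9),
    }
    target = (100, 55, 10)  # the article we want "similar" items for

    # same metric, different feature choices -> different "similar" rankings
    for label, idx in [("click-weighted", (0, 1)), ("attention-weighted", (1, 2))]:
        ranked = sorted(articles, key=lambda k: cosine(
            [articles[k][i] for i in idx], [target[i] for i in idx]),
            reverse=True)
        print(label, "->", ranked)

Neither ranking is "reality"; each reflects the deployer's choice of what to measure.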

February 14 2013

Four short links: 14 February 2013

  1. Welcome to the Malware-Industrial Complex (MIT) — brilliant phrase, sound analysis.
  2. Stupid Stupid xBox — The hardcore/soft-tv transition and any lead they feel they have is simply not defensible by licensing other industries’ generic video or music content because those industries will gladly sell and license the same content to all other players. A single custom studio of 150 employees also cannot generate enough content to defensibly satisfy 76M+ customers. Only with quality primary software content from thousands of independent developers can you defend the brand and the product. Only by making the user experience simple, quick, and seamless can you defend the brand and the product. I’ve never seen a better-put statement of why an ecosystem of indies is essential.
  3. Data Feedback Loops for TV (Salon) — Netflix’s data indicated that the same subscribers who loved the original BBC production also gobbled down movies starring Kevin Spacey or directed by David Fincher. Therefore, concluded Netflix executives, a remake of the BBC drama with Spacey and Fincher attached was a no-brainer, to the point that the company committed $100 million for two 13-episode seasons.
  4. wrk — a modern HTTP benchmarking tool capable of generating significant load when run on a single multi-core CPU. It combines a multithreaded design with scalable event notification systems such as epoll and kqueue.
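
For a feel of the readiness-based I/O wrk is built on, here is a toy Python echo server using the standard selectors module, which wraps epoll on Linux and kqueue on BSD/macOS. It is a single-threaded sketch of the event-notification half only, not wrk's actual multithreaded design; host and port are arbitrary.

    import selectors
    import socket

    # DefaultSelector picks the best readiness API available:
    # epoll on Linux, kqueue on BSD/macOS -- the mechanisms wrk uses.
    sel = selectors.DefaultSelector()

    def accept(sock):
        conn, _ = sock.accept()
        conn.setblocking(False)
        sel.register(conn, selectors.EVENT_READ, echo)

    def echo(conn):
        data = conn.recv(4096)
        if data:
            conn.sendall(data)
        else:
            sel.unregister(conn)
            conn.close()

    server = socket.socket()
    server.bind(("127.0.0.1", 8080))
    server.listen()
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)

    while True:
        for key, _ in sel.select():   # block until sockets are ready
            key.data(key.fileobj)     # dispatch to accept() or echo()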

January 21 2013

Four short links: 21 January 2013

  1. School District Builds Own Software — By taking a not-for-profit approach and using freely available open-source tools, Saanich officials expect to develop openStudent for under $5 million, with yearly maintenance pegged at less than $1 million. In contrast, the B.C. government says it spent $97 million over the past 10 years on the B.C. enterprise Student Information System — also known as BCeSIS — a provincewide system already slated for replacement.
  2. Giving a Presentation From an Apple ][ — A co-worker used an iPad to give a presentation. I thought: why take a machine as powerful as an early Cray to do something as low-overhead as displaying slides? Why not use something with much less computing power? From this asoft_presenter was born. The code is a series of C programs that read text files and generate a large Applesoft BASIC program that actually presents the slides. (via Jim Stogdill) (A toy version follows the list.)
  3. AirBnB TechTalks — impressive collection of interesting technical talks.
  4. Gawker’s Realtime Dashboard — this is not just technically and visually cool, but also food for thought about what they’re choosing to measure and report on in real time (new vs returning split, social engagement, etc.). Does that mean they hope to be able to influence those variables in real time? (via Alex Howard)
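
As promised in item 2, a toy Python take on the asoft_presenter idea: slide text in, an Applesoft BASIC program out. The real project is a series of C programs; the line numbering, keypress handling, and 40-column truncation here are my own assumptions.

    # slides: list of strings, one per slide; each line becomes a PRINT
    def to_basic(slides):
        out, n = [], 10
        for slide in slides:
            out.append(f"{n} HOME")            # clear the Apple ][ screen
            n += 10
            for row in slide.splitlines():
                out.append(f'{n} PRINT "{row.upper()[:40]}"')  # 40-column display
                n += 10
            out.append(f"{n} GET A$")          # wait for a keypress
            n += 10
        out.append(f"{n} END")
        return "\n".join(out)

    print(to_basic(["Hello OSCON\nslide one", "Second slide"]))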

January 17 2013

Four short links: 17 January 2013

  1. Free Book Sifter — lists all the free books on Amazon, has RSS feeds and newsletters. (via BoingBoing)
  2. Whom the Gods Would Destroy, They First Give Realtime Analytics — a few key reasons why truly real-time analytics can open the door to a new type of (realtime!) bad decision making. [U]ser demographics could be different day over day. Or very likely, you could see a major difference in user behavior immediately upon releasing a change, only to watch it evaporate as users learn to use new functionality. Given all of these concerns, the conservative and reasonable stance is to only consider tests that last a few days or more. (A toy simulation of this novelty effect follows the list.)
  3. Web Book Boilerplate (Github) — uses plain old markdown and generates a well structured HTML version of your written words. Since it’s sitting on top of Pandoc and Grunt, you can easily make your books available for every platform. MIT-style license.
  4. Raspberry Pi Education Manual (PDF) — from Scratch to Python and HCI, all via the Raspberry Pi. Intended both as background reading and as a series of lessons for teachers and students learning to code with the Raspberry Pi as their first device.
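
As mentioned in item 2, a small simulation of the novelty effect that makes very short tests misleading: a change shows a large day-one lift that decays as users adjust. All rates here are invented.

    import random

    random.seed(1)
    baseline = 0.10                          # control conversion rate (invented)
    for day in range(1, 8):
        novelty = 0.05 * 0.5 ** (day - 1)    # lift that halves each day
        rate = baseline + novelty
        hits = sum(random.random() < rate for _ in range(10000))
        print(f"day {day}: observed rate {hits / 10000:.3f}")
    # Day 1 shows roughly 0.15; by day 7 the rate is back near the 0.10
    # baseline, so only a multi-day test sees the effect evaporate.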

March 12 2012

Four short links: 12 March 2012

  1. Web-Scale User Modeling for Targeting (Yahoo! Research, PDF) -- research paper that shows how online advertisers build profiles of us and what matters (e.g., ads we buy from are more important than those we simply click on). Our recent surfing patterns are more relevant than historical ones, which is another indication that the value of data analytics increases the closer to real-time it happens. (via Greg Linden)
  2. Information Technology and Economic Change -- research showing that cities which adopted the printing press had no prior growth advantage, but subsequently grew far faster than similar cities without printing presses. [...] The second factor behind the localisation of spillovers is intriguing given contemporary questions about the impact of information technology. The printing press made it cheaper to transmit ideas over distance, but it also fostered important face-to-face interactions. The printer’s workshop brought scholars, merchants, craftsmen, and mechanics together for the first time in a commercial environment, eroding a pre-existing “town and gown” divide.
  3. They Just Don't Get It (Cameron Neylon) -- curating access to a digital collection does not scale.
  4. Should Libraries Get Out of the Ebook Business? -- provocative thought: the ebook industry is nascent, a small number of patrons have ereaders, the technical pain of DRM and incompatible formats makes for disproportionate support costs, and there are already plenty of worthy things libraries should be doing. I only wonder how quickly the dynamics change: a minority may have dedicated ereaders but a large number have smartphones and are reading on them already.

September 22 2011

Four short links: 22 September 2011

  1. Implicit and Explicit Feedback -- for preferences and recommendations, implicit signals (what people clicked on and actually listened to) turn out to be strongly correlated with what they would say if you asked. (via Greg Linden) (A toy correlation check follows the list.)
  2. Pivoting to Monetize Mobile Hyperlocal Social Gamification by Going Viral -- Schuyler Erle's stellar talk at the open source geospatial tools conference. Video, may cause your sides to ache.
  3. repl.it -- browser-based environment for exploring different programming languages from FORTH to Python and Javascript by way of Brainfuck and LOLCODE.
  4. Twitter Storm (GitHub) -- distributed realtime computation system, intended to be to realtime processing what Hadoop is to batch processing. Interesting because you improve most reporting and control systems when you move them closer to real-time. Eclipse-licensed open source.
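
Picking up item 1, a toy check of how an implicit signal can track an explicit one, using invented play counts and star ratings and a hand-rolled Pearson correlation.

    plays   = [120, 3, 45, 80, 7, 60, 2, 95]   # implicit: play counts (invented)
    ratings = [5,   1, 3,  4,  2, 4,  1, 5]    # explicit: 1-5 stars (invented)

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    print(f"implicit vs explicit correlation: {pearson(plays, ratings):.2f}")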

August 29 2011

The application of real-time data

From her vantage point as chief scientist of Bitly, Hilary Mason has interesting insight into the real-time web and what people are sharing, posting, clicking and reading.

I recently spoke with Mason about Bitly's analysis and usage of real-time data. She'll be digging into these same topics at next month's Strata Conference in New York.

Our interview follows.

How does Bitly develop its data products and processes?

Hilary Mason: Our primary goal at Bitly is to understand what's happening on the Internet in real-time. We work by stating the problem we're trying to solve, brainstorming methods and models on the whiteboard, then experimenting on subsets of the data. Once we have a methodology in mind that we're fairly certain will work at scale, we build a prototype of the system, including data ingestion, storage, processing, and (usually) an API. Once we've proven it at that scale, we might decide to scale it to the full dataset or wait and see where it will plug into a product.
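
To make that pipeline shape concrete, here is a deliberately minimal Python sketch of a prototype with ingestion, storage, processing, and an API. It is not Bitly's actual stack; every table, endpoint, and metric here is hypothetical.

    import json
    import sqlite3
    from http.server import BaseHTTPRequestHandler, HTTPServer

    db = sqlite3.connect(":memory:")               # storage: throwaway store
    db.execute("CREATE TABLE clicks (url TEXT)")

    def ingest(events):                            # ingestion: subset of the stream
        db.executemany("INSERT INTO clicks VALUES (?)",
                       [(e["url"],) for e in events])

    def top_urls(k=3):                             # processing: simple aggregate
        return db.execute("SELECT url, COUNT(*) c FROM clicks "
                          "GROUP BY url ORDER BY c DESC LIMIT ?", (k,)).fetchall()

    class Api(BaseHTTPRequestHandler):             # API: expose the result
        def do_GET(self):
            body = json.dumps(top_urls()).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    ingest([{"url": "a"}, {"url": "a"}, {"url": "b"}])
    HTTPServer(("127.0.0.1", 8000), Api).serve_forever()  # blocks, serving GETs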

How does data drive Bitly's application of analytics and data science?

Hilary Mason: Bitly is a data-centric organization. The data informs business decisions, the potential of the product, and certainly our own internal processes. That said, it's important to draw a distinction between analytics and data science. Analytics is the measurement of well-understood metrics. Data science is the invention of new mathematical and algorithmic approaches to understanding the data. We do both, but apply them in very different ways.

What are the most important applications of real-time data?

Hilary Mason: The most important applications of real-time data apply to situations where having analysis immediately will change the outcome. More practically, when you can ask a question and get the answer before you've forgotten why you asked the question in the first place, it makes you massively more productive.

This interview was edited and condensed.

Strata Conference New York 2011, being held Sept. 22-23, covers the latest and best tools and technologies for data science — from gathering, cleaning, analyzing, and storing data to communicating data intelligence effectively.

Save 30% on registration with the code STN11RAD


July 29 2011

Top Stories: July 25-29, 2011

Here's a look at the top stories published across O'Reilly sites this week.



How data and analytics can improve education

Education theorist George Siemens discusses education data: its current state, how it can shape customized learning, and what lies ahead for education analytics.

Real-time data needs to power the business side, not just tech

Real-time data analysis has come a long way, but Theo Schlossnagle, principal and CEO of OmniTI, says some technology improvements are actually causing a data analysis devolution.


What publishing can learn from tech startups

Author Todd Sattersten believes the publishing industry has a lot to learn from tech startups. Agile development, iteration and adaptation all have a place.

Ebook empowerment with EPUB3

New features in EPUB3 are expanding the horizons of ebook enhancement. In this interview, Julien Simon and Jérémie Gisserot of Walrus Books discuss the advantages of EPUB3 and what they'd like to see developers do next.

Books as a service: How and why it works

24Symbols, a kind of Netflix for ebooks, aims to benefit readers and publishers alike. Company co-founder Justo Hidalgo outlines the books-as-a-service model in this interview.





Android Open, being held October 9-11 in San Francisco, is a big-tent meeting ground for app and game developers, carriers, chip manufacturers, content creators, OEMs, researchers, entrepreneurs, VCs, and business leaders. Save 20% on registration with the code AN11RAD.



July 26 2011

Real-time data needs to power the business side, not just tech

In 2005, real-time data analysis was being pioneered and predicted to "transform society." A few short years later, the technology is a reality and indeed is changing the way people do business. But Theo Schlossnagle (@postwait), principal and CEO of OmniTI, says we're not quite there yet.

In a recent interview, Schlossnagle said that not only does the current technology allow less-qualified people to analyze data, but most of the analysis being done is strictly for technical benefit. The real benefit will be realized when the technology is capable of powering real-time business decisions.

Our interview follows.


How has data analysis evolved over the last few years?

Theo Schlossnagle: The general field of data analysis has actually devolved over the last few years because the barrier to entry is dramatically lower. You now have a lot of people attempting to analyze data with no sound mathematics background. I personally see a lot of "analysis" happening that is less mature than your run-of-the-mill graduate-level statistics course or even undergraduate-level signal analysis course.

But where does it need to evolve? Storage is cheaper and more readily available than ever before. This leads organizations to store data like it's going out of style. This isn't a bad thing, but it causes a significantly lower signal-to-noise ratio. Data analysis techniques going forward will need to evolve much better noise-reduction capabilities.

What does real-time data allow that wasn't available before?

Theo Schlossnagle: Real-time data has been around for a long time, so in a lot of ways, it isn't offering anything new. But the tools to process data in real-time have evolved quite a bit. CEP systems now provide a much more accessible approach to dealing with data in real time and building millisecond-granularity real-time systems. In a web application, imagine being able to observe something about a user and make an intelligent decision on that data combined with a larger aggregate data stream — all before you've delivered the headers back to the user.
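
Here is a hedged Python sketch of that pattern: each request contributes an event to a rolling window, and the response is chosen against the aggregate before the headers go back. A real CEP engine would express this as windowed queries over streams; this only shows the shape, with an invented "trending" rule.

    from collections import deque
    from wsgiref.simple_server import make_server

    window = deque(maxlen=1000)   # rolling aggregate of recent requests

    def app(environ, start_response):
        path = environ["PATH_INFO"]
        window.append(path)                          # this user's event...
        hot = window.count(path) > len(window) / 2   # ...combined with the aggregate
        body = b"trending page" if hot else b"regular page"
        # the decision is made before any headers go back to the user
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [body]

    make_server("127.0.0.1", 8001, app).serve_forever()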

What's required to harness real-time analysis?

Theo Schlossnagle: Low-latency messaging infrastructure and a good CEP system. In my work we use either RabbitMQ or ZeroMQ and a whole lot of Esper.
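
A minimal pyzmq sketch of the low-latency messaging half of that stack (ZeroMQ pub/sub). The endpoint and message format are arbitrary, and the CEP logic that would sit behind the subscriber is elided.

    import time
    import zmq

    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://127.0.0.1:5556")          # endpoint is arbitrary

    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://127.0.0.1:5556")
    sub.setsockopt_string(zmq.SUBSCRIBE, "metrics")

    time.sleep(0.2)                           # let the subscription propagate
    pub.send_string("metrics latency_ms=4.2")
    print(sub.recv_string())                  # a CEP engine would consume this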

Strata Conference New York 2011, being held Sept. 22-23, covers the latest and best tools and technologies for data science -- from gathering, cleaning, analyzing, and storing data to communicating data intelligence effectively.

Save 20% on registration with the code STN11RAD

Does there need to be a single person at a company who collects data, analyzes and makes recommendations, or is that something that can be done algorithmically?

Theo Schlossnagle: You need to have analysts, and I think it is critically important to have them report into the business side — marketing, product, CFO, COO — instead of into the engineering side. We should be doing data analysis to make better business decisions. It is vital to make sure we are always supplied with intelligent and rewarding business questions.

A lot of data analysis done today is technical analysis for technical benefit. The real value is when we can take this technology and expertise and start powering better real-time business decisions. Some of the areas doing real-time analysis well in this regard include finance, stock trading, and high-frequency trading.




July 13 2011

Who are the OSCON data geeks?

This podcast highlights some of the sessions in OSCON Data and who might be interested in them.

Edd Dumbill, Bradford Stephens and I took the liberty of making irreverent monikers for several of the types of attendees we expect at OSCON Data. These include:

  • DBA Dude
  • Data Scientist
  • NOSQL Nerd
  • Scaling Geek
  • Real-time Traveler

(Podcast production by Rich Goyette Audio.)

    OSCON Data 2011, being held July 25-27 in Portland, Ore., is a gathering for developers who are hands-on, doing the systems work and evolving architectures and tools to manage data.

    Save 20% on registration with the code OS11RAD

    July 05 2011

    The challenges of streaming real-time data

    Although Gnip handles real-time streaming of data from a variety of social media sites, it's best known as the official commercial provider of the Twitter activity stream.

    Frankly, "stream" is a misnomer. "Fire hose," the colloquial variation, better represents the torrent of data Twitter produces. That hose pumps out around 155 million tweets per day, and all of it has to be handled at a sustained rate.

    I recently spoke with Gnip CEO Jud Valeski (@jvaleski) about what it takes to manage Twitter's flood of data and how the Internet's architecture needs to adapt to real-time needs. Our interview follows.


    The Internet wasn't really built to handle a river of big data. What are the architectural challenges of running real-time data through these pipes?

    Jud Valeski: The most significant challenge is rusty infrastructure. Just as with many massive infrastructure projects that the world has seen, adopted, and exploited (aqueducts, highways, power/energy grids), the connective tissue of the network becomes excruciatingly dated. We're lucky to have gotten as far as we have on it. The capital build-outs on behalf of the telecommunications industry have yielded relatively low-bandwidth solutions laden with false advertising about true throughput. The upside is that highly transactional HTTP REST apps are relatively scalable in this environment and they "just work." It isn't until we get into heavy payload apps — video streaming, large-scale activity fire hoses like Twitter — that the deficiencies in today's network get put in the spotlight. That's when the pipes begin to burst.

    We can redesign applications to create smaller activities/actions in order to reduce overall sizes. We can use tighter protocols/formats (Protocol Buffers for example), and compression to minimize sizes as well. However, with the ever-increasing usage of social networks generating more "activities," we're running into true pipe capacity limits, and those limits often come with very hard stops. Typical business-class network connections don't come close to handling high volumes, and you can forget about consumer-class connections handling them.
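
    A quick Python illustration of those trimming tactics: the same invented activity serialized as JSON and then compressed. Protocol Buffers would shrink it further but need a schema, so stdlib zlib stands in here.

        import json
        import zlib

        activity = {"id": 123456789, "user": "someone", "verb": "post",
                    "body": "x" * 500}      # invented payload
        raw = json.dumps(activity).encode()
        packed = zlib.compress(raw, 9)      # max compression
        print(len(raw), "bytes as JSON ->", len(packed), "bytes compressed")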

    Strata Conference New York 2011, being held Sept. 22-23, covers the latest and best tools and technologies for data science -- from gathering, cleaning, analyzing, and storing data to communicating data intelligence effectively.

    Save 20% on registration with the code STN11RAD

    Beyond infrastructure issues, the web app programming we engineers have been doing over the past 15 years has taught us to build applications in a highly synchronous, transactional manner. Because each HTTP transaction generally lasts a second or so at most, it's easy to digest and process many discrete chunks of data. However, the bastard stepchild of every HTTP lib's "get()" routine that returns a complete result is the "read()" routine that only gives you a poorly bounded chunk.

    You would be shocked at the ratio of engineers who can't build event-driven, asynchronous data processing applications, to those who can, yet this is a big part of this space. Lack of ecosystem knowledge around these kinds of programming primitives is a big problem. Many higher level abstractions exist for streaming HTTP apps, but they're not industrial strength, and therefore you have to really know what's going on to build your own.
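
    A small Python sketch of the read() problem just described: chunks arrive with arbitrary boundaries, so a streaming consumer has to buffer and re-split into records itself. The stream here is simulated as newline-delimited JSON.

        import json

        def records(chunks):
            buf = b""
            for chunk in chunks:              # each read() is an arbitrary slice
                buf += chunk
                while b"\n" in buf:           # re-split on record boundaries
                    line, buf = buf.split(b"\n", 1)
                    if line:
                        yield json.loads(line)

        # simulated stream: one record split mid-object across chunks
        stream = [b'{"id": 1}\n{"id', b'": 2}\n', b'{"id": 3}\n']
        for rec in records(stream):
            print(rec)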

    Shifting back to infrastructure: Often the bigger issue plaguing the network itself is one of latency, not throughput. While data tends to move quickly once streaming connections are established, inevitable reconnects create gaps. The longer those connections take to stand up, the bigger the gaps. Run a traceroute to your favorite API and see how many hops you take. It's not pretty. Latencies on the network are generally a function of router and gateway clutter, as our packets bounce across a dozen servers just to get to the main server and then back to the client.

    How is Gnip addressing these issues?

    Jud Valeski: On the infrastructure side, we are trying (successfully to-date) to use existing, relatively off the shelf, back plane network topologies in the cloud to build our systems. We live on EC2 Larges and XLs to ensure dedicated NICs in our clusters. That helps with the router and gateway clutter. We're also working with Amazon to ensure seamless connection upgrades as volumes increase. These are use cases they actually want to solve at a platform level, so our incentives are nicely aligned. We also play at the IP-stack level to ensure packet transmission is optimized for constant high-volume streams.

    Once total volumes move past standard inbound and outbound connection capabilities, we will be offering dedicated interconnects. However, those come at a very steep price for us and our volume customers.

    All of this leads me to my real answer: Trimming the fat.

    While a sweet spot for us is certainly high-volume data consumers, there are many folks who don't want volume, they want coverage. Coverage of just the activities they care about; usually their customers' brands or products. We take on the challenge of digesting and processing the high volume on inbound, and distill the stream down to just the bits our coverage customers desire. You may need 100% of the activities that mention "good food," but that obviously isn't 100% of a publisher's fire hose. Processing high-velocity root streams on behalf of hundreds of customers without adversely impacting latency takes a lot of work. Today, that means good ol'-fashioned engineering.
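
    A toy sketch of that distillation step: take everything in, emit to each customer only the activities matching their terms. Customer names and terms are invented, and the real system plainly involves far more engineering than a substring match.

        customers = {                        # invented customers and terms
            "acme":   {"good food"},
            "bloggs": {"sneakers", "good food"},
        }

        def route(activity):                 # full fire hose in, slivers out
            text = activity["text"].lower()
            return [name for name, terms in customers.items()
                    if any(term in text for term in terms)]

        firehose = [
            {"text": "Good food and good company tonight"},
            {"text": "New sneakers day"},
            {"text": "Completely unrelated"},
        ]
        for act in firehose:
            print(act["text"], "->", route(act))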

    What tools and infrastructure changes are needed to better handle big-data streaming?

    Jud Valeski: "Big data" as we talk about it today has been slayed by lots of cool abstractions (e.g. Hadoop) that fit nicely into the way we think about the stack we all know and love. "Big streams," on the other hand, challenge the parallelization primitives folks have been solving for "big data." There's very little overlap, unfortunately.

    So, on the software solution side, better and more widely used frameworks are needed. Companies like BackType and Gnip pushing their current solutions onto the network for open refinement would be an awesome step forward. I'm intrigued by the prospect of BackType's Storm project, and I'm looking forward to seeing more of it. More brains lead to better solutions.

    We shouldn't be giving CPU and network latency injection a second thought, but we have to. The code I write to process bits as they come off the wire — quickly — should just "go fast," regardless of its complexity. That's too hard today. It requires too much custom code.

    On the infrastructure side of things, ISPs need to provide cheaper access to reliable fat pipes. If they don't, software will outpace their lack of innovation. To be clear, they don't get this and the software will lap them. You asked what I think we need, not what I think we'll actually get.

    This interview was edited and condensed.


    July 01 2011

    Publishing News: Survey finds ereader ownership doubled in six months

    Here are a few of this week's publishing highlights. (Note: Some of these stories were previously published here on Radar.)

    More people are ereading

    A report released this week from the Pew Internet Project said ereader ownership in the U.S. doubled in six months, rising from 6% to 12% of adults owning an ebook reader. The report, which was compiled from a month-long telephone survey conducted between April and May of this year, also showed that ereader growth is far outpacing tablet growth:

    Tablet computers have not seen the same level of growth among U.S. adults in recent months. In May 2011, 8% of adults report owning a tablet computer such as an iPad, Samsung Galaxy or Motorola Xoom. This is roughly the same percentage of adults who reported owning this kind of device in January 2011 (7%), and represents just a 3 percentage-point increase in ownership since November 2010.




    In a post for PC World, Ed Oswald noted that the timing of the survey might have missed a growth spurt:

    What will be interesting to watch over the next few months is whether this trend continues, and whether the release of the iPad 2 results in a jump in tablet ownership. It probably isn't foolish to assume many who did buy the iPad 2 were new to tablets overall, which would result in a jump in ownership.

    And as PC Mag pointed out, although the growth rate for ereaders is impressive, the statistics for market penetration leave ereaders and tablets in the dust compared to other electronic devices:

    By way of comparison, some 83 percent of respondents to Pew's most recent survey said they owned a cellphone (Pew doesn't appear to have broken out smartphones in its findings). Desktop PC ownership (57 percent) still trumps laptop ownership (56 percent), but just by a whisker, and well within the margin of error.

    TOC Frankfurt 2011 — Being held on Tuesday, Oct. 11, 2011, TOC Frankfurt will feature a full day of cutting-edge keynotes and panel discussions by key figures in the worlds of publishing and technology.

    Save 100€ off the regular admission price with code TOC2011OR

    Book publishing goes realtime for 2012 election

    Politico and Random House have forged a partnership to produce instant ebooks for the 2012 election. According to a post on Politico's website, the book series, which will be produced exclusively in digital format, won't be lacking in expertise:

    The series of four books will be written by Mike Allen, Politico's chief White House correspondent, and former Newsweek editor-at-large Evan Thomas, and edited by former Newsweek editor-in-chief Jon Meacham, who became executive editor and executive vice president at Random House last autumn after Newsweek was sold to Sidney Harman.

    The idea of covering events by writing books in realtime isn't new. As the Politico post points out, the 2012 election books will "represent another step forward for the model of instant book publishing exemplified by the 'Beyond Bin Laden' book of essays that Meacham edited and published a week after the Al Qaeda leader was killed."

    Commenting for a New York Times post, Meacham said the project might also be an opportunity for publishers to change consumer perception: "An impetus here is to encourage people to think of book publishers in a more periodical way."

    At the very least, it's another step forward for traditional book publishers into the instant-oriented digital age.

    Lessons learned from Pottermore

    This post originally appeared on Joe Wikert's Publishing 2020 Blog ("Harry Potter and the Direct, DRM-Free Sale"). It's republished with permission.


    It took her a while, but J.K. Rowling now apparently believes in the future of ebooks. Last week's Pottermore announcement featured two important publishing elements: a direct sales model and a lack of DRM.

    Harry Potter is one of those unique brands that dwarfs everything associated with it. Most Potter fans can name the author but few could tell you the publisher without looking at the book's spine. Although that's often true with other novels, Harry Potter is much more than a series of books or movies. It's an experience, or so I'm told. (I'm not a fan, have never read any of the books or seen any of the movies, but my house is filled with plenty of diehards who have told me everything I need to know.)

    Rowling realizes the strength of her brand and knows she can use it to establish direct relationships with her fans. And so via Pottermore, the author doesn't need any of the big names in ebook retailing. Why settle for a 20% royalty or a 70% cut of the top-line sale when you can keep 100% of it? And why only offer one format when some portion of your audience wants MOBI for the Kindle, others want EPUB for their Apple/Sony devices, and maybe a few more would prefer a simple PDF?

    It's not surprising that J.K. Rowing is forging ahead with a well thought-out direct sales plan. What blows my mind is that more publishers aren't doing the same. Sure, you'll find publisher websites selling PDFs. Some even offer other formats. But rarely do you find a publisher's website with all the popular ebook formats. Regardless of what type of device you have, it sounds like you'll be able to purchase a Harry Potter ebook for it on Pottermore. I hope they take the extra step and include all the formats in one transaction like we do on oreilly.com.

    The other smart move by Rowling is the exclusion of DRM from Pottermore ebooks. Here's an important question for authors and publishers everywhere: If Harry Potter doesn't need DRM, why does your book?! If you ditch DRM you'll be able to offer all the formats. You'll show your customers you trust them and you'll also make it far easier for them to actually use your content.



    Related:


  • 10 innovative digital books you should know about
  • Publishers: What are they good for?
  • Open question: Are ereaders too complex?
  • More Publishing Week in Review coverage


    June 13 2011

    How one publisher uses "aggressive marketing"

    Last month, Jane Friedman landed $8 million in equity financing for her digital publishing company Open Road Integrated Media. In a recent NPR interview, Friedman talked about the company's business model, with 50/50 profit splits for authors and a focus on digitally publishing backlist titles. Friedman noted that "aggressive marketing" is the key to the company's success.

    What does aggressive marketing involve? The NPR piece hinted at a few elements:

    Open Road backs its titles with aggressive multi-platform marketing campaigns, making creative use of the Web, social media and video. The company produces short documentaries to promote its authors.

    For more on what aggressive marketing entails and how the campaigns are handled, I turned to Open Road's chief marketing officer Rachel Chou. Our short email interview follows.

    What does "aggressive marketing" mean?

    Rachel Chou: Aggressive marketing means marketing throughout the term of contract and not just at the book's launch. It also means balancing real-time marketing vs planned marketing. We build quarterly marketing plans for every author or publishing partner and continue to think of new themes, topics or pitches.

    What kinds of resources are used to market titles?

    Rachel Chou: Each author is assigned a marketing lead who builds out the quarterly plans. We use online advertising, social media ads, video and photo distribution, content partnerships, as well as traditional publicity. In addition, we listen to the social media and online conversations with all the available tools, like TweetDeck, Facebook, and Google alerts.

    Being ready to add high-quality content to a conversation that has just gotten started online has become essential. Real-time marketing vs planned long-term marketing is the most dramatic shift in digital marketing.

    How long does a marketing campaign last?

    Rachel Chou: Our author campaigns go on for the term of contract. If we publish an author, we are committed to having their brand be part of the conversation. Short-term campaigns are added, such as National Library Week or our upcoming summer reading campaign, but those are supplemental to our author campaigns.


    Webcast: Digital Bookmaking Tools Roundup — Pete Meyers looks at the growing number of digital book tools: what's best, what's easiest to use, and what's worth putting in your book-building toolkit.


    Join us on Thursday, June 30, 2011, at 10 am PT


    Register for this free webcast





    May 04 2011

    Four short links: 4 May 2011

    1. Maqetta -- open source (modified BSD) WYSIWYG HTML5 user interface editor from the Dojo project. (via Hacker News)
    2. Hacker News Analysis -- interesting to see relationship between number of posts, median score, and quality over time. Most interesting, though, was the relative popularity of different companies. (via Hacker News)
    3. Real Time All The Time (Emily Bell) -- Every news room will have to remake itself around the principle of being reactive in real time. Every page or story that every news organisation distributes will eventually show some way of flagging if the page is active or archived, if the conversation is alive and well or over and done with. Every reporter and editor will develop a real time presence in some form, which makes them available to the social web. When I say "will" I of course don't mean that literally. I think many of them won't, but eventually they will be replaced by ones who do. (via Chris Saad)
    4. Changes in Home Broadband (Pew Internet) -- Jeff Atwood linked to this, simply saying "Why Web 1.0 didn't work and Web 2.0 does, in a single graph." Ajax and web services and the growing value of data were all important, but nothing's made the web so awesome as all the people who can now access it. (via Jeff Atwood)

    March 23 2011

    Four short links: 23 March 2011

    1. The Heritage Health Competition -- Netflix-like contest to analyze insurance-claims data to develop a model that predicts the number of days a patient will spend in hospital in the coming year. $3M prize. (via Aza Raskin)
    2. Historically Hardcore -- fantastic fake Smithsonian ads that manage to make the institution sexy. Naturally they've been asked to take them down.
    3. Another Plato Innovation Ignored -- turns out the above-the-fold doodle has a long and glorious history, culminating in a fantastic demonstration of our broken patent system.
    4. Graphite -- Enterprise scalable realtime graphing. Apache 2.0-licensed, written in Python. (via John Nunemaker)
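
    Carbon, Graphite's ingest daemon, accepts metrics over a simple plaintext protocol: one "<path> <value> <unix-timestamp>" line per metric, TCP port 2003 by default. A minimal Python sender, assuming a local Graphite install and a hypothetical metric name:

        import socket
        import time

        def send_metric(path, value, host="127.0.0.1", port=2003):
            # carbon plaintext protocol: "path value timestamp\n"
            line = f"{path} {value} {int(time.time())}\n"
            with socket.create_connection((host, port), timeout=5) as s:
                s.sendall(line.encode())

        send_metric("app.requests.count", 42)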

    July 09 2010

    Four short links: 9 July 2010

    1. Reasons for Artists and Fans to Consider Crowdfunding -- the number of fans acquiring music outside traditional and/or legal means is, well, the majority. Plenty of examples of bands raising money outside the label system.
    2. DARPA's Blood Makers Start Pumping (Wired) -- biomanufactured blood. The blood was produced using hematopoietic cells, derived from embryonic cord-blood units. Currently, it takes Arteriocyte scientists three days to turn a single umbilical cord unit into 20 units of RBC-packed blood. The average soldier needs six units during trauma treatment. (via rdiva on Twitter)
    3. Self-Reproducing Makerbot -- a community member popped up, out of the blue, and posted the designs for a MakerBot assembled from 150 pieces that a MakerBot can print, à la the RepRap (whose design MakerBot is based on). (via Quinn Norton)
    4. Real Time Real World Statistics -- I can't wait to see what happens when we get real-time AND open data together. (via jessykate on Twitter)

    May 26 2010

    Four short links: 26 May 2010

    1. PSTSDK -- Apache-licensed code from Microsoft to read Outlook files. Covered by Microsoft's Open Specification Promise not to assert related patents against users of this library.
    2. Cheap Android Tablet -- not multitouch, but only $136. Good for hacking with in the meantime. (via Hacker News)
    3. Real-Time Collaborative Editing with Websockets, node.js, and Redis -- uses Chrome's websockets alternative to Comet and other long-polling web connections.
    4. XMPP Library for Node.js -- I'm intrigued to see how quickly Node.js, the Javascript server environment, has taken off.
