
September 04 2013

Four short links: 4 September 2013

  1. MegaPWN (GitHub) — Your MEGA master key is supposed to be a secret, but MEGA or anyone else with access to your computer can easily find it without you noticing. Browser crypto is only as secure as the browser and the code it runs.
  2. hammer.js (GitHub) — a JavaScript library for multitouch gestures.
  3. When Smart Homes Get Hacked (Forbes) — Insteon’s flaw was worse in that it allowed access to anyone via the Internet. The researchers could see the exposed systems online but weren’t comfortable poking around further. I was — but I was definitely nervous about it and made sure I had Insteon users’ permission before flickering their lights.
  4. A Stick Figure Guide to Advanced Encryption Standard (AES) — exactly what it says.

October 19 2011

Four short links: 19 October 2011

  1. OmniTouch: Wearable Interaction Everywhere -- compact projector + kinect equivalents in shoulder-mounted multitouch glory. (via Slashdot)
  2. Price of Bitcoin Still Dropping -- currency is a confidence game, and there's no confidence in Bitcoins since the massive Mt Gox exchange hack.
  3. vim Text Objects -- I'm an emacs user, so this is like reading Herodotus. "On the far side of the Nile is a tribe who eat their babies and give birth to zebras made of gold. They also define different semantics for motion and text objects."
  4. Hard Drive Shortage Predicted (Infoworld) -- flooding in Thailand has knocked out 25% of the world's hard drive manufacturing capacity. Interested to see the effects this has on cloud providers. (via Slashdot)

October 18 2011

Six ways to think about an "infinite canvas"

This is part of an ongoing series related to Peter Meyers' project "Breaking the Page: Transforming Books and the Reading Experience." We'll be featuring additional material in the weeks ahead. (Note: This post originally appeared on A New Kind of Book. It's republished with permission.)

Next week, I'm speaking at the Books in Browsers conference on "the infinite canvas." When I started chewing on this topic, my thoughts centered on a very literal vision: a super-ginormous sheet for authors to compose on. And while I think there's some great creative territory to explore in this notion of space spanning endlessly up, down, left, and right, I also think there are a bunch of other ways to define what an infinite canvas is. Not simply a huge piece of virtual paper, but instead, an elastic space that does things no print surface could do, no matter how big it is. So, herewith, a quick stab at some non-literal takes on the topic. My version, if you will, of six different ways of thinking about the infinite canvas.

Continuously changeable

The idea here is simple: refreshable rather than static content. The actual dimensions of the page aren't what's elastic; instead, it's what's being presented that's continuously changing. In some ways, the home page of a newspaper's website serves as a good example here. Visit The Boston Globe half a dozen times over the course of a week and each time you'll see a new serving of news. (Haven't seen that paper's recent online makeover yet? Definitely worth checking out, and make sure to do so using a few different screen sizes — laptop, big monitor, mobile phone ... each showcases a different version of its morphing, on-the-fly design.)

Deep zooms

Ever seen that great short film, "Powers of Ten"? It's where the shot begins just above two picnickers on a blanket and then proceeds to zoom out so that you see the same picnic blanket, but now from 100 feet up, and then 1,000 feet, and on and on until you've got a view from outer space. (After the zoom out, the process reverses, and you end up getting increasingly microscopic glimpses of the blanket, its fabric, the individual strands of cotton, and so on.) Here's a presentational canvas that adds new levels of meaning at different magnifications. So, the viewer doesn't simply move closer or further away, as you might in a room when looking at, say, a person; as you get closer, you see progressively deeper into the body. Microsoft calls this "semantic zooming" (as part of its forthcoming touchscreen-friendly Metro interface). Bible software maker Glo offers some interesting content-zooming tools that implement this feature for readers looking to flip between bird's-eye and page views.
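The "Powers of Ten" idea translates naturally into code: rather than scaling a single image, each magnification band maps to its own tier of content. A minimal sketch of that lookup (the tier thresholds and labels are invented for illustration):

```python
# Hypothetical content tiers for a "semantic zoom": as the zoom level
# crosses each threshold, a deeper tier of content is revealed.
TIERS = [
    (0.0,   "view from space"),
    (0.5,   "the city block"),
    (2.0,   "two picnickers on a blanket"),
    (10.0,  "the weave of the blanket's fabric"),
    (100.0, "individual strands of cotton"),
]

def content_for_zoom(zoom):
    """Return the deepest tier whose threshold the zoom level has reached."""
    current = TIERS[0][1]
    for threshold, label in TIERS:
        if zoom >= threshold:
            current = label
    return current
```

The point of the structure is that crossing a threshold swaps content in, not just pixels: `content_for_zoom(1.0)` lands on the city block, while a much higher zoom reaches the fabric.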

Alternate geometries

A printed page is a 2-D rectangle of fixed dimensions. On the infinite canvas, the possibilities vary widely, deeply, and as Will Ferrell's character in "Old School" might say, "in ways we've never even heard of." Some possible shapes here: a 3-D cube with content on each side, or pyramid-shaped ebooks (Robert Darnton wrote about those in The New Age of the Book, where he proposed a multi-layered structure for academics with excess material that would bust the bindings of a printed book).

Canvases that give readers room to contemplate and respond


I just got a wonderful print book the other day called "Finish This Book." It contains a collection of fill-in-the-blank and finish-this-thought creative exercises. It reminded me that one thing digital books haven't yet explored much is leaving space for readers to compose their reactions. Sure, every ebook reader today lets you take notes, but as I've written before, these systems are pale replicas of the rich, reader-friendly note taking experiences we get in print books. Job No. 1 is solving those shortcomings, but then imagine the possibilities if digital books are designed to allow readers to compose extensive thoughts and reactions.

Delight

Print book lovers (I'm one of 'em) wax on about their beloved format's special talents: the smell, the feel, its nap-friendly weight. But touchscreen fans can play that game, too. Recall, for starters, the first time you tapped an iPhone or similarly modern touchscreen. Admit it: the way it felt to pinch, swipe, flick, and spread ... those gestures introduce a whole new pleasure palette. Reading and books have heretofore primarily been a visual medium: you look and ponder what's inside. Now, as we enter the age of touchscreen documents, content becomes a feast for our fingers as much as our eyes. Authors, publishers, and designers are just beginning to appreciate this opportunity, making good examples hard to point to. I do think that Erik Loyer is among the most interesting innovators with his Strange Rain app, a kind of mashup between short fiction and those particle visualizers like Uzu. It's not civilian-friendly yet, I don't think, but it points the way for artists interested in incorporating touch into their creations.

Jumbo content

A movable viewport lets your audience pan across massive content panoramas. Some of the possibilities here are photographic (Photosynth, Virtual History ROMA). Others have begun to explore massively wide content landscapes, such as timelines (History of Jazz). One new example I just learned about yesterday: London Unfurled for iPad, a hand-illustrated pair of 37-foot-long drawings of every building on the River Thames between Hammersmith Bridge and the Millennium Dome, complete with tappable backstories for most of the architecture on display.
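Under the hood, a movable viewport is little more than an offset into a canvas far larger than the screen, clamped so the reader can't pan past the edges. A toy sketch (all dimensions and names are hypothetical, not any particular app's API):

```python
def pan(offset, delta, viewport, canvas):
    """Move a 1-D viewport offset by a drag delta, clamped to the canvas.

    offset:   current left edge of the viewport, in canvas units
    delta:    how far the drag moved the view
    viewport: width of the visible window
    canvas:   total width of the content panorama
    """
    return max(0, min(offset + delta, canvas - viewport))
```

Panning left from near the start pins the view at 0; panning right near the end pins it at `canvas - viewport`, so the "infinite" feel comes entirely from the canvas being huge, not from the viewport doing anything exotic.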

These are just a few of the possibilities that I've spotted. What comes to mind when you think about the infinite canvas?

Webcast: Digital Bookmaking Tools Roundup #2 — Back by popular demand, in a second look at Digital Bookmaking Tools, author and book futurist Pete Meyers explores the existing options for creating digital books.

Join us on Thursday, November 10, 2011, at 10 am PT
Register for this free webcast

Photo: masterpiece by 416style, on Flickr


September 28 2011

Spoiler alert: The mouse dies. Touch and gesture take center stage

The moment that sealed the future of human-computer interaction (HCI) for me happened just a few months ago. I was driving my car, carrying a few friends and their children. One child, an 8-year old, pointed to the small LCD screen on the dashboard and asked me whether the settings were controlled by touching the screen. They were not. The settings were controlled by a rotary button nowhere near the screen. It was placed conveniently between the driver and passenger seats. An obvious location in a car built at the tail-end of an era when humans most frequently interacted with technology through physical switches and levers.

The screen could certainly have been one controlled by touch, and it is likely a safe bet that a newer model of my car has that very feature. However, what was more noteworthy was the fact that this child was assuming the settings could be changed simply by passing a finger over an icon on the screen. My epiphany: for this child's generation, a rotary button was simply old school.

This child is growing up in an environment where people are increasingly interacting with devices by touching screens. Smartphones and tablets are certainly significant innovations in areas such as mobility and convenience. But these devices are also ushering in an era that shifts everyone's expectations of how we engage in the use of technology. Children raised in a world where technology will be pervasive will touch surfaces, make gestures, or simply show up in order for systems to respond to their needs.

This means we must rethink how we build software, implement hardware, and design interfaces. If you are in any of the professions or businesses related to these activities, there are significant opportunities, challenges and retooling needs ahead.

It also means the days of the mouse are probably numbered. Long live the mouse.

The good old days of the mouse and keyboard

Probably like most of you, I have never formally learned to type, but I have been typing since I was very young, and I can pound out quite a few words per minute. I started on an electric typewriter that belonged to my dad. When my oldest brother brought home our first computer, a Commodore VIC-20, my transition was seamless. Within weeks, I was impressing relatives by writing small software programs that did little more than change the color of the screen or make a sound when the spacebar was pressed.

Later, my brother brought home the first Apple Macintosh. This blew me away. For the first time I could create pictures using a mouse and icons. I thought it was magical that I could click on an icon and then click on the canvas, hold the mouse button down, and pull downward and to the right to create a box shape.

Imagine my disappointment when I arrived in college and we began to learn a spreadsheet program using complex keyboard combinations.

Fortunately, when I joined the workforce, Microsoft Windows 3.1 was beginning to roll out in earnest.

The prospect of the demise of the mouse may be disturbing to many, not least of whom is me. To this day, even with my laptop, if I want to be the most productive, I will plug in a wireless mouse. It is how I work best. Or at least, it is currently the most effective way for me.

Most of us have grown up using a mouse and a keyboard to interact with computers. It has been this way for a long time, and we have probably assumed it would continue to be that way. However, while the keyboard probably has considerable life left in it, the mouse is likely dead.

Fortunately, while the trend suggests mouse extinction, we can momentarily relax, as it is not imminent.

But what about voice?

From science fiction to futurist projections, it has long been assumed that the future of human-computer interaction would largely be driven by our voices. Decades of movies have reinforced this image, and it has seemed quite plausible: we were more likely to see a door opened by voice than by a wave. After all, voice appears to be the most intuitive method and requires the least amount of effort.

Today, voice recognition software has come a long way. For example, accuracy and performance when dictating to a computer are quite remarkable. If you have broken your arms, dictation can be a highly efficient way to get things done on a computer. But despite some successes and important niches, broad-based voice interaction has simply not prospered.

It may be that a world in which we control and communicate with technology via voice is yet to come, but my guess is that it will likely complement other forms of interaction instead of being the dominant method.

There are other ways we may interact, too, such as via eye-control and direct brain interaction, but these technologies remain largely in the lab, niche-based, or currently out of reach for general use.

The future of HCI belongs to touch and gesture

Apple's Magic Trackpad

It is a joy to watch how people use their touch-enabled devices. Flicking through emails and songs seems so natural, as does expanding pictures by using an outward pinching gesture. Ever seen how quickly someone — particularly a child — intuitively gets the interface the first time they use touch? I have yet to meet someone who says they hate touch. Moreover, we are more likely to hear people say just how much they enjoy the ease of use. Touch (and multi-touch) has unleashed innovation and enabled completely new use cases for applications, utilities and gaming.
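The outward pinch gesture mentioned above reduces to simple geometry: the zoom factor is just the ratio of the fingers' current spread to their starting spread. A minimal sketch (touch points as (x, y) tuples; this mirrors the idea, not any particular platform's gesture API):

```python
from math import hypot

def pinch_scale(start_a, start_b, now_a, now_b):
    """Scale factor implied by a two-finger pinch.

    Returns current finger spread / starting spread: values above 1
    mean an outward pinch (zoom in), below 1 mean zoom out.
    """
    d0 = hypot(start_a[0] - start_b[0], start_a[1] - start_b[1])
    d1 = hypot(now_a[0] - now_b[0], now_a[1] - now_b[1])
    return d1 / d0
```

Doubling the distance between two fingers yields a scale of 2.0, which is part of why the gesture feels so direct: the content tracks the hands one-to-one.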

While not yet as pervasive, gesture-based computing (in the sense of computers interpreting body movements or emotions) is beginning to emerge in the mainstream. Anyone who has ever used Microsoft Kinect will be able to vouch for how compelling an experience it is. The technology responds adequately when we jump or duck. It recognizes us. It appears to have eyes, and gestures matter.

And let us not forget, too, that this is version 1.0.

The movie "Minority Report" teased us about a possible gesture-based future: the ability to manipulate images of objects in mid air, to pile documents in a virtual heap, and to cast aside less useful information. Today many of us can experience its early potential. Now imagine that technology embedded in the world around us.

The future isn't what it used to be

My bet is that in a world of increasingly pervasive technology, humans will interact with devices via touch and gestures — whether they are in your home or car, the supermarket, your workplace, the gym, a cockpit, or carried on your person. When we see a screen with options, we will expect to control those options by touch. Where it makes sense, we will use a specific gesture to elicit a response from some device, such as (dare I say it) a robot! And, yes, at times we may even use voice. However, to me, voice in combination with other behaviors is more obvious than voice alone.

But this is not some vision of a distant future. In my view, the touch and gesture era is right ahead of us.

What you can do now

Many programmers and designers are responding to the unique needs of touch-enabled devices. They know, for example, that a paradigm of drop-down menus and double-clicks is probably the wrong set of conventions to use in this new world of swipes and pinches. After all, millions of people are already downloading millions of applications for their haptic-ready smartphones and tablets (and as the drumbeat of consumerization continues, they will also want their enterprise applications to work this way, too). But viewing the future through too narrow a lens would be an error. Touch and gesture-based computing forces us to rethink interactivity and technology design on a whole new scale.

How might you design a solution if you knew your users would exclusively interact with it via touch and gesture, and that it might also need to be accessed in a variety of contexts and on a multitude of form factors?

At a minimum, it will bring software developers even closer to graphical interface designers and vice versa. Sometimes the skillsets will blur, and often they will be one and the same.

If you are an IT leader, your mobile strategy will need to include how your applications must change to accommodate the new ways your users will interact with devices. You will also need to consider new talent to take on these new needs.

The need for great interface design will increase, and there will likely be job growth in this area. In addition, as our world becomes increasingly run by and dependent upon software, technology architects and engineers will remain in high demand.

Touch and gesture-based computing are yet more ways in which innovation does not let us rest. It makes the pace of change, already on an accelerated trajectory, even more relentless. But the promise is the reward. New ways of engaging with technology enable novel ways of using it to enhance our lives. Simplifying the interface opens technology up, lowering the complexity level and allowing more people to participate in and benefit from its value.

Those who read my blog know my view that we are in a golden age of technology and innovation. It is only going to get more interesting in the months and years ahead.

Are you ready? I know there's a whole new generation that certainly is!

Photo (top): Mouse Macro by orangeacid, on Flickr

Android Open, being held October 9-11 in San Francisco, is a big-tent meeting ground for app and game developers, carriers, chip manufacturers, content creators, OEMs, researchers, entrepreneurs, VCs, and business leaders.

Save 20% on registration with the code AN11RAD


September 20 2011

Five digital design ideas from Windows 8

This is part of an ongoing series related to Peter Meyers' project "Breaking the Page: Transforming Books and the Reading Experience." We'll be featuring additional material in the weeks ahead. (Note: This post originally appeared on A New Kind of Book. It's republished with permission.)


Microsoft deserves most of the design criticism it gets, but let's give them credit when they move in the right direction. What they've previewed in Windows 8 — especially the Metro touchscreen interface — is really lovely. It's humane, efficient, and innovative. In fact, I think there's plenty in it for digital book designers to think about emulating. I whipped out my notepad while watching one of their Build presentations — "8 traits of great Metro style apps" — and jotted down some key takeaways. (Also included are approximate timestamps so you don't have to sit through the whole 90 minutes.) The best part? Whether or not Microsoft actually ships something that matches their demo, you can benefit from the great thinking they've done.


Tablet users' postures and hand positions (16:31)

Microsoft did loads of research, hoping to identify how tablet users sat and where they placed their hands when holding these devices. The results are probably intuitive for anyone who's spent time with a tablet, but the conclusions are nevertheless helpful. Most people use both hands to hold a tablet, and the most frequent touch zones are on the edges. The lesson? "To design for comfort, you need to position [key controls] near the edge" (19:23). And: "It takes a posture change to reach comfortably into the center of the screen (in any orientation)." In other words, it's not that users can't reach things in the middle of the screen, but it does require they change how they're sitting. So, "put frequently used interaction surfaces near the edge," and "locate key controls to be comfortable to use while holding on to the edges of a device."
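Microsoft's edge-comfort finding is easy to operationalize: measure how far a touch point sits from the nearest screen edge, and compare that to a comfort band. A sketch with an invented band width (the 120-pixel figure is an assumption for illustration, not from Microsoft's research):

```python
def nearest_edge_distance(x, y, width, height):
    """Distance from a touch point to the closest screen edge, same units as x/y."""
    return min(x, y, width - x, height - y)

def is_comfortable(x, y, width, height, edge_band=120):
    """True when the touch lands in the band near the edges that thumbs can
    reach while holding the device; the band width is a made-up default."""
    return nearest_edge_distance(x, y, width, height) <= edge_band
```

A control at (50, 400) on a 1024x768 screen sits in the comfortable band, while the dead center does not: reaching it requires the posture change the research describes.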

The difference between "fast & fluid" and "slow & jerky" (25:45)

The first phrase is Microsoft's (it's how they claim Windows 8 will perform; by the looks of the demo, they're pretty far along). The second phrase is mine, but in the demo it's clear that's what they want developers to stop doing. How? By using Microsoft-supplied transitional effects — for example, animating the way picture icons arrive on screen as users add them to a list. This might sound like frivolous eye candy, but the demo makes the point convincingly: these little points of polish make users feel a closer connection to the content and less like there's an engineer standing between them and what they want to do.

Specifically, what Microsoft is encouraging developers to do is use Windows 8's "Animation Library" to implement these effects and take advantage of things like hardware acceleration. This, they argue, saves programmers from having to master animation flourishes or learn After Effects; the ready-to-use animations take care of the design work. I mention all this because a sluggish reading experience — even one that's half a second too slow — can cause readers to bail.

This reminds me of a conversation I had last winter with Theo Gray, author of "The Elements for iPad" and one of the principals behind Touch Press. He was previewing an in-progress app for me and stopped the demo mid-way through. One of the gems onscreen that was supposed to spin was lagging a tiny bit. If you're even off by a little, he said, users will notice. Sweating the details like this may be one reason the Touch Press apps are so successful.

"A language for touch" (27:30)

The point Microsoft makes in this part of the presentation is this: if you're making a touchscreen app, don't have fingers and touch gestures replicate what a mouse does. Multitouch screens can and should be controlled differently than our mouse-driven computers are. And Microsoft makes this case by poking fun at the cumbersome steps an iOS user has to go through to drag an app icon from one home screen to another that's far away: "it's like driving a car from one side of the ocean to another." Anyone who's got more than a few screens' worth of apps knows what they're talking about. What Apple has currently designed is really the equivalent of how you'd scroll horizontally with a mouse (except in iOS there are no quick scrollbar shortcuts).

The solution that Microsoft demos is neat (28:48): you hold the app icon you want to move in place with one finger and then, with your other hand, you pan under it, swiping the screens quickly to get to the new placement spot where you want to drop the icon. It's very slick, and it's a reminder of the benefits of designing explicitly for a touchscreen.

"Semantic" zoom (33:25)

By now we're all used to tapping touchscreens to zoom in closer on an image or bump up the font size of an article. Microsoft has introduced a twist: zooming gestures now frequently deliver more and different kinds of info as users view content at different magnification levels. Viewed up close, for example, a group of neighboring app icons on the home screen appears as just that: a cluster of unlabeled icons.

But when the user zooms out to a bird's-eye view, that same group acquires a label, delivering an extra helping of information to help browsers decide where to go next or to rearrange groups into a different order.

The same kinds of semantic additions at different zoom levels could be helpful for digital book designers looking to provide different views (book-wide, chapter-level, and so on) for readers browsing through different levels of detail. A few months ago I wrote about Glo Bible and something similar they've done with their outline zooming tool.

True multitasking (46:42)

In Metro, two apps can co-exist side by side on the main screen. One sits center stage, and the other gets tucked in this so-called "snap" state: a compressed rectangular view that apps occupy when they cede the main part of the window to another app.

"A great snapped state," presenter Jensen Harris says, "invites users to keep an app on screen longer." These truncated views are fully functional. One fun example that gets a mention: imagine a piano app in snap state, a drum app on the main screen, and the user playing both of them at the same time. In other words, true multitasking and a world in which users are encouraged to make their apps interact with each other. It's a compelling reminder of something many serious readers (and writers) do all the time in the real world: keep multiple documents open simultaneously.



July 22 2011

Four short links: 22 July 2011

  1. Competitive Advantage Through Data -- the applications and business models for erecting barriers around proprietary data assets. Sees data businesses in these four categories: contributory data sourcing, offering cleaner data, data generated from service you offer, and viz/ux. The author does not yet appear to be considering when open or communal data is better than proprietary data, and how to make those projects work. (via Michael Driscoll)
  2. Interactive Touch Charts -- GPL v3 (and commercial) licensed JavaScript charting library that features interactivity on touch devices: zoom, pan, and click. (via James Pearce)
  3. Solar Cutter, Solar 3D Printer -- prototypes of solar powered maker devices. The cutter is a non-laser cutter that focuses the sun's rays to a super-hot point. The printer makes glass from sand (!!!!). Not only is this cool, but sand is widespread and cheap.
  4. Synthetic Biology Open Language -- a language for the description and the exchange of synthetic biological parts, devices, and systems. Another piece of the synthetic biology puzzle comes together. The parallel development of DIY manufacturing in the worlds of bits and basepairs is mindboggling. We live in exciting times. (via krs)


October 27 2010

Four short links: 27 October 2010

  1. Bleach -- HTML sanitizer, which some might say is an impossible task.
  2. TAT Home -- a gesture-powered 3d home screen for Android.
  3. Omaha -- the autoupdater used in Chrome and other Google projects, open sourced.
  4. Google Refine -- Freebase GridWorks has new home and new name, with new checkins happening all the time. An excellent ETL tool for figuring out what data you have and getting it into the right format.
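Bleach (item 1) takes the allowlist approach to the "impossible task": keep a small set of known-safe tags and escape everything else, rather than trying to enumerate what's dangerous. A toy stdlib sketch of that idea (not Bleach's actual API, and not production-safe):

```python
from html.parser import HTMLParser
from html import escape

class AllowlistSanitizer(HTMLParser):
    """Toy allowlist sanitizer in the spirit of Bleach: a few safe tags
    pass through (stripped of attributes), everything else is escaped."""
    ALLOWED = {"b", "i", "em", "strong", "p"}

    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag in self.ALLOWED:
            self.out.append(f"<{tag}>")   # drop all attributes
        else:
            self.out.append(escape(self.get_starttag_text()))

    def handle_endtag(self, tag):
        text = f"</{tag}>"
        self.out.append(text if tag in self.ALLOWED else escape(text))

    def handle_data(self, data):
        self.out.append(escape(data, quote=False))

def sanitize(html_text):
    s = AllowlistSanitizer()
    s.feed(html_text)
    return "".join(s.out)
```

The key design choice, shared with real sanitizers, is default-deny: a `<script>` tag is neutralized not because it was recognized as bad, but because it was never recognized as good.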


June 23 2010

Four short links: 23 June 2010

  1. Ira Glass on Being Wrong (Slate) -- fascinating interview with Ira Glass on the fundamental act of learning: being wrong. I had this experience a couple of years ago where I got to sit in on the editorial meeting at the Onion. Every Monday they have to come up with like 17 or 18 headlines, and to do that, they generate 600 headlines per week. I feel like that's why it's good: because they are willing to be wrong 583 times to be right 17. (via Hacker News)
  2. Real Lives and White Lies in the Funding of Scientific Research (PLoSBiology) -- very clear presentation of the problems with the current funding models of scientific research, where the acknowledged best scientists spend most of their time writing funding proposals. K.'s plight (an authentic one) illustrates how the present funding system in science eats its own seed corn. To expect a young scientist to recruit and train students and postdocs as well as producing and publishing new and original work within two years (in order to fuel the next grant application) is preposterous.
  3. jQTouch Roadmap -- interesting to me is the primary distinction between Sencha and jQTouch, namely that jQT is for small devices (phones) only, while Sencha handles small and large (tablet) touch-screen devices. (via Simon St Laurent)
  4. Travel Itineraries from Flickr Photo Trails (Greg Linden) -- clever idea, to use metadata extracted from Flickr photos (location, time, etc.) to construct itineraries for travellers, saying where to go, how long to spend there, and how long to expect to spend getting from place to place. Another story of the surprise value that can be extracted from overlooked data.
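The itinerary idea in item 4 starts from something simple: runs of consecutive photos taken in one place, with the first and last timestamps bounding the visit. A rough sketch of that extraction (the data shape is hypothetical, not Flickr's metadata format):

```python
from datetime import datetime

def dwell_times(photos):
    """Given (timestamp, place) pairs sorted by time, estimate how long the
    photographer stayed at each run of consecutive photos in one place.
    Returns (place, hours) pairs in visit order."""
    runs = []
    for ts, place in photos:
        if runs and runs[-1][0] == place:
            runs[-1][2] = ts          # extend the current run's end time
        else:
            runs.append([place, ts, ts])
    return [(place, (last - first).total_seconds() / 3600)
            for place, first, last in runs]
```

Two photos of the Colosseum taken two hours apart suggest a two-hour visit; the gap between the last Colosseum photo and the first Forum photo would, in a fuller version, estimate travel time between stops.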

June 22 2010

Four short links: 22 June 2010

  1. High-Speed Book Scanner -- you flip the pages, and it uses high-speed photography to capture images of each page. "But they're all curved!" Indeed, so they project a grid onto the page so as to be able to correct for the curvature. The creator wanted to scan Manga, but the first publisher he tried turned him down. I've written to him offering a pile of O'Reilly books to test on. We love this technology!
  2. Magic Tables, not Magic Windows (Matt Jones) -- thoughtful piece about how touch-screens are rarely used as a controller of abstract things rather than of real things, with some examples of the potential he's talking about. When we’re not concentrating on our marbles, we’re looking each other in the eye - chuckling, tutting and cursing our aim - and each other. There’s no screen between us, there’s a magic table making us laugh. It’s probably my favourite app to show off the iPad - including the ones we’ve designed! It shows that the iPad can be a media surface to share, rather than a proscenium to consume through alone.
  3. Myths and Fallacies of Personally Identifiable Information -- particularly relevant after reading Apple's new iTunes privacy policy. We talk about the technical and legal meanings of “personally identifiable information” (PII) and argue that the term means next to nothing and must be greatly de-emphasized, if not abandoned, in order to have a meaningful discourse on data privacy. (via Pete Warden)
  4. Mensch Font -- an interesting font, but this particularly caught my eye: Naturally I searched for a font editor, and the best one I found was Font Forge, an old Linux app ported to the Mac but still requiring X11. So that’s two ways OS X is borrowing from Linux for font support. What’s up with that? Was there an elite cadre of fontistas working on Linux machines in a secret bunker? Linux is, um, not usually known for its great designers. (via joshua on Delicious)

June 16 2010

Four short links: 16 June 2010

  1. So You Want to Be A Consultant -- absolutely spot-on tips for understanding the true business of a consultant. (via Hacker News)
  2. BBYIDX -- a free and open source idea-gathering application written in Ruby, [...] the basis of the Best Buy IdeaX website.
  3. The Git Parable -- The following parable will take you on a journey through the creation of a Git-like system from the ground up. Understanding the concepts presented here will be the most valuable thing you can do to prepare yourself to harness the full power of Git. The concepts themselves are quite simple, but allow for an amazing wealth of functionality to spring into existence. (via Pete Warden)
  4. Ext JS + jQTouch + Raphael = Sencha -- merging some touch and rich graphics libraries and developers. We’re setting up a foundation called Sencha Labs that will hold the copyright and trademarks for all the non-commercial projects affiliated with Sencha. Our license of choice for these projects is, and will continue to be, the MIT license. We will fund maintainers for our non-commercial projects with contributions from Sencha and the communities of these projects. (via bjepson on Twitter)
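The heart of the Git-like system the parable builds up (item 3) is content-addressable storage: every object is named by the hash of its own bytes. A bare-bones sketch (in-memory only; real Git adds trees, commits, and zlib-compressed files on disk):

```python
import hashlib

class TinyObjectStore:
    """Bare-bones content-addressable storage, the core idea behind Git:
    content is stored under the hash of its bytes, so identical content
    is stored exactly once and any corruption changes the object's name."""

    def __init__(self):
        self.objects = {}

    def put(self, content: bytes) -> str:
        oid = hashlib.sha1(content).hexdigest()
        self.objects[oid] = content
        return oid

    def get(self, oid: str) -> bytes:
        return self.objects[oid]
```

Storing the same bytes twice returns the same ID and consumes no extra space, which is exactly the property that makes cheap snapshots and branching fall out "for free" in the parable.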


December 07 2009

Four short links: 7 December 2009

  1. 3D Touchscreens -- Japan Science & Technology Agency and researchers at the University of Electro-communications have made a "photoelastic" touch screen. The LCD emits polarized light, picked up by a camera over the screen. Transparent rubber on the screen deforms when pressed, and the camera can pick this up. Interesting hack, though it's not yet a consumer-grade product.
  2. Eureqa -- open source tool for detecting equations and hidden mathematical relationships in your data. Its primary goal is to identify the simplest mathematical formulas which could describe the underlying mechanisms that produced the data. (via pigor on delicious)
  3. Science in the Open, It Wasn't Supposed To Be This Way -- Cameron Neylon on the leaked climate email messages as a trigger for open data. One of the very few credible objections to open research that I have come across is that by making material available you open your inbox to a vast community of people who will just waste your time. The people who can’t be bothered to read the background literature or learn to use the tools; the ones who just want the right answer. [...] my concern is that in a kneejerk response to suddenly make things available no-one will think to put in place the social and technical infrastructure that we need to support positive engagement, and to protect active researchers, both professional and amateur from time-wasters. Sounds like an open science call for social software, though I'm not convinced it's that easy. Humans can't distinguish revolutionaries from terrorists; it's unclear why we think computers should be able to.
  4. EtherPad Back Online Until Open Sourced -- Google bought collaborative real-time EtherPad and the team will work on Google Wave, but the transition plan was "you can't create more documents, and it'll all go away in March". Grumpiness ensued. Everyone makes mistakes online, but the secret is to listen, acknowledge the mistake, and correct your course.
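Eureqa's goal (item 2), finding the simplest formula that describes the data, can be caricatured in a few lines: score candidate formulas by fit error plus a complexity penalty and keep the winner. A toy sketch (the candidate set and penalty weight are invented; the real tool searches the formula space itself):

```python
def best_formula(xs, ys, candidates):
    """Pick the candidate formula with the lowest squared error plus a
    small complexity penalty, echoing the 'simplest formula that fits'
    idea. Each candidate is a (name, function, complexity) triple."""
    def score(name, fn, complexity):
        err = sum((fn(x) - y) ** 2 for x, y in zip(xs, ys))
        return err + 0.01 * complexity
    return min(candidates, key=lambda c: score(*c))[0]
```

On data generated by y = 2x, a linear candidate beats a quadratic one on both fit and simplicity; the penalty term is what breaks ties toward the simpler hypothesis when several fit equally well.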
