
September 14 2013

Formstone / Ben Plum
http://www.benplum.com/formstone

A growing collection of #jQuery #interface #plugins focused on structure and customization.

#webdesign

August 12 2013

Simple, fast map data editing | MapBox
http://www.mapbox.com/blog/geojsonio-announce

We are trying to make it easier to draw, change, and publish #maps. Some of the most important geospatial data is the information we know, observe, and can draw on a napkin. This is the kind of #data that we also like to collaborate on, like collecting bars that have free wifi or favorite running routes.

http://geojson.io aims to fix that. It’s an open source project built with #MapBox.js, #GitHub's powerful new Gist and #GeoJSON features, and an array of microlibraries that power import, export, editing, and lots more.
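
For readers new to the format, this is what such data looks like on the wire — a minimal GeoJSON FeatureCollection with a single point (the values and property names are illustrative, not taken from the project):

```typescript
// A minimal GeoJSON FeatureCollection: one point feature with a property,
// the kind of document geojson.io lets you draw, edit, and share.
const freeWifiBars = {
  type: "FeatureCollection",
  features: [
    {
      type: "Feature",
      geometry: {
        type: "Point",
        coordinates: [-77.0365, 38.8977], // [longitude, latitude]
      },
      properties: { name: "Example Bar", freeWifi: true },
    },
  ],
};

console.log(JSON.stringify(freeWifiBars, null, 2));
```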

#données #cartographie #interface

August 05 2013

Computer-Brain Interfaces Making Big Leaps - NYTimes.com
http://bits.blogs.nytimes.com/2013/08/04/disruptions-rather-than-time-computers-might-become-panacea-to-h

“It wasn’t so much writing a memory from scratch, it was basically connecting two different types of memories. We took a neutral memory, and we artificially updated that to make it a negative memory,” said Steve Ramirez, one of the M.I.T. neuroscientists on the project.

#mémoire #cerveau #ordinateur #cyborg

Scientists Trace Memories of Things That Never Happened
http://www.nytimes.com/2013/07/26/science/false-memory-planted-in-a-mouse-brain-study-shows.html

Control a rat with your brain
http://news.cnet.com/8301-17938_105-57596633-1/control-a-rat-with-your-brain

Harvard scientists create an #interface that allows humans to move a rat’s tail just by thinking about it.

July 30 2013

Industry design trends
http://imperavi.com/blog/industry_design_trends

In 2012, every WYSIWYG editor was unique, but the trend is toward uniformity:

http://imperavi.com/img/blog/editors-2013.png

#redaction #interface #WYSIWYG #picto #clicodrome


July 18 2013

Authentic #Design | Smashing Magazine
http://www.smashingmagazine.com/2013/07/16/authentic-design

The recently popularized “#flat” #interface style is not merely a trend. It is the manifestation of a desire for greater authenticity in design, a desire to curb visual excess and eliminate the fake and the superfluous.

#webdesign

July 17 2013

Augmented-reality interfaces to make real-world objects more intelligent - IQ by Intel
http://iq.intel.com/iq/35436112/augmented-reality-interfaces-help-make-real-world-objects-more-intelligent

MIT's Smarter Objects project - http://fluid.media.mit.edu/projects/smarter-objects - proposes using augmented reality to program real-world objects. When you point your tablet or smartphone camera at objects, you get access to a software #interface that lets you program them or make them interact with each other.

#realiteaugmentee #programmation

November 17 2011

Understanding Apple fans

The original Nexus One from Google

Having admitted a few months ago that I've come to terms with JavaScript, I have to admit that recent events have made me more sympathetic to iPhone fans. I'm a long-time Android user, and I don't plan to change; Android has a better track record on innovation and on openness. You can debate Google's commitment to open source, but openness is a joke when you can't even get an app on the phone without Apple's approval.

Apple's design sense is admittedly better, but Android's user interface is pretty good; if Apple didn't exist to teach us what great design was, we'd certainly be happy with Android. Way, way better than any of the feature-phones I've used in the past.

So what changed? I've been a sheltered Android user: I go to a lot of Google events, and at one point had a whole stack of Android phones sitting on my desk (most of which were tied to carriers that I don't use). I'm also an AT&T customer, and it was a simple matter to put an AT&T SIM into an unlocked phone and get on the network. But recently, I was running across the street when my trusty Nexus fell out of my pocket and had an unfortunate rendezvous with the wheels of a passing truck. So I reluctantly bought a new Motorola Atrix from AT&T. It's a nice piece of hardware: 1 GHz dual core processor, plenty of memory, etc. No complaints about the hardware.

But man, AT&T's notion of Android is a lot different from Google's. Android as Google ships it is really quite good. Not excellent, but more than good enough. AT&T has taken that basic goodness and broken it by piling on glopware, bloatware, and you-don't-need-it-and-you-can't-delete-it-ware. They've rearranged things in stupid ways, changed icons that were familiar and serviceable, and botched things up in many other ways. An AT&T logo for the browser? Why would I associate AT&T's death star with the web? Why list apps you've downloaded separately from apps that come with the phone? Why banish Google Maps from the home screen? (AT&T has its own GPS navigation app, which it charges for.)

I could go on, but I won't. Each change was minor, taken by itself, but they added up, and turned a good user experience into a mediocre user experience. Maybe I wouldn't feel so strongly if I weren't accustomed to the neater, trimmer Android that Google delivers; but suddenly, the light went on, and I said, "Oh, that's why Apple fans hate Android so much." I don't know what other carriers do; does Verizon mangle Android so badly on the phones they ship? As I said at the outset, I've been sheltered, and the real world isn't living up to my expectations.

The price of openness may well be letting vendors break stuff. And I suppose I'm willing to pay that price. But I don't have to be happy about it. I hope Google can figure out how to exert some control over what vendors do with Android; that would be good for the whole community. AT&T and other carriers are not helping Android, or themselves, by turning a great product into a second-rate one. And maybe I'm becoming soft in my old age, but I now understand what Apple fans hate about Android.



October 20 2011

Note to visualization creators: Add subtitles and narration

I came across a fantastic animated visualization from NASA that shows a space-based view of the Earth's fires. Like many visualizations in this genre, the combination of maps, data points and time-based animation proves quite powerful.

Beyond being just plain fascinating, this particular visualization also includes two simple but often neglected features: subtitles and narration.

NASA fire visualization
NASA's animated visualization of the Earth's fires is enhanced by subtitles and simple narration.

It's common for visualization creators to write accompanying posts that detail the development of their visualizations and/or dig further into findings (case in point: NASA's excellent write-up). These sorts of posts are great, but why not go the extra step and embed a degree of context within the animations? That way, the information contained in subtitles and narration will always float along with the core material. This is particularly important with embeddable objects because the flashy animated bits are what gets shared across the web. The contextual elements rarely come along for the ride.

That said, the NASA visualization is far from perfect. Notably, there doesn't appear to be an easy way to embed a version of the video with formal subtitles and narration. A Fast Company version is embeddable, but captions and narration are absent. A YouTube version gets a little closer — it includes the narration and YouTube's experimental captions — but it's not on par with the full version available through NASA's site. What I really want is an embeddable "official" animation with narration and subtitles that I can toggle on or off.

Strata 2012 — The 2012 Strata Conference, being held Feb. 28-March 1 in Santa Clara, Calif., will offer three full days of hands-on data training and information-rich sessions. Strata brings together the people, tools, and technologies you need to make data work.

Save 20% on registration with the code RADAR20


October 18 2011

Six ways to think about an "infinite canvas"

masterpiece by 416style, on Flickr

This is part of an ongoing series related to Peter Meyers' project "Breaking the Page: Transforming Books and the Reading Experience." We'll be featuring additional material in the weeks ahead. (Note: This post originally appeared on A New Kind of Book. It's republished with permission.)

Next week, I'm speaking at the Books in Browsers conference on "the infinite canvas." When I started chewing on this topic, my thoughts centered on a very literal vision: a super-ginormous sheet for authors to compose on. And while I think there's some great creative territory to explore in this notion of space spanning endlessly up, down, left, and right, I also think there are a bunch of other ways to define what an infinite canvas is. Not simply a huge piece of virtual paper, but instead, an elastic space that does things no print surface could do, no matter how big it is. So, herewith, a quick stab at some non-literal takes on the topic. My version, if you will, of six different ways of thinking about the infinite canvas.

Continuously changeable

The idea here is simple: refreshable rather than static content. The actual dimensions of the page aren't what's elastic; instead, it's what's being presented that's continuously changing. In some ways, the home page of a newspaper's website serves as a good example here. Visit The Boston Globe half a dozen times over the course of a week and each time you'll see a new serving of news. (Haven't seen that paper's recent online makeover yet? Definitely worth checking out, and make sure to do so using a few different screen sizes — laptop, big monitor, mobile phone ... each showcases a different version of its morphing, on-the-fly design.)
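
As a rough sketch of the mechanism behind that kind of morphing design — not the Globe's actual code — a page can watch the viewport with the standard matchMedia browser API and swap layouts on the fly:

```typescript
// Generic sketch: react to viewport-width changes with the standard
// matchMedia API. The breakpoint and class names are illustrative.
const wideScreen = window.matchMedia("(min-width: 960px)");

function applyLayout(isWide: boolean): void {
  document.body.classList.toggle("layout-wide", isWide);
  document.body.classList.toggle("layout-narrow", !isWide);
}

applyLayout(wideScreen.matches); // set the initial layout
wideScreen.addEventListener("change", (e) => applyLayout(e.matches));
```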

Deep zooms

Ever seen that great short video, "Powers of Ten"? It's where the shot begins just above two picnickers on a blanket and then proceeds to zoom out so that you see the same picnic blanket, but now from 100 feet up, and then 1,000 feet, and on and on until you've got a view from outer space. (After the zoom out, the process reverses, and you end up getting increasingly microscopic glimpses of the blanket, its fabric, the individual strands of cotton, and so on.) Here's a presentational canvas that adds new levels of meaning at different magnifications. So, the viewer doesn't simply move closer or further away, as you might in a room when looking at, say, a person. As you get closer, you see progressively deeper into the body. Microsoft calls this "semantic zooming" (as part of its forthcoming touchscreen-friendly Metro interface). Bible software maker Glo offers some interesting content zooming tools that implement this feature for readers looking to flip between bird's-eye and page views.
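
A toy sketch of the idea (the thresholds and labels are hypothetical): semantic zooming keys the content shown to the current magnification band, rather than merely scaling the same pixels:

```typescript
// Toy model of semantic zooming: each magnification band reveals a
// different level of meaning instead of just a bigger picture.
type ZoomLevel = { minScale: number; view: string };

const levels: ZoomLevel[] = [
  { minScale: 0,    view: "view from space" },
  { minScale: 10,   view: "picnic blanket" },
  { minScale: 100,  view: "fabric weave" },
  { minScale: 1000, view: "individual cotton strands" },
];

function viewFor(scale: number): string {
  // Pick the deepest level whose threshold the current scale has passed.
  let current = levels[0].view;
  for (const level of levels) {
    if (scale >= level.minScale) current = level.view;
  }
  return current;
}

console.log(viewFor(1));   // "view from space"
console.log(viewFor(250)); // "fabric weave"
```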

Alternate geometries

A printed page is a 2-D rectangle of fixed dimensions. On the infinite canvas, the possibilities vary widely, deeply, and as Will Ferrell's character in "Old School" might say, "in ways we've never even heard of." Some possible shapes here: a 3-D cube with content on each side, or pyramid-shaped ebooks (Robert Darnton wrote about those in The New Age of the Book, where he proposes a multi-layered structure for academics with excess material that would bust the bindings of a printed book).

Canvases that give readers room to contemplate and respond


I just got a wonderful print book the other day called "Finish This Book." It contains a collection of fill-in-the-blank and finish-this-thought creative exercises. It reminded me that one thing digital books haven't yet explored much is leaving space for readers to compose their reactions. Sure, every ebook reader today lets you take notes, but as I've written before, these systems are pale replicas of the rich, reader-friendly note taking experiences we get in print books. Job No. 1 is solving those shortcomings, but then imagine the possibilities if digital books are designed to allow readers to compose extensive thoughts and reactions.

Delight

Print book lovers (I'm one of 'em) wax on about their beloved format's special talents: the smell, the feel, its nap-friendly weight. But touchscreen fans can play that game, too. Recall, for starters, the first time you tapped an iPhone or similarly modern touchscreen. Admit it: the way it felt to pinch, swipe, flick, and spread ... those gestures introduce a whole new pleasure palette. Reading and books have heretofore primarily been a visual medium: you look and ponder what's inside. Now, as we enter the age of touchscreen documents, content becomes a feast for our fingers as much as our eyes. Authors, publishers, and designers are just beginning to appreciate this opportunity, making good examples hard to point to. I do think that Erik Loyer is among the most interesting innovators with his Strange Rain app, a kind of mashup between short fiction and those particle visualizers like Uzu. It's not civilian-friendly yet, I don't think, but it points the way for artists interested in incorporating touch into their creations.

Jumbo content

A movable viewport lets your audience pan across massive content panoramas. Some of the possibilities here are photographic (Photosynth, Virtual History ROMA). Others have begun to explore massively wide content landscapes, such as timelines (History of Jazz). One new example I just learned about yesterday: London Unfurled for iPad, a hand-illustrated pair of 37-foot-long drawings of every building on the River Thames between Hammersmith Bridge and the Millennium Dome, complete with tappable backstories on most of the architecture that's on display.

These are just a few of the possibilities that I've spotted. What comes to mind when you think about the infinite canvas?

Webcast: Digital Bookmaking Tools Roundup #2 — Back by popular demand, in a second look at Digital Bookmaking Tools, author and book futurist Pete Meyers explores the existing options for creating digital books.

Join us on Thursday, November 10, 2011, at 10 am PT
Register for this free webcast

Photo: masterpiece by 416style, on Flickr




September 28 2011

Spoiler alert: The mouse dies. Touch and gesture take center stage

Mouse Macro by orangeacid, on Flickr

The moment that sealed the future of human-computer interaction (HCI) for me happened just a few months ago. I was driving my car, carrying a few friends and their children. One child, an 8-year-old, pointed to the small LCD screen on the dashboard and asked me whether the settings were controlled by touching the screen. They were not. The settings were controlled by a rotary button nowhere near the screen. It was placed conveniently between the driver and passenger seats. An obvious location in a car built at the tail-end of an era when humans most frequently interacted with technology through physical switches and levers.

The screen could certainly have been one controlled by touch, and it is likely a safe bet that a newer model of my car has that very feature. However, what was more noteworthy was the fact that this child was assuming the settings could be changed simply by passing a finger over an icon on the screen. My epiphany: for this child's generation, a rotary button was simply old school.

This child is growing up in an environment where people are increasingly interacting with devices by touching screens. Smartphones and tablets are certainly significant innovations in areas such as mobility and convenience. But these devices are also ushering in an era that shifts everyone's expectations of how we engage with technology. Children raised in a world where technology will be pervasive will touch surfaces, make gestures, or simply show up in order for systems to respond to their needs.

This means we must rethink how we build software, implement hardware, and design interfaces. If you are in any of the professions or businesses related to these activities, there are significant opportunities, challenges and retooling needs ahead.

It also means the days of the mouse are probably numbered. Long live the mouse.

The good old days of the mouse and keyboard

Probably like most of you, I have never formally learned to type, but I have been typing since I was very young, and I can pound out quite a few words per minute. I started on an electric typewriter that belonged to my dad. When my oldest brother brought home our first computer, a Commodore VIC-20, my transition was seamless. Within weeks, I was impressing relatives by writing small software programs that did little more than change the color of the screen or make a sound when the spacebar was pressed.

Later, my brother brought home the first Apple Macintosh. This blew me away. For the first time I could create pictures using a mouse and icons. I thought it was magical that I could click on an icon and then click on the canvas, hold the mouse button down, and pull downward and to the right to create a box shape.

Imagine my disappointment when I arrived in college and we began to learn a spreadsheet program using complex keyboard combinations.

Fortunately, when I joined the workforce, Microsoft Windows 3.1 was beginning to roll out in earnest.

The prospect of the demise of the mouse may be disturbing to many, not least of whom is me. To this day, even with my laptop, if I want to be the most productive, I will plug in a wireless mouse. It is how I work best. Or at least, it is currently the most effective way for me.

Most of us have grown up using a mouse and a keyboard to interact with computers. It has been this way for a long time, and we have probably assumed it would continue to be that way. However, while the keyboard probably has considerable life left in it, the mouse is likely dead.

Fortunately, while the trend suggests mouse extinction, we can momentarily relax, as it is not imminent.

But what about voice?

From science fiction to futurist projections, it has always been assumed that the future of human-computer interaction would largely be driven by using our voices. Movies over decades have reinforced this image, and it has seemed quite plausible. We were more likely to see a door open via voice rather than a wave. After all, it appears to be the most intuitive and requires the least amount of effort.

Today, voice recognition software has come a long way. For example, accuracy and performance when dictating to a computer are quite remarkable. If you have broken your arms, this can be a highly efficient way to get things done on a computer. But despite having some success and filling important niches, broad-based voice interaction has simply not prospered.

It may be that a world in which we control and communicate with technology via voice is yet to come, but my guess is that it will likely complement other forms of interaction instead of being the dominant method.

There are other ways we may interact, too, such as via eye-control and direct brain interaction, but these technologies remain largely in the lab, niche-based, or currently out of reach for general use.

The future of HCI belongs to touch and gesture

Apple's Magic Trackpad

It is a joy to watch how people use their touch-enabled devices. Flicking through emails and songs seems so natural, as does expanding pictures by using an outward pinching gesture. Ever seen how quickly someone — particularly a child — intuitively gets the interface the first time they use touch? I have yet to meet someone who says they hate touch. Moreover, we are more likely to hear people say just how much they enjoy the ease of use. Touch (and multi-touch) has unleashed innovation and enabled completely new use cases for applications, utilities and gaming.
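
For a sense of what that outward pinch looks like to a developer, here is a minimal sketch using the standard TouchEvent API (the element id and the 1.25 threshold are illustrative choices, not from any particular product):

```typescript
// Minimal two-finger spread ("outward pinch") detector using the
// standard TouchEvent API. Element id and threshold are illustrative.
let startDistance = 0;

function distance(touches: TouchList): number {
  const a = touches[0];
  const b = touches[1];
  return Math.hypot(a.clientX - b.clientX, a.clientY - b.clientY);
}

const photo = document.getElementById("photo")!;

photo.addEventListener("touchstart", (e: TouchEvent) => {
  if (e.touches.length === 2) startDistance = distance(e.touches);
});

photo.addEventListener("touchmove", (e: TouchEvent) => {
  if (e.touches.length === 2 && startDistance > 0) {
    const scale = distance(e.touches) / startDistance;
    if (scale > 1.25) console.log("spread detected: zoom in");
  }
});
```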

While not yet as pervasive, gesture-based computing (in the sense of computers interpreting body movements or emotions) is beginning to emerge in the mainstream. Anyone who has ever used Microsoft Kinect will be able to vouch for how compelling an experience it is. The technology responds adequately when we jump or duck. It recognizes us. It appears to have eyes, and gestures matter.

And let us not forget, too, that this is version 1.0.

The movie "Minority Report" teased us about a possible gesture-based future: the ability to manipulate images of objects in mid air, to pile documents in a virtual heap, and to cast aside less useful information. Today many of us can experience its early potential. Now imagine that technology embedded in the world around us.

The future isn't what it used to be

My bet is that in a world of increasingly pervasive technology, humans will interact with devices via touch and gestures — whether they are in your home or car, the supermarket, your workplace, the gym, a cockpit, or carried on your person. When we see a screen with options, we will expect to control those options by touch. Where it makes sense, we will use a specific gesture to elicit a response from some device, such as (dare I say it) a robot! And, yes, at times we may even use voice. However, to me, voice in combination with other behaviors is more obvious than voice alone.

But this is not some vision of a distant future. In my view, the touch and gesture era is right ahead of us.

What you can do now

Many programmers and designers are responding to the unique needs of touch-enabled devices. They know, for example, that a paradigm of drop-down menus and double-clicks is probably the wrong set of conventions to use in this new world of swipes and pinches. After all, millions of people are already downloading millions of applications for their haptic-ready smartphones and tablets (and as the drumbeat of consumerization continues, they will want their enterprise applications to work this way, too). But viewing the future through too narrow a lens would be an error. Touch and gesture-based computing forces us to rethink interactivity and technology design on a whole new scale.

How might you design a solution if you knew your users would exclusively interact with it via touch and gesture, and that it might also need to be accessed in a variety of contexts and on a multitude of form factors?

At a minimum, it will bring software developers even closer to graphical interface designers and vice versa. Sometimes the skillsets will blur, and often they will be one and the same.

If you are an IT leader, your mobile strategy will need to include how your applications must change to accommodate the new ways your users will interact with devices. You will also need to consider new talent to take on these new needs.

The need for great interface design will increase, and there will likely be job growth in this area. In addition, as our world becomes increasingly run by and dependent upon software, technology architects and engineers will remain in high demand.

Touch and gesture-based computing are yet more ways in which innovation does not let us rest. It makes the pace of change, already on an accelerated trajectory, even more relentless. But the promise is the reward. New ways to engage with technology enable novel ways to use it to enhance our lives. Simplifying the interface opens up technology so it becomes even more accessible, lowering the complexity level and allowing more people to participate in and benefit from its value.

Those who read my blog know my view that I believe we are in a golden age of technology and innovation. It is only going to get more interesting in the months and years ahead.

Are you ready? I know there's a whole new generation that certainly is!

Photo (top): Mouse Macro by orangeacid, on Flickr

Android Open, being held October 9-11 in San Francisco, is a big-tent meeting ground for app and game developers, carriers, chip manufacturers, content creators, OEMs, researchers, entrepreneurs, VCs, and business leaders.

Save 20% on registration with the code AN11RAD




March 24 2011

UI is becoming an "embodied" model

The digital use case used to focus on a single person huddled over a keyboard, but now designers must understand how different interfaces and technologies influence a variety of user experiences. A single set of wireframes just won't cut it anymore.

In the following interview, Christian Crumlish (@mediajunkie), director of consumer experience at AOL and a speaker at the upcoming Web 2.0 Expo, discusses the unique challenges and opportunities designers now face.


What are the most important aspects of current UI research?

Christian Crumlish: The most interesting area right now is the intersection of the mobile, social, and real-time aspects with the physical, gestural, kinetic, and haptic interfaces. User interface is moving from the periphery of our senses — aside from a heavy reliance on the visual — to a much more embodied model where we will be able to use the full somatic sensory apparatus of our bodies to interact with systems and networks.

When you combine that with ubiquity, location-awareness, and a portable social graph, you're starting to meld the virtual and the physical. That should lead to all kinds of breakthroughs that are hard to picture in detail from our vantage today.

What challenges does cloud computing create for UI?

Christian Crumlish: The challenges largely go beyond the UI level and tend to be most interesting when talking about the broader user experience. Working with the cloud raises challenges around syncing, caching, and continuity across multiple modes and entry points. More and more, designers in this space need to look at the holistic experience and then explore how best to express it in various contexts, with different mediating devices.

Web 2.0 Expo San Francisco 2011, being held March 28-31, will examine key pieces of the digital economy and the ways you can use important ideas for your own success.

Save 20% on registration with the code WEBSF11RAD



Your Web 2.0 Expo session has an unusual title: "Start Using UX as a Weapon." How can a "weaponized" user experience be put to work?


Christian Crumlish: UX can be used as a weapon in several ways:

  • Differentiating a product from its commodified, ho-hum competitors.
  • Allowing the design/development team to maintain focus on the experiences of real people using the product.
  • Incorporating the problem-solving, ideation, visualizing, communication, and innovation techniques of the design tradition.
  • Integrating disparate aspects of a complex experience.



What sites, apps or platforms do you find to have the most impressive or effective interfaces?


Christian Crumlish: I think the wider realm of UX writ large offers much more potential for breakthrough experiences than a focus on the details of specific user interfaces. That's not to say that an elegant design and a well-laid-out screen aren't still a huge part of making a great experience.

That said, some of the sites, apps, and platforms we love that have distinguished themselves by providing for great user experiences include Dropbox, Hipmunk, Mint, Tumblr, Feedly, Etsy, and Zappos.

This interview was edited and condensed.





March 23 2011

Open Question: How important is a mobile device's "feel"?

Whether you buy into Apple's "post-PC" spin, the rise of mobile computing does seem to shift the perspective from raw tech specs to overall experience. That's likely born from mobile's obstacles: it's harder to text than type, it's harder to swipe than click, and it's harder to scan when your big beautiful monitor has been replaced by a mini screen. The sheer horsepower of a mobile device doesn't mean much if you can't interact with the thing.

Most of us have adapted to mobile methods. Our thumbs are nimble and our swipes are filled with purpose. But it's also clear — to me at least — that adaptation is only part of the equation: a good mobile experience is connected to the overall "feel" for a device. (Note: My definition of feel goes beyond hardware. The speed, responsiveness and elegance of the software shape my opinion of a device.)

Am I alone in this thinking? That's what I hope to find out through the following questions:

  • When considering a mobile device (phone or tablet), how much importance do you place on technical specifications?
  • How about the device's overall feel — does that factor into your decision?
  • Do you have to hold and interact with a mobile device before purchasing it?
  • In your experience, which mobile devices have the best feel? Which have the worst?

Please weigh in through the comments.

Web 2.0 Expo San Francisco 2011, being held March 28-31, will examine key pieces of the digital economy and the ways you can use important ideas for your own success.

Save 20% on registration with the code WEBSF11RAD








March 17 2011

7 emergent themes from Webstock reveal a framework

At this year's Webstock in Wellington, New Zealand, there were no reports of bad acid or rainstorms to muck things up, just a few sore tummies from newcomers (like me) eating too many Pineapple Lumps.

Webstock wasn't a rock concert, but a gathering of the geek tribes that lived up to its reputation with accomplished speakers, an interesting mix of topics, and a scenic venue on the harbor.

The conference organizers sat the speakers together, and they developed a nice camaraderie — unusual for a tech conference — that had them referencing and building on each other's presentations. While the presenters came from a variety of backgrounds — engineering, design and the arts — the talks were anchored in how their topics related to web and mobile business processes.

A few useful and instructive themes emerged over the course of the conference (besides presenters using metaphors from Pixar's "Wall-E"). Sessions by Doug Bowman, Marco Arment, Kristina Halvorson, Steve Souders, Frank Chimero, Christine Perfetti, Josh Clark and Michael Lopp most informed the following.

Deliver value early and build fast with a purpose

The best results come from agile-like processes and cultures that start small and iterate fast. Not surprisingly, this common theme was echoed most often by those with startup backgrounds. By limiting features, releasing fast, and learning, products and services can quickly become aligned to user needs. Resisting feature bloat is important to staying fast, especially after initial success exposes a product to what may be a much larger number of users.


The agile approach also lets organizations stay tuned in to serendipity — uses and features that users exploit that were not part of planned use scenarios (Twitter becoming a communications tool for activism is one example). Corollaries to building fast include failing fast, i.e., quickly acknowledging what doesn't work, and using the scientific method to test and learn.

Scientific method

Elements of the scientific method — hypothesis setting, testing, learning and adjusting — are just as valid for web performance tuning as for user interfaces and comprehension. Embracing the scientific method can help an organization establish a learning culture. It's not trivial: to gain the most from testing, you need familiarity with data and quantitative analysis, including visualization (charts), statistics, and machine learning.
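
As a small, concrete taste of that testing step — a hedged sketch with purely illustrative numbers — a two-proportion z-test tells you whether variant B's conversion rate beats variant A's by more than chance:

```typescript
// Toy two-proportion z-test: is variant B's conversion rate a real
// improvement over A's, or just noise? Numbers are purely illustrative.
function zTest(convA: number, nA: number, convB: number, nB: number): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const pooled = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  return (pB - pA) / se; // |z| > 1.96 ≈ significant at the 5% level
}

// 12% vs. 15% conversion over 1,000 visitors each:
console.log(zTest(120, 1000, 150, 1000).toFixed(2)); // ≈ 1.96, borderline
```

A borderline result like this is exactly where the "learning and adjusting" part matters: run the test longer or accept the uncertainty, but decide from the data.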

Web 2.0 Expo San Francisco 2011, being held March 28-31, will examine key pieces of the digital economy and the ways you can use important ideas for your own success.

Save 20% on registration with the code WEBSF11RAD



Keep everything human-scaled


Acknowledge that the builders and users of products and services are people whose needs should be understood and accommodated. For user interfaces, human scale means keeping operations and processes simple, functionally consistent, cognitively easy, and intuitive. For engineers and managers that means developing strategies that help align efforts, listening to users and making fast build cycles possible. Communicating on a human scale is best done through narrative processes and storytelling, and establishing a communications strategy. Apple products were often offered as examples of simple-to-use, human-scaled designs that delight users.



Communicating with stories


To effectively communicate with users — to share, to guide and convince, to engage — use storytelling devices like narration and cause-and-effect. Stories are a way to cut through and make sense of the increasing volume and ubiquity of data and noise we experience via the web and mobile devices. Humans are wired to understand and remember stories — they make content and interfaces visceral, memorable and viral (i.e., worth repeating). Different speakers evoked storytelling as a way to develop content strategies, to stay connected with others, as an interface metaphor (e.g., using scrolling to represent the passage of time), and as a way to put data to use.


My background in data and analysis kept me keenly interested in David McCandless' "Information is Beautiful" session, with its mix of analytics, visually rendered data and storytelling. David does beautiful, playful and insightful infographics that focus on expressing the relative magnitude/scale of large numbers — especially numbers so large they are abstractions to most people — and on telling the stories in the data. His Debtris visualizations use a visual metaphor from the game Tetris to show the size of the US and UK debt. David's work is always designed to tell a story, keeping users compelled to drill further and stay engaged. It's worth noting that David shared some failures and what he learned from them — the scientific method in action. David, who has a journalism background, calls his mix of charts and narration a "charticle" — a concept worth remembering when teasing a story out of data.



Users expect multi-touch and gestural interfaces


Multi-touch and gestural functions should be treated as the primary user interface, not functionality tacked onto conventional interfaces. If touch and gestures can do the job, there's no reason to keep a keyboard or mouse. Users see multi-touch and gestures as intuitive and the "right" interface for many tasks, expecting them on many devices, not just mobile phones and tablets. Vendors who fail to fully embrace multi-touch and gestures as the primary interface to devices and services do so at their own peril.



Simple is hard


It's hard work building the right amount of functionality, providing the optimal amount and frequency of content, creating devices that feel right, and designing user interfaces that are natural and "disappear." Making a simple experience for users — one that takes away the cognitive load of how to use a device so they can focus on why they are using it — takes effort, sweating the details, thinking, testing, and creativity. To succeed at simple, build fast and use scientific learning processes to refine and improve interfaces. Spend as much time taking out what does not work as building on what does.

Find inspiration everywhere

The presenters commonly described creative flashes inspired by their own needs and by their curiosity about topics outside their primary expertise. Creativity often gets sparked by making non-obvious connections.


Build fast, learn, simplify, tell stories, stay curious — taken together, these themes provide a useful framework for quickly building the next generation of services, applications and devices that become the warp and woof of our increasingly digital and mobile lives.





March 04 2011

Computers are looking back at us

As researchers work to increase human-computer interactivity, the lines between real and digital worlds are blurring. Augmented reality (AR), still in its infancy, may be set to explode. As the founders of Bubbli, a startup developing an AR iPhone app, said in a recent Silicon Valley Blog post by Barry Bazzell: "Once we understand reality through a camera lens ... the virtual and real become indistinguishable."

Eyetracking

Kevin Kelly, co-founder and senior maverick at Wired magazine, recently pointed out in a keynote speech at TOC 2011 that soon the computers we're looking at would be looking back at us (the image above is from Kelly's presentation).

"Soon" turns out to be now: Developers at Swedish company Tobii Technology have created 20 computers that are controlled by eye movement. Tom Simonite described the technology in a recent post for MIT's Technology Review:

The two cameras below the laptop's screen use infrared light to track a user's pupils. An infrared light source located next to the cameras lights up the user's face and creates a "glint" in the eyes that can be accurately tracked. The position of those points is used to create a 3-D model of the eyes that is used to calculate what part of the screen the user is looking at; the information is updated 40 times per second.
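
The pipeline Simonite describes — sample the eyes, model them, map the gaze to a screen position, 40 times per second — might be consumed by an application roughly like this (a generic sketch; the types and names are hypothetical and are not Tobii's API):

```typescript
// Generic sketch of consuming a ~40 Hz gaze stream and smoothing it
// into a usable screen position. All names here are hypothetical —
// this is not Tobii's API, just an illustration of the data flow.
type GazeSample = { x: number; y: number }; // screen coordinates in pixels

let smoothed: GazeSample = { x: 0, y: 0 };
const alpha = 0.3; // smoothing factor: higher = more responsive, jitterier

function onGazeSample(sample: GazeSample): void {
  // An exponential moving average damps the jitter in raw pupil tracking.
  smoothed = {
    x: alpha * sample.x + (1 - alpha) * smoothed.x,
    y: alpha * sample.y + (1 - alpha) * smoothed.y,
  };
}

// A 40 Hz tracker would call onGazeSample every 25 ms; one simulated tick:
onGazeSample({ x: 512, y: 384 });
console.log(smoothed); // { x: 153.6, y: 115.2 }
```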

The exciting part here is a comment in the article by Barbara Barclay, general manager of Tobii North America:

We built this conceptual prototype to see how close we are to being ready to use eye tracking for the mass market ... We think it may be ready.

Tobii released a video explaining the system.

This type of increased interaction has potential across industries — intelligent search, AR, personalization, recommendations, to name just a few channels. Search models built on interaction data gathered directly from the user could also augment the social aggregation that search engine companies are currently focused on. Engines could incorporate what you like with what you see and do.





June 19 2008

TERRA 433: Papiroflexia

And now for something wonderfully different! Papiroflexia (Spanish for “Origami”) is the animated tale of Fred, a skillful paper folder who could shape the world with his hands. Joaquin Baldwin used Photoshop and After Effects to create this fanciful animation that may resonate with many of us who would like to mold the human world into one just a little more natural.