April 18 2012

Four short links: 18 April 2012

  1. CartoDB (GitHub) -- open source geospatial database, API, map tiler, and UI. For feature comparison, see Comparing Open Source CartoDB to Fusion Tables (via Nelson Minar).
  2. Future Telescope Array Drives Exabyte Processing (Ars Technica) -- Astronomical data is massive, and requires intense computation to analyze. If it works as planned, Square Kilometer Array will produce over one exabyte (2^60 bytes, or approximately 1 billion gigabytes) every day. This is roughly twice the global daily traffic of the entire Internet, and will require storage capacity at least 10 times that needed for the Large Hadron Collider. (via Greg Linden) A quick scale check of these figures follows this list.
  3. Faster Touch Screens More Usable (Tom's Hardware) -- check out that video! (via Greg Linden)
  4. Why Microsoft's New Open Source Division (Simon Phipps) -- The new "Microsoft Open Technologies, Inc." provides an ideal firewall to protect Microsoft from the risks it has been alleging exist in open source and open standards. As such, it will make it "easier and faster" for them to respond to the inevitability of open source in their market without constant push-back from cautious and reactionary corporate process.
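
A quick scale check of the figures quoted in item 2, sketched in TypeScript; it assumes 1 GB = 2^30 bytes, and the Internet-traffic comparison is simply the one given in the quote, not something recomputed here:

    // Scale check for the Square Kilometer Array figure (assuming 1 GB = 2^30 bytes).
    const bytesPerDay = 2 ** 60;                   // "over one exabyte" per day
    const gigabyte = 2 ** 30;                      // bytes
    console.log((bytesPerDay / gigabyte).toExponential(2));  // ~1.07e9 -> "approximately 1 billion gigabytes"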

March 07 2012

Buttons were an inspired UI hack, but now we've got better options

If you've ever seen a child interact with an iPad, you've seen the power of the touch interface in action. Is this a sign of what's to come — will we be touching and swiping screens rather than tapping buttons? I reached out to Josh Clark (@globalmoxie), founder of Global Moxie and author of "Tapworthy," to get his thoughts on the future of touch and computer interaction, and whether or not buttons face extinction.

Clark says a touch-based UI is more intuitive to the way we think and act in the world. He also says touch is just the beginning — speech, facial expression, and physical gestures are on their way, and we need to start thinking about content in these contexts.

Clark will expand on these ideas at Mini TOC Austin on March 9 in Austin, Texas.

Our interview follows.

Are we close to seeing the end of buttons?

Josh Clark: I frequently say that buttons are a hack, and people sometimes take that the wrong way. I don't mean it in a particularly negative way. I think buttons are an inspired hack, a workaround that we've needed just to get stuff done. That's true in the real world as well as the virtual: A light switch over here to turn on a light over there isn't especially intuitive, and it's something that has to be learned, and re-learned for every room we walk into. That light switch introduces a middle man, a layer of separation between the action and the thing you really want to work on, which is the light. The switch is a hack, but a brilliant one because it's just not practical to climb up a ladder in a dark room to screw in the light bulb.

Buttons in interfaces are a similar kind of hack — an abstraction we've needed to make the desktop interface work for 30 years. The cursor, the mouse, buttons, tabs, menus ... these are all prosthetics we've been using to wrangle content and information.

With touchscreen interfaces, though, designers can create the illusion of acting on information and content directly, manipulating it like a physical object that you can touch and stretch and drag and nudge. Those interactions tickle our brains in different ways from how traditional interfaces work because we don't have to process that middle layer of UI conventions. We can just touch the content directly in many cases. It's a great way to help cut through complexity.

The result is so much more intuitive, so much more natural to the way we think and act in the world. The proof is how quickly people with no computing experience — people like toddlers and seniors — take to the iPad. They're actually better with these interfaces than the rest of us because they aren't poisoned by 30 years of desktop interface conventions. Follow the toddlers; they're better at it than we are.

So, yes, in some contexts, buttons and other administrative debris of the traditional interface have run their course. But buttons remain useful in some contexts, especially for more abstract tasks that aren't easily represented physically. The keyboard is a great example, as are actions like "send to Twitter," which don't have readily obvious physical components. And just as important, buttons are labeled with clear calls to action. As we turn the corner into popularizing touch interactions, buttons will still have a place.

Mini TOC Austin — Being held March 9, 2012 — right before SXSW — O'Reilly Tools of Change presents Mini TOC Austin, a one-day event focusing on Austin's thriving publishing, tech, and bookish-arts community.

Register to attend Mini TOC Austin

What kinds of issues do touch- and gesture-oriented interfaces present?

Josh Clark: There are issues for both designers and users. In general, if a touchscreen element looks or behaves like a physical object, people will try to interact with it like one. If your interface looks like a book, people will try to turn its pages. For centuries, designers have dressed up their designs to look like physical objects, but that's always just been eye candy in the past. With touch, users approach those designs very differently; they're promises about how the interface works. So designers have to be careful to deliver on those promises. Don't make your interface look like a book, for example, if it really works through desktop-like buttons. (I'm looking at you, Contacts app for iPad.)

So, you can create really intuitive interfaces by making them look or behave like physical objects. That doesn't mean that everything has to look just like a real-world object. Windows Phone and the forthcoming Windows 8 interface, for example, use a very flat tile-like metaphor. It doesn't look like a 3-D gadget or artifact, but it does behave with real-world physics. It's easy to figure out how to slide and work the content on the screen. People figure that stuff out really quickly.

The next hurdle — and the big opportunity for touch interfaces — is moving to more abstract gestures: two- and three-finger swipes, a full-hand pinch, and so on. In those cases, gestures become the keyboard shortcuts of touch and begin to let you create applications that you play more than you use, almost like an instrument. But wait, here I am talking about abstract gestures; didn't I just say that abstractions — like buttons — are less than ideal? Well, yeah, the trouble is you don't want to have the overhead of processing an interface, of thinking through how it works. The thing about physical abstractions (like gestures) versus visual abstractions (like buttons) is that physical actions can be absorbed into muscle memory. That kind of subconscious knowledge is actually much faster than visual processing — it's why touch typists are so much faster than people who visually peck at the keys. So, once you learn and absorb those physical actions — a two-finger swipe always does this or that — then you can actually move really quickly through an interface in the same way a pianist or a typist moves through a keyboard. Intent fluidly translated to action.
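
To make the "keyboard shortcuts of touch" idea concrete, here is a minimal sketch of a two-finger swipe handler using the browser's Touch Events API; the "reader" element and the page-turning functions are hypothetical stand-ins for whatever the gesture should trigger:

    // Minimal two-finger swipe detector (browser Touch Events).
    const el = document.getElementById('reader')!;   // hypothetical container element
    let startX: number | null = null;

    el.addEventListener('touchstart', (e: TouchEvent) => {
      // Only arm the gesture when exactly two fingers go down.
      startX = e.touches.length === 2 ? e.touches[0].clientX : null;
    });

    el.addEventListener('touchend', (e: TouchEvent) => {
      if (startX === null) return;
      const delta = e.changedTouches[0].clientX - startX;
      if (delta < -80) goToNextPage();   // two-finger swipe left
      if (delta > 80) goToPrevPage();    // two-finger swipe right
      startX = null;
    });

    function goToNextPage() { /* hypothetical: advance the view */ }
    function goToPrevPage() { /* hypothetical: step back */ }

Once learned, the swipe behaves like the muscle-memory shortcut Clark describes: there is no visible control to find, just a motion.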

But how do you teach that stuff? Swiping a card, pinching a map, or tapping a photo are all based on actions we know from the physical world. But a two-finger swipe has no prior meaning. It's not something we'll guess. Gestures are invisible with no labels, so that means they have to be taught.

Screenshot from Apple's built-in trackpad tutorial.

In what ways can UI design alleviate these learning issues?

Josh Clark: Designers should approach this by thinking through how we learn any physical action in the real world: observation of visual cues, demonstration, and practice. Too often, designers fall back on instruction manuals (iPad apps that open with a big screen of gesture diagrams) or screencasts. Neither is very effective.

Instead, designers have to do a better job of coaching people in context, showing our audiences how and when to use a gesture in the moment. More of us need to study video game design because games are great at this. In so many video games, you're dropped into a world where you don't even know what your goal is, let alone what you're capable of or what obstacles you might encounter. The game rides along with you, tracking your progress, taking note of what you've encountered and what you haven't, and giving in-context instruction, tips, and demonstrations as you go. That's what more apps and websites should do. Don't wait for people to somehow find a hidden gesture shortcut; tell people about it when they need it. Show an animation of the gesture and wait for them to copy it. Demonstration and practice — that's how we learn all physical actions, from playing an instrument to learning a tennis serve.
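
One way to read that "coach in context" advice as code is sketched below. The assumptions are loud ones: showGestureHint() and hideGestureHint() are hypothetical overlay helpers, and a localStorage flag stands in for however an app remembers what a reader has already learned:

    // In-context coaching sketch: demonstrate a gesture the first time it is
    // relevant, then stay quiet once the user has actually performed it.
    // (These functions would be wired to the view lifecycle and a swipe detector in a real app.)
    const LEARNED_KEY = 'learned.twoFingerSwipe';

    function maybeCoachTwoFingerSwipe(): void {
      if (localStorage.getItem(LEARNED_KEY) === 'yes') return;       // already absorbed into muscle memory
      showGestureHint('Swipe left with two fingers to jump ahead');  // animated demo, shown in the moment it matters
    }

    function onTwoFingerSwipePerformed(): void {
      localStorage.setItem(LEARNED_KEY, 'yes');   // practice succeeded; stop coaching
      hideGestureHint();
    }

    function showGestureHint(message: string) { /* hypothetical overlay animation */ }
    function hideGestureHint() { /* hypothetical */ }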

How do you see computer interaction evolving?

Josh Clark: It's a really exciting time for interaction design because so many new technologies are becoming mature and affordable. Touch got there a few years ago. Speech is just now arriving. Computer vision with face recognition and gesture recognition like Kinect are coming along. So, we have all these areas where computers are learning to understand our particularly human forms of communication.

In the past, we had to learn to act and think like the machine. At the command line, we had to write in the computer's language, not our own. The desktop graphical user interface was a big step forward in making things more humane through visuals, but it was still oriented around how computers saw the world, not humans. When you consider the additions of touch, speech, facial expression, and physical gesture, you have nearly the whole range of human (and humane) communication tools. As computers learn the subtleties of those expressions, our interfaces can become more human and more intuitive, too.

Touchscreens are leading this charge for now, but touch isn't appropriate in every context. Speech is obviously great for the car, for walking, for any context where you need your eyes elsewhere. We're going to see interfaces that use these different modes of communication in context-appropriate combinations. But that means we have to start thinking hard about how our content works in all these different contexts. So many are struggling just to figure out how to make the content adapt to a smaller screen. How about how your content sounds when spoken? How about when it can be touched, or how it should respond to physical gestures or facial expressions? There's lots of work ahead.
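
As one small illustration of the "how does your content sound" question, here is a sketch that hands the same article text to the browser's built-in speech synthesis, assuming a page whose main content lives in an <article> element:

    // Sketch: let the same content be heard rather than read.
    const article = document.querySelector('article');
    if (article && 'speechSynthesis' in window) {
      const utterance = new SpeechSynthesisUtterance(article.textContent ?? '');
      utterance.rate = 1.0;                     // normal speaking rate
      window.speechSynthesis.speak(utterance);
    }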

Are Google's rumored heads-up-display glasses a sign of things to come?

Josh Clark: I'm sure that all kinds of new displays will have a role in the digital future. I'm not especially clever about figuring out which technology will be a huge hit. If someone had told me five years ago that the immediate future would be all about a glass phone with no buttons, I'd have said they were nuts. I think both software and context and, above all, human empathy make the difference in how and when a hardware technology becomes truly useful. The stuff I've seen of the heads-up-display glasses seems a bit awkward and unnatural. The twitchy way you have to move your head to navigate the screen seems to ask you to behave a little robot-like. I think trends and expectations are moving in an opposite direction — technology that adapts to human means of expression, not humans adapting to technology.

This interview was edited and condensed.

September 28 2011

Spoiler alert: The mouse dies. Touch and gesture take center stage

The moment that sealed the future of human-computer interaction (HCI) for me happened just a few months ago. I was driving my car, carrying a few friends and their children. One child, an 8-year-old, pointed to the small LCD screen on the dashboard and asked me whether the settings were controlled by touching the screen. They were not. The settings were controlled by a rotary button nowhere near the screen. It was placed conveniently between the driver and passenger seats. An obvious location in a car built at the tail end of an era when humans most frequently interacted with technology through physical switches and levers.

The screen could certainly have been one controlled by touch, and it is likely a safe bet that a newer model of my car has that very feature. However, what was more noteworthy was the fact that this child was assuming the settings could be changed simply by passing a finger over an icon on the screen. My epiphany: for this child's generation, a rotary button was simply old school.

This child is growing up in an environment where people are increasingly interacting with devices by touching screens. Smartphones and tablets are certainly significant innovations in areas such as mobility and convenience. But these devices are also ushering in an era that shifts everyone's expectations of how we engage with technology. Children raised in a world where technology will be pervasive will touch surfaces, make gestures, or simply show up in order for systems to respond to their needs.

This means we must rethink how we build software, implement hardware, and design interfaces. If you are in any of the professions or businesses related to these activities, there are significant opportunities, challenges and retooling needs ahead.

It also means the days of the mouse are probably numbered. Long live the mouse.

The good old days of the mouse and keyboard

Probably like most of you, I have never formally learned to type, but I have been typing since I was very young, and I can pound out quite a few words per minute. I started on an electric typewriter that belonged to my dad. When my oldest brother brought home our first computer, a Commodore VIC-20, my transition was seamless. Within weeks, I was impressing relatives by writing small software programs that did little more than change the color of the screen or make a sound when the spacebar was pressed.

Later, my brother brought home the first Apple Macintosh. This blew me away. For the first time I could create pictures using a mouse and icons. I thought it was magical that I could click on an icon and then click on the canvas, hold the mouse button down, and pull downward and to the right to create a box shape.

Imagine my disappointment when I arrived in college and we began to learn a spreadsheet program using complex keyboard combinations.

Fortunately, when I joined the workforce, Microsoft Windows 3.1 was beginning to roll out in earnest.

The prospect of the demise of the mouse may be disturbing to many, not least to me. To this day, even with my laptop, if I want to be the most productive, I will plug in a wireless mouse. It is how I work best. Or at least, it is currently the most effective way for me.

For most of us, we have grown up using a mouse and a keyboard to interact with computers. It has been this way for a long time, and we have probably assumed it would continue to be that way. However, while the keyboard probably has considerable life left in it, the mouse is likely dead.

Fortunately, while the trend suggests mouse extinction, we can momentarily relax, as it is not imminent.

But what about voice?

From science fiction to futurist projections, it has always been assumed that the future of human-computer interaction would largely be driven by using our voices. Movies over decades have reinforced this image, and it has seemed quite plausible. We were more likely to see a door open via voice than via a wave. After all, it appears to be the most intuitive and requires the least amount of effort.

Today, voice recognition software has come a long way. For example, accuracy and performance when dictating to a computer are quite remarkable. If you have broken your arms, this can be a highly efficient way to get things done on a computer. But despite having some success and filling important niches, broad-based voice interaction has simply not prospered.

It may be that a world in which we control and communicate with technology via voice is yet to come, but my guess is that it will likely complement other forms of interaction instead of being the dominant method.

There are other ways we may interact, too, such as via eye-control and direct brain interaction, but these technologies remain largely in the lab, niche-based, or currently out of reach for general use.

The future of HCI belongs to touch and gesture

Apple's Magic Trackpad

It is a joy to watch how people use their touch-enabled devices. Flicking through emails and songs seems so natural, as does expanding pictures by using an outward pinching gesture. Ever seen how quickly someone — particularly a child — intuitively gets the interface the first time they use touch? I have yet to meet someone who says they hate touch. Moreover, we are more likely to hear people say just how much they enjoy the ease of use. Touch (and multi-touch) has unleashed innovation and enabled completely new use cases for applications, utilities and gaming.

While not yet as pervasive, gesture-based computing (in the sense of computers interpreting body movements or emotions) is beginning to emerge in the mainstream. Anyone who has ever used Microsoft Kinect will be able to vouch for how compelling an experience it is. The technology responds adequately when we jump or duck. It recognizes us. It appears to have eyes, and gestures matter.

And let us not forget, too, that this is version 1.0.

The movie "Minority Report" teased us about a possible gesture-based future: the ability to manipulate images of objects in mid air, to pile documents in a virtual heap, and to cast aside less useful information. Today many of us can experience its early potential. Now imagine that technology embedded in the world around us.

The future isn't what it used to be

My bet is that in a world of increasingly pervasive technology, humans will interact with devices via touch and gestures — whether they are in your home or car, the supermarket, your workplace, the gym, a cockpit, or carried on your person. When we see a screen with options, we will expect to control those options by touch. Where it makes sense, we will use a specific gesture to elicit a response from some device, such as (dare I say it) a robot! And, yes, at times we may even use voice. However, to me, voice in combination with other behaviors is more obvious than voice alone.

But this is not some vision of a distant future. In my view, the touch and gesture era is right ahead of us.

What you can do now

Many programmers and designers are responding to the unique needs of touch-enabled devices. They know, for example, that a paradigm of drop-down menus and double-clicks is probably the wrong set of conventions to use in this new world of swipes and pinches. After all, millions of people are already downloading millions of applications for their haptic-ready smartphones and tablets (and as the drumbeat of consumerization continues, they will also want their enterprise applications to work this way, too). But viewing the future through too narrow a lens would be an error. Touch and gesture-based computing forces us to rethink interactivity and technology design on a whole new scale.
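
A modest first step in that rethinking, sketched under assumptions (today's browser media queries and a hypothetical enableTouchLayout() switch), is simply detecting whether a finger, rather than a mouse, is the primary pointer and adjusting conventions accordingly:

    // Sketch: prefer touch-friendly conventions when the primary pointer is a finger.
    const touchFirst = window.matchMedia('(pointer: coarse)').matches;
    if (touchFirst) {
      enableTouchLayout();   // hypothetical: larger targets, no hover menus, no double-clicks
    }

    function enableTouchLayout() { /* hypothetical layout switch */ }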

How might you design a solution if you knew your users would exclusively interact with it via touch and gesture, and that it might also need to be accessed in a variety of contexts and on a multitude of form factors?

At a minimum, it will bring software developers even closer to graphical interface designers and vice versa. Sometimes the skillsets will blur, and often they will be one and the same.

If you are an IT leader, your mobile strategy will need to include how your applications must change to accommodate the new ways your users will interact with devices. You will also need to consider new talent to take on these new needs.

The need for great interface design will increase, and there will likely be job growth in this area. In addition, as our world becomes increasingly run by and dependent upon software, technology architects and engineers will remain in high demand.

Touch and gesture-based computing are yet more ways in which innovation does not let us rest. They make the pace of change, already on an accelerated trajectory, even more relentless. But the promise is the reward. New ways to engage with technology enable novel ways to use it to enhance our lives. Simplifying the interface opens up technology so it becomes even more accessible, lowering the complexity level and allowing more people to participate and benefit from its value.

Those who read my blog know that I believe we are in a golden age of technology and innovation. It is only going to get more interesting in the months and years ahead.

Are you ready? I know there's a whole new generation that certainly is!

Photo (top): Mouse Macro by orangeacid, on Flickr

Android Open, being held October 9-11 in San Francisco, is a big-tent meeting ground for app and game developers, carriers, chip manufacturers, content creators, OEMs, researchers, entrepreneurs, VCs, and business leaders.

Save 20% on registration with the code AN11RAD

