
March 07 2012

Buttons were an inspired UI hack, but now we've got better options

If you've ever seen a child interact with an iPad, you've seen the power of the touch interface in action. Is this a sign of what's to come — will we be touching and swiping screens rather than tapping buttons? I reached out to Josh Clark (@globalmoxie), founder of Global Moxie and author of "Tapworthy," to get his thoughts on the future of touch and computer interaction, and on whether buttons face extinction.

Clark says a touch-based UI is more intuitive to the way we think and act in the world. He also says touch is just the beginning — speech, facial expression, and physical gestures are on their way, and we need to start thinking about content in these contexts.

Clark will expand on these ideas at Mini TOC Austin on March 9 in Austin, Texas.

Our interview follows.

Are we close to seeing the end of buttons?

Josh Clark: I frequently say that buttons are a hack, and people sometimes take that the wrong way. I don't mean it in a particularly negative way. I think buttons are an inspired hack, a workaround that we've needed just to get stuff done. That's true in the real world as well as the virtual: A light switch over here to turn on a light over there isn't especially intuitive, and it's something that has to be learned, and re-learned for every room we walk into. That light switch introduces a middle man, a layer of separation between the action and the thing you really want to work on, which is the light. The switch is a hack, but a brilliant one because it's just not practical to climb up a ladder in a dark room to screw in the light bulb.

Buttons in interfaces are a similar kind of hack — an abstraction we've needed to make the desktop interface work for 30 years. The cursor, the mouse, buttons, tabs, menus ... these are all prosthetics we've been using to wrangle content and information.

With touchscreen interfaces, though, designers can create the illusion of acting on information and content directly, manipulating it like a physical object that you can touch and stretch and drag and nudge. Those interactions tickle our brains in different ways from how traditional interfaces work because we don't have to process that middle layer of UI conventions. We can just touch the content directly in many cases. It's a great way to help cut through complexity.

The result is so much more intuitive, so much more natural to the way we think and act in the world. The proof is how quickly people with no computing experience — toddlers and seniors, for example — take to the iPad. They're actually better with these interfaces than the rest of us because they aren't poisoned by 30 years of desktop interface conventions. Follow the toddlers; they're better at it than we are.

So, yes, in some contexts, buttons and other administrative debris of the traditional interface have run their course. But buttons remain useful in some contexts, especially for more abstract tasks that aren't easily represented physically. The keyboard is a great example, as are actions like "send to Twitter," which don't have readily obvious physical components. And just as important, buttons are labeled with clear calls to action. As we turn the corner into popularizing touch interactions, buttons will still have a place.

Mini TOC Austin — On March 9, 2012, right before SXSW, O'Reilly Tools of Change presents Mini TOC Austin, a one-day event focusing on Austin's thriving publishing, tech, and bookish-arts community.

Register to attend Mini TOC Austin

What kinds of issues do touch- and gesture-oriented interfaces present?

Josh Clark: There are issues for both designers and users. In general, if a touchscreen element looks or behaves like a physical object, people will try to interact with it like one. If your interface looks like a book, people will try to turn its pages. For centuries, designers have dressed up their designs to look like physical objects, but that's always just been eye candy in the past. With touch, users approach those designs very differently; they're promises about how the interface works. So designers have to be careful to deliver on those promises. Don't make your interface look like a book, for example, if it really works through desktop-like buttons. (I'm looking at you, Contacts app for iPad.)

So, you can create really intuitive interfaces by making them look or behave like physical objects. That doesn't mean that everything has to look just like a real-world object. Windows Phone and the forthcoming Windows 8 interface, for example, use a very flat tile-like metaphor. It doesn't look like a 3-D gadget or artifact, but it does behave with real-world physics. It's easy to figure out how to slide and work the content on the screen; people pick that up really quickly.

The next hurdle — and the big opportunity for touch interfaces — is moving to more abstract gestures: two- and three-finger swipes, a full-hand pinch, and so on. In those cases, gestures become the keyboard shortcuts of touch and begin to let you create applications that you play more than you use, almost like an instrument. But wait, here I am talking about abstract gestures; didn't I just say that abstractions — like buttons — are less than ideal? Well, yeah, the trouble is you don't want to have the overhead of processing an interface, of thinking through how it works. The thing about physical abstractions (like gestures) versus visual abstractions (like buttons) is that physical actions can be absorbed into muscle memory. That kind of subconscious knowledge is actually much faster than visual processing — it's why touch typists are so much faster than people who visually peck at the keys. So, once you learn and absorb those physical actions — a two-finger swipe always does this or that — then you can actually move really quickly through an interface in the same way a pianist or a typist moves through a keyboard. Intent fluidly translated to action.
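To make the idea of an abstract gesture concrete, here is a minimal sketch of a two-finger swipe detector using the standard browser TouchEvent API. The threshold values and the `onTwoFingerSwipe` handler are illustrative assumptions, not anything Clark describes:

```typescript
// Minimal two-finger horizontal swipe detector (browser TouchEvent API).
// Threshold values are illustrative; real apps tune them per device.
const SWIPE_MIN_DISTANCE = 80; // px of horizontal travel required
const SWIPE_MAX_DRIFT = 60;    // px of vertical drift tolerated

let startX = 0;
let startY = 0;
let tracking = false;

// Hypothetical handler: bind the gesture to whatever shortcut you like.
function onTwoFingerSwipe(direction: "left" | "right"): void {
  console.log(`two-finger swipe ${direction}`);
}

document.addEventListener("touchstart", (e: TouchEvent) => {
  if (e.touches.length === 2) {
    // Track the midpoint of the two fingers.
    startX = (e.touches[0].clientX + e.touches[1].clientX) / 2;
    startY = (e.touches[0].clientY + e.touches[1].clientY) / 2;
    tracking = true;
  } else {
    tracking = false; // any other finger count cancels the gesture
  }
});

document.addEventListener("touchmove", (e: TouchEvent) => {
  if (!tracking || e.touches.length !== 2) return;
  const dx = (e.touches[0].clientX + e.touches[1].clientX) / 2 - startX;
  const dy = (e.touches[0].clientY + e.touches[1].clientY) / 2 - startY;
  if (Math.abs(dx) >= SWIPE_MIN_DISTANCE && Math.abs(dy) <= SWIPE_MAX_DRIFT) {
    tracking = false; // fire once per gesture
    onTwoFingerSwipe(dx > 0 ? "right" : "left");
  }
});
```

Once a shortcut like this settles into muscle memory, it fires without the visual scanning a labeled button requires — the speed advantage Clark is pointing at.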

But how do you teach that stuff? Swiping a card, pinching a map, or tapping a photo are all based on actions we know from the physical world. A two-finger swipe, though, has no prior meaning; it's not something we'll guess. Gestures are invisible and unlabeled, so they have to be taught.

Screenshot from Apple's built-in trackpad tutorial.

In what ways can UI design alleviate these learning issues?

Josh Clark: Designers should approach this by thinking through how we learn any physical action in the real world: observation of visual cues, demonstration, and practice. Too often, designers fall back on instruction manuals (iPad apps that open with a big screen of gesture diagrams) or screencasts. Neither is very effective.

Instead, designers have to do a better job of coaching people in context, showing our audiences how and when to use a gesture in the moment. More of us need to study video game design because games are great at this. In so many video games, you're dropped into a world where you don't even know what your goal is, let alone what you're capable of or what obstacles you might encounter. The game rides along with you, tracking your progress, taking note of what you've encountered and what you haven't, and giving in-context instruction, tips, and demonstrations as you go. That's what more apps and websites should do. Don't wait for people to somehow find a hidden gesture shortcut; tell them about it when they need it. Show an animation of the gesture and wait for them to copy it. Demonstration and practice — that's how we learn all physical actions, from playing an instrument to learning a tennis serve.
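A minimal sketch of that coach-in-context pattern, assuming a hypothetical `showGestureHint` overlay helper and a localStorage flag to record that the user has performed the gesture (all names invented for illustration):

```typescript
// In-context gesture coaching: surface a hint only where the gesture is
// useful, and retire it once the user has actually performed the gesture.
// showGestureHint and the screen hooks are hypothetical names.

const LEARNED_KEY = "learned.twoFingerSwipe";

function showGestureHint(message: string): void {
  // A real app would play a short animation demonstrating the gesture.
  console.log(`HINT: ${message}`);
}

function onArchiveScreenShown(): void {
  // Coach in the moment, not in an up-front manual.
  if (localStorage.getItem(LEARNED_KEY) !== "true") {
    showGestureHint("Swipe left with two fingers to archive this item.");
  }
}

function onTwoFingerSwipePerformed(): void {
  // Practice completes the lesson: once copied, the hint never returns.
  localStorage.setItem(LEARNED_KEY, "true");
}
```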

How do you see computer interaction evolving?

Josh Clark: It's a really exciting time for interaction design because so many new technologies are becoming mature and affordable. Touch got there a few years ago. Speech is just now arriving. Computer vision, with face recognition and gesture recognition like Kinect's, is coming along. So, we have all these areas where computers are learning to understand our particularly human forms of communication.

In the past, we had to learn to act and think like the machine. At the command line, we had to write in the computer's language, not our own. The desktop graphical user interface was a big step forward in making things more humane through visuals, but it was still oriented around how computers saw the world, not humans. When you consider the additions of touch, speech, facial expression, and physical gesture, you have nearly the whole range of human (and humane) communication tools. As computers learn the subtleties of those expressions, our interfaces can become more human and more intuitive, too.

Touchscreens are leading this charge for now, but touch isn't appropriate in every context. Speech is obviously great for the car, for walking, for any context where you need your eyes elsewhere. We're going to see interfaces that use these different modes of communication in context-appropriate combinations. But that means we have to start thinking hard about how our content works in all these different contexts. So many of us are struggling just to figure out how to make content adapt to a smaller screen. How about how your content sounds when spoken? How about when it can be touched, or how it should respond to physical gestures or facial expressions? There's lots of work ahead.
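As one small illustration of the same content working across contexts, here is a sketch that renders an article to the screen or speaks it with the browser's speechSynthesis API (a Web API that arrived after this interview, used purely as an example; the `Article` shape is an assumption):

```typescript
// Same content, two contexts: render it to the screen, or speak it aloud
// with the browser's speechSynthesis API. The Article shape is assumed.

interface Article {
  title: string;
  body: string;
}

function renderToScreen(article: Article, container: HTMLElement): void {
  const heading = document.createElement("h2");
  heading.textContent = article.title;
  const text = document.createElement("p");
  text.textContent = article.body;
  container.append(heading, text);
}

function speakAloud(article: Article): void {
  // Spoken content needs different pacing than visual content:
  // announce the title, then read the body, slightly slower than default.
  const utterance = new SpeechSynthesisUtterance(
    `${article.title}. ${article.body}`
  );
  utterance.rate = 0.95;
  speechSynthesis.speak(utterance);
}
```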

Are Google's rumored heads-up-display glasses a sign of things to come?

Josh Clark: I'm sure that all kinds of new displays will have a role in the digital future. I'm not especially clever about figuring out which technology will be a huge hit. If someone had told me five years ago that the immediate future would be all about a glass phone with no buttons, I'd have said they were nuts. I think software, context, and, above all, human empathy make the difference in how and when a hardware technology becomes truly useful. The stuff I've seen of the heads-up-display glasses seems a bit awkward and unnatural. The twitchy way you have to move your head to navigate the screen asks you to behave a little like a robot. I think trends and expectations are moving in the opposite direction — technology that adapts to human means of expression, not humans adapting to technology.

This interview was edited and condensed.


March 24 2011

UI is becoming an "embodied" model

The digital use case used to focus on a single person huddled over a keyboard, but now designers must understand how different interfaces and technologies influence a variety of user experiences. A single set of wireframes just won't cut it anymore.

In the following interview, Christian Crumlish (@mediajunkie), director of consumer experience at AOL and a speaker at the upcoming Web 2.0 Expo, discusses the unique challenges and opportunities designers now face.


What are the most important aspects of current UI research?

Christian Crumlish: The most interesting area right now is the intersection of the mobile, social, and real-time aspects with the physical, gestural, kinetic, and haptic interfaces. User interface is moving from the periphery of our senses — aside from a heavy reliance on the visual — to a much more embodied model where we will be able to use the full somatic sensory apparatus of our bodies to interact with systems and networks.

When you combine that with ubiquity, location-awareness, and a portable social graph, you're starting to meld the virtual and the physical. That should lead to all kinds of breakthroughs that are hard to picture in detail from our vantage today.

What challenges does cloud computing create for UI?

Christian Crumlish: The challenges largely go beyond the UI level and tend to be most interesting when talking about the broader user experience. Working with the cloud raises challenges around syncing, caching, and continuity across multiple modes and entry points. More and more, designers in this space need to look at the holistic experience and then explore how best to express it in various contexts, with different mediating devices.
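One way to picture those syncing and continuity challenges is a local write-through cache that queues changes made offline and replays them when connectivity returns. A hedged sketch follows; the `/sync` endpoint and the `Change` shape are invented for illustration:

```typescript
// Offline-tolerant sync sketch: writes land in a local cache first, then
// replay to the cloud when connectivity allows. The /sync endpoint and
// Change shape are invented for illustration.

type Change = { key: string; value: string; timestamp: number };

const localCache = new Map<string, string>();
const pendingChanges: Change[] = [];

async function pushToCloud(change: Change): Promise<void> {
  await fetch("/sync", { method: "POST", body: JSON.stringify(change) });
}

function write(key: string, value: string): void {
  // The UI reads from the cache immediately, so the experience stays
  // continuous even when the network does not cooperate.
  localCache.set(key, value);
  pendingChanges.push({ key, value, timestamp: Date.now() });
}

async function flushPending(): Promise<void> {
  while (pendingChanges.length > 0) {
    try {
      await pushToCloud(pendingChanges[0]);
      pendingChanges.shift(); // discard only once the cloud has it
    } catch {
      break; // still offline; retry on the next flush
    }
  }
}

// Replay queued changes whenever connectivity returns.
window.addEventListener("online", () => void flushPending());
```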

Web 2.0 Expo San Francisco 2011, being held March 28-31, will examine key pieces of the digital economy and the ways you can use important ideas for your own success.

Save 20% on registration with the code WEBSF11RAD



Your Web 2.0 Expo session has an unusual title: "Start Using UX as a Weapon." How can a "weaponized" user experience be put to work?


Christian Crumlish: UX can be used as a weapon in several ways:

  • Differentiating a product from its commodified, ho-hum competitors.
  • Allowing the design/development team to maintain focus on the experiences of real people using the product.
  • Incorporating the problem-solving, ideation, visualizing, communication, and innovation techniques of the design tradition.
  • Integrating disparate aspects of a complex experience.



What sites, apps, or platforms do you find to have the most impressive or effective interfaces?


Christian Crumlish: I think the wider realm of UX writ large offers much more potential for breakthrough experiences than a focus on the details of specific user interfaces. That's not to say that an elegant design and a well-laid-out screen aren't still a huge part of making a great experience.

That said, some of the sites, apps, and platforms we love that have distinguished themselves by providing great user experiences include Dropbox, Hipmunk, Mint, Tumblr, Feedly, Etsy, and Zappos.

This interview was edited and condensed.





December 10 2010

Video pick: The inevitable merging of Kinect and "Minority Report"

Combine Kinect hacks with the creativity of MIT and this is what you get: The first baby steps toward "Minority Report"-inspired interfaces.

The only downside: The required gestures are quite grand, so muscle fatigue is going to be a factor for a while.

Hat tip: Engadget

Note: This is the first entry in a semi-regular series highlighting tech-centric videos that are excellent, intriguing, or thought provoking. Suggestions are always welcome.







December 03 2010

7 areas beyond gaming where Kinect could play a role

Kinect for Xbox 360. Image courtesy Microsoft/Xbox press kit.

I recently had the opportunity to put Microsoft's Kinect to the test. While the device may prove to be a financial success (it seems well on its way), my takeaway was all about sense, not dollars.


For the first time in my adult life, I played a video game with one of my parents and we both enjoyed the experience. The parent in question was able to interact with the set-top box without navigating a dozen different buttons on complicated controllers. Some of my younger relatives took to the interface like otters learning to swim.

Kinect, at this stage, isn't perfect. As the Wall Street Journal and Engadget noted in reviews, a controller is still needed to access certain menus or functions, and accuracy-focused tasks that involve manipulating objects don't work all that well.

Glitches aside, after using Kinect it's clear to me that the device's full potential isn't bound only to games. What caught and held my attention was the device's gestural user interface, or what Microsoft research scientist Craig Mundie more specifically described to Computerworld as Kinect's "natural user interface."

Joe Sinicki summarized Kinect's broader influence in his review: "The main draw of Kinect is not what it does now, but what developers may be able to do with it in the future." Given OpenKinect, a set of open source drivers that unlocks the device's potential, the developer opportunities are considerable.

With that as a backdrop, here are a few ways Kinect could leap beyond its gaming applications.

1. Health and medicine

Chris Niehaus, director of U.S. public sector innovation at Microsoft, blogged at length about Kinect applications in telemedicine, neurological processes, physical therapy, and medical training. While Niehaus has a strong motive to highlight the best of Kinect, his post explores the concept well.

R.O.G.E.R. shows how Kinect can be applied to post-stroke patients. For a sense of how motion gaming has already made an impact in this direction, recall last year's stories about veterans undergoing "Wii-hab," or the cybertherapy pilots underway in various militaries.

2. Special needs children and adults

Look no further than the story of Kinect and John Yan's four-year-old autistic son for a real-world example of how a different interface can open up new worlds. Research into computerized gaming and autism already reveals the potential for mirroring applications to improve facial recognition. Kinect hacks might take this further.

3. Exercise

This one has no technical angle at all. It may be a bit too hopeful to think Kinect, Wii, and PlayStation Move will get overweight children moving. But given the rates of childhood obesity, moving kids from sedentary to active would be a clear win for everything but the living room furniture. That benefit might even extend to parents and seniors.

4. Education

Educational technology has grown in recent years through electronic publishing, learning management systems, and new models for virtual degrees earned online. Will Kinect's video and interface allow for distributed teaching? That's admittedly speculative, but education is an area to watch.

5. Participatory art

This interactive, Kinect-based puppet prototype is just the beginning. As the Kinect platform matures, groups of people dancing in public and private spaces could explore new art forms that blend virtual and physical spaces. Flash mobs could get a lot more distributed.

6. Advertising and e-commerce

The Kinect can already detect multiple people in its field of view. The Xbox 360 already enables users to watch live and recorded sports on ESPN. As more intelligence is built into the platform, combining those two capacities could go far beyond changing channels. It could lead to recognizing different users and profiles, and to a more interactive watching experience. Early integrations with Twitter and Facebook are a start, but they don't lend themselves to fully socializing the experience. Neither does Microsoft's Live community, though it's not hard to imagine that feature evolving into a full-fledged social network.

7. Navigating the web / exploring digital spaces

If you've seen "Minority Report" or watched video of Oblong Industry's spatial operating system, you're already familiar with the concept of gestural interfaces. Kinect could be an initial step toward making that concept a reality. In fact, a team from MIT's Media Lab has already developed a Kinect hack that allows for gesture-driven web surfing.
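Mechanically, a hack like that boils down to mapping tracked hand positions onto scroll commands. Here is a schematic sketch; the `HandFrame` input is a hypothetical stand-in for the skeleton data a Kinect driver such as OpenKinect would emit:

```typescript
// Schematic gesture-to-scroll mapping: vertical hand movement drives page
// scrolling. HandFrame is a hypothetical stand-in for the skeleton data a
// Kinect driver would emit.

interface HandFrame {
  y: number;        // normalized hand height, 0 (bottom) to 1 (top)
  tracked: boolean; // false when the hand leaves the sensor's view
}

const SCROLL_GAIN = 1500; // px of scroll per unit of hand travel
let lastY: number | null = null;

function onHandFrame(frame: HandFrame): void {
  if (!frame.tracked) {
    lastY = null; // losing the hand resets the gesture
    return;
  }
  if (lastY !== null) {
    // Raising the hand scrolls up; lowering it scrolls down.
    window.scrollBy({ top: (lastY - frame.y) * SCROLL_GAIN });
  }
  lastY = frame.y;
}
```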




February 10 2010

Why does Facebook keep redesigning?

Facebook is rolling out what seems like its 30th redesign. Judging by the reorganization in the site's latest look, the user backlash should be on par with past overhauls.

I asked E. A. Vander Veer, author of "Facebook: The Missing Manual," for her take on the frequency of Facebook redesigns. She also weighed in on Facebook's dominant position and the willingness of its users to "retrain" themselves.

Mac Slocum: From what I can tell, Facebook has rolled out an annual redesign since 2006. Is that a reasonable frequency?

E. A. Vander Veer: "Reasonable" is a hard one. Facebook is in line with most software upgrades, which are in the one-to-two-year range. The goal from the company's perspective is balancing what they want to accomplish -- in this case, making the site pay off -- with members' willingness to retrain themselves every several months.

M.S.: How big a factor is "retraining" in Facebook redesigns? The word itself doesn't have a positive connotation.

E. V.: I don't know anyone personally on the Facebook design team, and I don't have any Facebook moles. But I used to work in software development, and I can tell you that typically users aren't considered at all when it comes to software redesigns. I wouldn't have believed this if I hadn't seen it in action on countless projects in several different companies! The attitude is, "We're the experts, we know what you want and need, our redesign is making it better, and it won't take more than a few minutes for you to get up to speed."

Redesigns are typically driven by what the company wants -- for example, to showcase features they think will drive revenue. They're also driven by competitive factors; companies think they need to redesign every year or so because all the other companies are doing it and they don't want to risk looking old-fashioned.

M.S.: Each Facebook redesign has a backlash. Some of that is inevitable, but do you think Facebook is undermining itself with updates?

E. V.: Anecdotally, I know people who simply stopped using large portions of the site -- and even stopped logging in altogether -- after the last redesign because they couldn't figure out where anything was. They were cheesed that they had to go hunting. But overall, the number of registered members has continued to rise, and by some estimates the average site usage among under-25 members is a few hours a day. Meaning: redesigns aren't putting off a lot of folks.

M.S.: Do you think Facebook's position and its continued growth give it leeway with redesigns?

E. V.: Absolutely. They're riding high right now, and there currently isn't any alternative site positioned as a close competitor. And, honestly, most people will retrain themselves. Facebook has a significant user base that cut its teeth on computers -- folks who think it's normal to have to deal with a new interface to perform essentially the same tasks over and over again.

M.S.: Looking over the latest redesign, does anything jump out at you?

E. V.: The motivation for the redesign is unquestionably in line with the company's goal to monetize the site. The value of the site lies in members' interactions, and these features -- friend requests, messages, etc. -- have been grouped together and placed in the upper-left corner.

M.S.: Are Facebook's recent designs -- 2009's and this most recent one -- about expanding user engagement rather than attracting new users?

E. V.: The recent redesigns focus on two things: supporting users' real-world relationships and extending them online. So attracting new users is still a goal. Witness the new position of the "friend requests" feature [top left corner]. But what Facebook really wants is to make it easy to get people to categorize and describe themselves -- by "liking" certain posts, fanning certain pages, etc. -- and to interact heavily with each other. These are the keys to monetizing the site.

M.S.: Given the sheer amount of time users put into Facebook, do you think the user base can claim a de facto ownership stake in the service? I know that would never hold up legally, but I wonder if that lies at the heart of the anti-redesign criticism that flares up.

E. V.: Yes, absolutely. The last couple of generations, in particular, think that using something -- rather than creating something -- confers ownership. Witness the attitude toward the ripping of music and movies: kids talk about "our" music while expressing outrage when the actual musicians attempt to control their work! This attitude is understandable in a culture where passively consuming media from birth is the norm and less emphasis is placed on personal ownership and creativity (less arts education, falling science and literacy scores, less funding and visibility for small businesses, etc.).

I liken a site redesign to waking up in the morning and finding that someone has moved my toaster and changed all the knobs and slots, so it takes me 10 minutes to figure out how to find and work it. Some people are going to deal with that once, and that will be it. They'll opt out. But a sizable user base (particularly under-30s) believes that joining an "I hate the new Facebook" group is both useful and meaningful. These users will grouse for a while, feel as though they've contributed to some sort of public discourse, and then continue to use the site as usual.

Note: This interview was condensed and edited.
