
December 13 2013

Architecture, design, and the connected environment

Just when it seems we’re starting to get our heads around the mobile revolution, another design challenge has risen up fiercer and larger right behind it: the Internet of Things. The rise in popularity of “wearables” and the growing activity around NFC and Bluetooth LE technologies are pushing the Internet of Things increasingly closer to the mainstream consumer market. Just as some challenges of mobile computing were pointedly addressed by responsive web design and adaptive content, we must carefully evaluate our approach to integration, implementation, and interface in this emerging context if we hope to see it become an enriching part of people’s daily lives (and not just another source of anger and frustration).

It is with this goal in mind that I would like to offer a series of posts as one starting point for a conversation about user interface design, user experience design, and information architecture for connected environments. I’ll begin by discussing the functional relationship between user interface design and information architecture, and by drawing out some implications of this relationship for user experience as a whole.

In follow-up posts, I’ll discuss the library science origins of information architecture as it has been traditionally practiced on the web, and situate this practice in the emerging climate of connected environments. Finally, I’ll wrap up the series by discussing the cognitive challenges that connected systems present and propose some specific measures we can take as designers to make these systems more pleasant, more intuitive, and more enriching to use.

Architecture and Design

Technology pioneer Kevin Ashton is widely credited with coining the term “The Internet of Things.” Ashton characterizes the core of the Internet of Things as the “RFID and sensor technology [that] enables computers to observe, identify, and understand the world — without the limitations of human-entered data.”

About the same time that Ashton gave a name to this emerging confluence of technologies, scholar N. Katherine Hayles noted in How We Became Posthuman that “in the future, the scarce commodity will be the human attention span.” In effect, collecting data is a technology problem that can be solved with efficiency and scale; making that mass of data meaningful to human beings (who evolve on a much different timeline) is an entirely different task.

The twist in this story? Both Ashton and Hayles were formulating these ideas circa 1999. Now, 14 years later, the future they identified is at hand. Bandwidth, processor speed, and memory will soon be up to the task of delivering the technical end of what has already been imagined, and much more. The challenge before us now as designers is in making sure that this future-turned-present world is not only technically possible, but also practically feasible — in short, we still need to solve the usability problem.

Fortunately, members of the forward guard in emerging technology have already sprung into action and have begun to outline the specific challenges presented by the connected environment. Designer and strategist Scott Jenson has written and spoken at length about the need for open APIs, flexible cloud solutions, and careful attention to the “pain/value” ratio. Designer and researcher Stephanie Rieger likewise has recently drawn our collective attention to advances in NFC, Android intent sharing, and behavior chaining that all work to tie disparate pieces of technology together.

These challenges, however, lie primarily on the “computer” side of the Human Computer Interaction (HCI) spectrum. As such, they give us only limited insight into how to best accommodate Hayles’ “scarce commodity” – the human attention span. By shifting our point of view from how machines interact with and create information to the way that humans interact with and consume information, we will be better equipped to make the connections necessary to create value for individuals. Understanding the relationship between architecture and design is an important first step in making this shift.

Image by Dan Klyn, used with permission.

Information Architect Dan Klyn explains the difference between architecture and design with a metaphor of tailoring: the architect determines where the cuts should go in the fabric, the designer then brings those pieces together to make the finished product the best it can be, “solving the problems defined in the act of cutting.”

Along the way, the designer may find that some cuts have been misplaced – and should be stitched back together or cut differently from a new piece. Likewise, the architect remains active and engaged in the design phase, making sure each piece fits together in a way that supports the intent of the whole.

The end result – be it a well-fitted pair of skinny jeans or a user interface – is a combination of each of these efforts. As Klyn puts it, the architect specializes in determining what must be built and in determining the overall structure of the finished product; the designer focuses on how to put that product together in a way that is compelling and effective within the constraints of a given context.

Once we make this distinction clear, it becomes equally clear that user interface design is a context-specific articulation of an underlying information architecture. It is this IA foundation that provides the direct connection to how human end users find value in content and functionality. The articulatory relationship between architecture and design creates consistency of experience across diverse platforms and works to communicate the underlying information model we’ve asked users to adopt.

Let’s look at an example. The early Evernote app had a very different look and feel on iOS and Android. On Android, it was a distinctly “Evernote-branded” experience. On iOS, on the other hand, it was designed to look more like a piece of the device operating system.

Evernote screenshots, Android (left) versus iOS.

Despite the fact that these apps are aesthetically different, their architectures are consistent across platforms. As a result, even though the controls are presented in different ways, in different places, and at different levels of granularity, moving between the apps is a cognitively seamless experience for users.
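The Evernote example can be sketched in code. Here is a minimal illustration (in Python; the structures and renderers are invented for this post, not drawn from Evernote’s actual implementation) of a single information architecture articulated by two platform-specific designs:

```python
# A platform-independent information architecture: the same conceptual model
# of notebooks and notes underlies every client app (hypothetical example).
ARCHITECTURE = {
    "notebook": {"contains": ["note"], "actions": ["create", "browse"]},
    "note": {"fields": ["title", "body", "tags"], "actions": ["edit", "share"]},
}

def render_android(model):
    # Android design: branded chrome, actions surfaced in an action bar.
    return {obj: {"surface": "action_bar", "actions": spec["actions"]}
            for obj, spec in model.items()}

def render_ios(model):
    # iOS design: native-feeling chrome, the same actions in a tab bar.
    return {obj: {"surface": "tab_bar", "actions": spec["actions"]}
            for obj, spec in model.items()}

# The presentations differ, but the underlying actions (the architecture)
# are identical, which is what makes switching platforms cognitively seamless.
android_ui, ios_ui = render_android(ARCHITECTURE), render_ios(ARCHITECTURE)
assert all(android_ui[o]["actions"] == ios_ui[o]["actions"] for o in ARCHITECTURE)
```

The point of the sketch is the asymmetry: the model is defined once, while each renderer is free to vary surface and placement to suit its platform’s conventions.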

In fact, apps that “look” the same across different platforms sometimes end up creating architectural inconsistencies that may ultimately confuse users. This is most easily seen in the case of “ported applications,” where iPhone user interfaces are carried over wholesale to Android devices. The result is usually a jumble of misplaced back buttons and errant tab bars that sends mixed messages about the behavior of native controls and patterns, and, in turn, about the information model we have proposed. The link between concept-rooted architecture and context-rooted design has been lost.

In the case of such ports, the full implication of the articulatory relationship between information architecture and user interface becomes clear. In these examples, we can see that information architecture always happens: either it happens by design or it happens by default. As designers, we sometimes fool ourselves into thinking that a particular app or website “doesn’t need IA,” but the reality is that information architecture is always present — it’s just that we might have specified it in a page layout instead of a taxonomy tool (and we might not have been paying attention when that happened).

Once we step back from the now familiar user interface design patterns of the last few years and examine the information architecture structures that inform them, we can begin to develop a greater awareness (and control) of how those structures are articulated across devices and contexts. We can also begin to cultivate the conditions necessary for that articulation to happen in terms that make sense to users in the context of new devices and systems, which subsequently increases our ability to capitalize on those devices’ and systems’ unique capabilities.

This basic distinction between architecture and design is not a new idea, but in the context of the Internet of Things, it does present architects and designers with a new set of challenges. In order to get a better sense of what has changed in this new context, it’s worth taking a closer look at how the traditional model of IA for the web works. This is the topic to which I’ll turn in my next post.

November 03 2013

Podcast: the Internet of Things should work like the Internet

At our OSCON conference this summer, Jon Bruner, Renee DiResta and I sat down with Alasdair Allan, a hardware hacker and O’Reilly author; Josh Marinacci, a researcher with Nokia; and Tony Santos, a user experience designer with Mozilla. Our discussion focused on the future of UI/UX design, from the perils of designing from the top down to declining diversity in washing machines to controlling your car from anywhere in the world.

Here are some highlights from our chat:

  • Alasdair’s Ignite talk on the bad design of UX in the Internet of Things: every widget, dial, and slider you add is a delayed design decision that you’re pushing onto the user. (1:55 mark)
  • Looking at startups working in the Internet of Things, design seems to be “pretty far down on the general level of importance.” Much of the innovation is happening on Kickstarter and is driven by hardware hackers, many of whom don’t have design experience — and products are often designed as ends in themselves, as opposed to parts of a connected ecosystem. “We’re not building an Internet of Things, we’re building a series of islands…we should be looking at systems.” (3:23)
  • Top-down approach in the Internet of Things isn’t going to work — large companies are developing proprietary systems of things designed to lock in users. (6:05)
  • Consumer inconvenience is becoming an issue — “We’re creating an experience where I have to buy seven things to track me…not to mention to track my house….at some point there’s going to be a tipping point where people say, ‘that’s enough!’” (8:30)
  • Anti-overload Internet of Things user experience — mini printers and bicycle barometers. “The Internet of Things has to work like the Internet.” (11:07)
  • Is the Internet of Things following suit with the declining diversity we’ve seen in the PC space? Apple has imposed a design quality onto other companies’ laptops…in the same vein, will we see a slowly declining diversity in washing machines, for instance? Or have we wrung out all the mechanical improvements that can possibly be made to a washing machine with current technology, opening the door for innovation through software enhancements? (17:30)
  • Tesla’s cars have a RESTful API; you can monitor and control the car from anywhere — “You can pop the sunroof while you’re doing 90 mph down the motorway, or worse than that, you can pop someone else’s sunroof.” Security holes are an increasing issue in the industrial Internet. (22:42)
  • “The whole point of the Internet of Things — and technology in general — should be to reduce the amount of friction in our lives.” (26:46)
  • Are we heading toward a world of zero devices, where the objects themselves are the interfaces? For the mainstream, it will depend on design and the speed of maturing technology. (30:53)
  • Sometimes the utility of a thing supersedes the poor user experience to become widely adopted — we’re looking at your history, WiFi. (33:19)
  • The speed of technology turnover is vastly increasing — in 100 years, platforms and services like Kickstarter will be viewed as major drivers of the second industrial revolution. “We’re really going back to the core of innovation — if you offer up an idea, there will be enough people out there who want that idea to build it and make it, and that will cause someone else to have another idea.” (34:08)
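The RESTful vehicle API mentioned above illustrates why the panel’s security worries are concrete: once every vehicle function is an addressable resource, anyone holding a valid URL and credential can act on it. Here is a minimal sketch of what such a client might look like; the base URL, endpoint paths, and parameters are entirely hypothetical, not Tesla’s actual API:

```python
# Hypothetical vehicle-control client in the style of a RESTful car API.
# Nothing here reflects a real vendor's endpoints; it is illustration only.
BASE_URL = "https://api.example-car.com/vehicles"

def command_url(vehicle_id, command):
    # Each vehicle function becomes an addressable resource: the sunroof,
    # the locks, the climate control all get their own command endpoints.
    return f"{BASE_URL}/{vehicle_id}/command/{command}"

def build_request(vehicle_id, command, token):
    # A real client would POST this with a bearer token over TLS. Without
    # strong per-vehicle authorization, "pop someone else's sunroof"
    # stops being a joke and becomes an exploit.
    return {
        "method": "POST",
        "url": command_url(vehicle_id, command),
        "headers": {"Authorization": f"Bearer {token}"},
    }
```

The design point is that REST makes a car exactly as reachable as a web page, so the access-control model, not the transport, carries the entire security burden.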

Subscribe to the O’Reilly Radar Podcast through iTunes, SoundCloud, or directly through our podcast’s RSS feed.

October 07 2013

Podcast: expanding our experience of interfaces and interaction

At our Sci Foo Camp this past summer, Jon Bruner, Jim Stogdill, Roger Magoulas, and I were joined by guests Amanda Parkes, a professor in the Department of Architecture at Columbia University, and CTO at algae biofuels company Bodega Algae and fashion technology company Skinteractive Studio; Ivan Poupyrev, principal research scientist at Disney Research, where he leads an interaction research team; and Hayes Raffle, an interaction designer at Google [X] working on Project Glass. Our discussion covered a wide range of topics, from scalable sensors to tactile design to synthetic biology to haptic design to why technology isn’t a threat but rather is essential for human survival.

Here are some highlights from our discussion:

  • The Botanicus Interacticus project from Disney research and the Touché sensor technology.
  • Poupyrev explains that the concept behind the Touché sensor is that we need to figure out how to make the entire world interactive, developing a single sensor that can be scalable to any situation — finding a universal solution that can adapt to multiple uses. That’s what Touché is, Poupyrev says: “a sensing technology that can dynamically adapt to multiple objects and can sense interaction with water, with everyday objects, with tables, with surfaces, the human body, plants, cats, birds, whatever you want.” (2:50 mark)
  • The ultimate goals of Google Glass and Touché, Raffle says, are similar in that they’re both trying to make computers disappear: Disney is putting the computation into the world so that it’s indistinguishable from the objects around us, while Glass aims to bring technology closer to you so that it almost fades into the background when you’re using it. (4:17 mark)
  • In a similar vein, Parkes is interested in bringing the interactivity of our surroundings — such as the overwhelming visual pollution in Times Square — back to a more natural, softer state — and is there any hope for the Times Square redesign? (5:56 mark)
  • Parkes’ discussion of her work prompted a nod from Stogdill to Design in Nature, by Adrian Bejan and J. Peder Zane.
  • We also discussed how the expansion in our experience of interfaces and interaction is enhancing our possibilities to be entertained as well as enhancing beauty, aesthetics and pleasure (9:15 mark); Poupyrev stressed as well that it’s the technology that makes humans who we are — technology isn’t a threat; it’s the most important thing for human survival. (13:28 mark)

Additional points of note include a discussion of Google Glass in the wild (14:19 mark); biocouture (19:15 mark); and Parkes’ recent experiment, feeding organic conductive ink to slime mold to see if it’ll produce conductive circuit traces: can we grow our own circuit boards? (17:50 mark). Also, research into haptic technology that creates tactile sensations in free air (23:58 mark) — have a look:

September 13 2013

Progressive reduction, Bret Victor rants, and Elon Musk is Tony Stark

Investigating emerging UI/UX tech in the design space is leading me to many interesting people, projects and innovative experiments. Here’s a brief selection of highlights I’ve come across in my research. Seen something interesting in UI/UX innovation? Please join the discussion in the comments section or reach out to me via email or on Twitter.

Getting users to interact with products in a particular way is hard. On one hand, you have experienced users who are ready for advanced features and interactions, but at the same time, you have new users who might be put off or confused by too much too soon. Think Interactive’s Alison McKenna took a look at a potential solution: Progressive Reduction — what if designers had a one-size-fits-all solution that allowed an interface to adapt to a user’s level of proficiency? She points to Allan Grinshtein’s seminal article, in which he describes how his company, LayerVault, implements Progressive Reduction and defines the concept: “The idea behind Progressive Reduction is simple: Usability is a moving target. A user’s understanding of your application improves over time and your application’s interface should adapt to your user.”
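The mechanics of Progressive Reduction are simple enough to sketch in a few lines. The following is a minimal illustration in the spirit of Grinshtein’s description; the thresholds, usage metric, and UI fields are invented for this example and are not LayerVault’s actual implementation:

```python
# A minimal sketch of Progressive Reduction: the same control sheds chrome
# as a user's proficiency grows. Thresholds here are arbitrary examples.
def button_style(uses):
    """Return progressively reduced presentation for a control,
    based on how many times this user has invoked it."""
    if uses < 10:
        # Beginner: full guidance (icon, text label, and tooltip).
        return {"icon": True, "label": True, "tooltip": True}
    elif uses < 50:
        # Intermediate: drop the label, keep the tooltip as a safety net.
        return {"icon": True, "label": False, "tooltip": True}
    # Expert: icon only; the interface has adapted to the user.
    return {"icon": True, "label": False, "tooltip": False}
```

Grinshtein’s “usability is a moving target” point also implies the reverse: a real implementation would decay the proficiency counter when a user has been away, so the interface re-expands rather than stranding a lapsed user at the expert level.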

Maybe the solution is to look outside the user altogether. San Francisco designer Mike Long wants designers to stop designing for “users” and instead design for activities rather than individuals:

“Activity-Centered Design (ACD) focuses on the activity context in which individuals interact with your product. Instead of analyzing specific goals and tasks, ACD focuses on the analysis of meaningful, goal-directed actions supported by tools and artifacts in a social world.”

Long outlines how companies or teams can create human activity diagrams to identify a product’s activity context.
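An activity inventory of the kind Long describes can be represented as plain data. The following sketch is loosely modeled on his ACD vocabulary (actions, tools, artifacts, social context); the example activity and the coverage check are hypothetical, not taken from his article:

```python
# A hypothetical Activity-Centered Design inventory: the unit of analysis
# is the activity context, not an individual "user."
activity = {
    "activity": "capturing a meeting",
    "actions": ["record audio", "jot key points", "photograph whiteboard"],
    "tools": ["phone microphone", "notes app", "camera"],
    "artifacts": ["audio file", "note", "photo"],
    "social_context": "shared with the project team afterward",
}

def coverage(activity, product_features):
    # Which of the activity's goal-directed actions does the product
    # actually support? Gaps here are design opportunities.
    return [a for a in activity["actions"] if a in product_features]
```

Framed this way, the human activity diagram becomes a checklist: a product is evaluated by how much of the activity it covers, not by how well it serves a persona.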

Perhaps we haven’t properly designed for the individual in the first place. Bret Victor produced a delicious rant on how “future interfaces” lack vision: they are all screens under glass, which he dubs “Pictures Under Glass.” Victor encourages designers to ditch the handheld devices that ignore our hands and embrace human capability:

“Hands do two things. They are two utterly amazing things, and you rely on them every moment of the day, and most Future Interaction Concepts completely ignore both of them.

“Hands feel things, and hands manipulate things. … We live in a three-dimensional world. Our hands are designed for moving and rotating objects in three dimensions, for picking up objects and placing them over, under, beside, and inside each other. No creature on earth has a dexterity that compares to ours.”

Victor suggests paying attention next time you make a sandwich — how your fingers and hands manipulate ingredients and utensils — and, comparing that to your experience interacting with Pictures Under Glass, asks: “Are we really going to accept an Interface Of The Future that is less expressive than a sandwich?” Go. Read.

On the academic front, there’s some amazing design research being done at RMIT University in Melbourne, Australia — centers and institutes tackling collaborative design innovation across disciplines, smart-technology approaches to complex questions about the future of cities, health and lifestyle innovation, and climate change and sustainability. Fair warning: do not enter if you don’t have time to lose yourself.

And some might have suspected this before, but it turns out Elon Musk is a real-life Tony Stark. John Koetsier reports at VentureBeat that Musk’s SpaceX engineers develop rockets “at least partially with Iron Man-style 3D immersive reality visualizations that designers and engineers can interact with in real time with natural hand gestures.” They then 3D print the designed components. This story came to me via Mac Slocum, who has awesome ideas on adapting the technology for finger-swipe editing and custom fist-pump publishing.

Here’s a video of Musk demonstrating the sensor and visualization design technology that’s “going to revolutionize design and manufacturing in the 21st century”:
