
December 13 2013

Architecture, design, and the connected environment

Just when it seems we’re starting to get our heads around the mobile revolution, another design challenge has risen up fiercer and larger right behind it: the Internet of Things. The rise in popularity of “wearables” and the growing activity around NFC and Bluetooth LE technologies are pushing the Internet of Things increasingly closer to the mainstream consumer market. Just as some challenges of mobile computing were pointedly addressed by responsive web design and adaptive content, we must carefully evaluate our approach to integration, implementation, and interface in this emerging context if we hope to see it become an enriching part of people’s daily lives (and not just another source of anger and frustration).

It is with this goal in mind that I would like to offer a series of posts as one starting point for a conversation about user interface design, user experience design, and information architecture for connected environments. I’ll begin by discussing the functional relationship between user interface design and information architecture, and by drawing out some implications of this relationship for user experience as a whole.

In follow-up posts, I’ll discuss the library science origins of information architecture as it has been traditionally practiced on the web, and situate this practice in the emerging climate of connected environments. Finally, I’ll wrap up the series by discussing the cognitive challenges that connected systems present and propose some specific measures we can take as designers to make these systems more pleasant, more intuitive, and more enriching to use.

Architecture and Design

Technology pioneer Kevin Ashton is widely credited with coining the term “The Internet of Things.” Ashton characterizes the core of the Internet of Things as the “RFID and sensor technology [that] enables computers to observe, identify, and understand the world — without the limitations of human-entered data.”

About the same time that Ashton gave a name to this emerging confluence of technologies, scholar N. Katherine Hayles noted in How We Became Posthuman that “in the future, the scarce commodity will be the human attention span.” In effect, collecting data is a technology problem that can be solved with efficiency and scale; making that mass of data meaningful to human beings (who evolve on a much different timeline) is an entirely different task.

The twist in this story? Both Ashton and Hayles were formulating these ideas circa 1999. Now, 14 years later, the future they identified is at hand. Bandwidth, processor speed, and memory will soon be up to the task of ensuring the technical end of what has already been imagined, and much more. The challenge before us now as designers is in making sure that this future-turned-present world is not only technically possible, but also practically feasible — in a word, we still need to solve the usability problem.

Fortunately, members of the forward guard in emerging technology have already sprung into action and have begun to outline the specific challenges presented by the connected environment. Designer and strategist Scott Jenson has written and spoken at length about the need for open APIs, flexible cloud solutions, and careful attention to the “pain/value” ratio. Designer and researcher Stephanie Rieger has likewise recently drawn our collective attention to advances in NFC, Android intent sharing, and behavior chaining, all of which work to tie disparate pieces of technology together.

These challenges, however, lie primarily on the “computer” side of the Human Computer Interaction (HCI) spectrum. As such, they give us only limited insight into how to best accommodate Hayles’ “scarce commodity” – the human attention span. By shifting our point of view from how machines interact with and create information to the way that humans interact with and consume information, we will be better equipped to make the connections necessary to create value for individuals. Understanding the relationship between architecture and design is an important first step in making this shift.

Image by Dan Klyn, used with permission.

Information Architect Dan Klyn explains the difference between architecture and design with a metaphor of tailoring: the architect determines where the cuts should go in the fabric, the designer then brings those pieces together to make the finished product the best it can be, “solving the problems defined in the act of cutting.”

Along the way, the designer may find that some cuts have been misplaced – and should be stitched back together or cut differently from a new piece. Likewise, the architect remains active and engaged in the design phase, making sure each piece fits together in a way that supports the intent of the whole.

The end result – be it a well-fitted pair of skinny jeans or a user interface – is a combination of each of these efforts. As Klyn puts it, the architect specializes in determining what must be built and in determining the overall structure of the finished product; the designer focuses on how to put that product together in a way that is compelling and effective within the constraints of a given context.

Once we make this distinction clear, it becomes equally clear that user interface design is a context-specific articulation of an underlying information architecture. It is this IA foundation that provides the direct connection to how human end users find value in content and functionality. The articulatory relationship between architecture and design creates consistency of experience across diverse platforms and works to communicate the underlying information model we’ve asked users to adopt.

Let’s look at an example. The early Evernote app had a very different look and feel on iOS and Android. On Android, it was a distinctly “Evernote-branded” experience. On iOS, on the other hand, it was designed to look more like a piece of the device operating system.


Evernote screenshots, Android (left) versus iOS.

Despite the fact that these apps are aesthetically different, their architectures are consistent across platforms. As a result, even though the controls are presented in different ways, in different places, and at different levels of granularity, moving between the apps is a cognitively seamless experience for users.

In fact, apps that “look” the same across different platforms sometimes end up creating architectural inconsistencies that may ultimately confuse users. This is most easily seen in the case of “ported applications,” where iPhone user interfaces are “ported over” wholesale to Android devices. The result is usually a jumble of misplaced back buttons and errant tab bars that muddle the behavior of native controls and patterns. This, in turn, sends a mixed message about the information model we have proposed. The link between concept-rooted architecture and context-rooted design has been lost.

In the case of such ports, the full implication of the articulatory relationship between information architecture and user interface becomes clear. In these examples, we can see that information architecture always happens: either it happens by design or it happens by default. As designers, we sometimes fool ourselves into thinking that a particular app or website “doesn’t need IA,” but the reality is that information architecture is always present — it’s just that we might have specified it in a page layout instead of a taxonomy tool (and we might not have been paying attention when that happened).

Once we step back from the now familiar user interface design patterns of the last few years and examine the information architecture structures that inform them, we can begin to develop a greater awareness (and control) of how those structures are articulated across devices and contexts. We can also begin to cultivate the conditions necessary for that articulation to happen in terms that make sense to users in the context of new devices and systems, which subsequently increases our ability to capitalize on those devices’ and systems’ unique capabilities.

This basic distinction between architecture and design is not a new idea, but in the context of the Internet of Things, it does present architects and designers with a new set of challenges. In order to get a better sense of what has changed in this new context, it’s worth taking a closer look at how the traditional model of IA for the web works. This is the topic to which I’ll turn in my next post.

July 24 2013

Four short links: 24 July 2013

  1. What to Look For in Software Dev (Pamela Fox) — It’s important to find a job where you get to work on a product you love or problems that challenge you, but it’s also important to find a job where you will be happy inside their codebase – where you won’t be afraid to make changes and where there’s a clear process for those changes.
  2. The Slippery Slope to Dark Patterns — demonstrates and deconstructs determinedly user-hostile pieces of software which deliberately break Nielsen’s usability heuristics to make users agree to things they rationally wouldn’t.
  3. Victory Lap for Ask Patents (Joel Spolsky) — story of how a StackExchange board on patents helped bust a bogus patent. It’s crowdsourcing the prior art, and Joel shows how easy it is.
  4. The World as Fire-Free Zone (MIT Technology Review) — data analysis to identify “signature” of terrorist behaviour, civilian deaths from strikes in territories the US has not declared war on, empty restrictions on use. Again, it’s a test that, by design, cannot be failed. Good history of UAVs in warfare and the blowback from their lax use. Quoting retired General Stanley McChrystal: The resentment caused by American use of unmanned strikes … is much greater than the average American appreciates. They are hated on a visceral level, even by people who’ve never seen one or seen the effects of one.

January 17 2012

Mobile interfaces: Mistakes to avoid and trends to watch

In the following interview, "Designing Mobile Interfaces" co-author Steven Hoober discusses common mobile interface mistakes, and he examines the latest mobile device trends — including why the addition of more gestures and sensors isn't wholly positive.

What are the most common mobile UI mistakes?

Steven Hoober: The biggest issues are common to everyone, and they're strategic. Specifically, don't make a decision on what or how you are going to develop for mobile without some good thinking and some research. For example, your product might be best served on the web or as an SMS service, or perhaps 60% of your customers are on BlackBerry. Developing an iPhone app will not deliver the benefits you'd expect in these cases.

Related to this is making sure you have the right data. I see lots of people who suddenly reveal that 90% of their desktop web clicks are coming from, for example, iPad. Much of the time, shocking numbers like this are simply wrong, and the analytics tool is being tricked. Or, there is some other driver, such as that the site works poorly on Firefox, and it's redirected to a dumbed-down version on most handsets, so no one uses it.

Mobile must never be a dumbed-down, limited experience. Sure, it can be different from the desktop, but users expect all information everywhere they go now. Don't make them go to the desktop site or use their desktop for some parts of your product. If you do, they will probably find a competitor that doesn't make them do this.

Lastly, make sure that you are addressing the whole mobile experience — from the way an app is sold in the store or market to the password-reset email. Each of these elements can break the customer's experience enough that they might just stop using your product.

What recent mobile UI and mobile trends — good or bad — have caught your attention?

Steven Hoober: I fear that gesture is getting out of hand. More and more gestures are being added, and far too many are at the operating-system (OS) level. At first, I liked this for consistency, but now I'm seeing that it risks interfering with getting work done. OS-level gestures supersede good ideas at the app level and prevent app developers from coming up with interesting gestural interfaces that fit their specific needs.

Additionally, I fear that using gesture alone is making the discovery of functions and features even more difficult. Basic functions are becoming "Easter eggs." The trend away from menus means that sometimes it's impossible to find a feature you just know is in there. We need buttons and lists and controls, at least as secondary functions.

Also, for good and bad, we're getting more sensors in devices. Near-field communication (NFC) is a good example. But these sensors are all too often used in deliberate, direct ways, the way GPS is tied to driving directions. Mobile sensors — and radios — can and should be used for lots of other purposes.

What do you see as the core UI difference between smartphones and tablets?

Steven Hoober: Larger screens should mean more collaboration and sharing. Tablets, used handheld or as kiosks, seem to encourage joint usage, but they are often designed as individual platforms. Even in the book, we conflated all mobiles as personal, but that's partly because the operating systems are set up this way now. I'd like to see more exploration of simultaneous, multi-user interfaces to exploit the platform.

Designing Mobile Interfaces — With hundreds of thousands of mobile applications available today, your app has to capture users immediately. This book provides practical techniques to help you catch — and keep — their attention. You'll learn core principles for designing effective user interfaces, along with a set of common patterns for interaction design on all types of mobile devices.



December 16 2011

Top Stories: December 12-16, 2011

Here's a look at the top stories published across O'Reilly sites this week.

Five big data predictions for 2012
The coming year of big data will bring developments in streaming data frameworks and data marketplaces, along with a maturation in the roles and processes of data science.

A war story, a Kindle Single, and hope for long-form journalism
Instead of walking his latest long-form story door to door, freelance journalist Marc Herman decided to blaze his own trail — he published the story as a Kindle Single. In this interview, he talks about the Kindle Single experience and offers his take on the future of journalism.

You can't get away with a bad mobile experience anymore
Mobile used to carry built-in caveats around speed and design, but those excuses are now wearing thin. In this interview, Strangeloop's Joshua Bixby examines the evolution of mobile expectations and how companies should adapt.

An angel who bets on women-led companies
Joanne Wilson discusses becoming an angel investor, how investors can help change the ratio of women CEOs, and the Mars versus Venus approach to entrepreneurialism.

Where is the OkCupid for elections?
What will be the "OkCupid for elections" in 2012? Open-source app offers one approach, and startup ElectNext is applying data analysis with an issue-matching engine.

Tools of Change for Publishing, being held February 13-15 in New York, is where the publishing and tech industries converge. Register to attend TOC 2012.

December 14 2011

You can't get away with a bad mobile experience anymore

In the past, we accepted certain limitations in mobile content and speed because it was the actual, real web on a mobile device! But as Strangeloop president Joshua Bixby (@joshuabixby) notes in the following interview, that mode of thinking is on its way out. Now, most customers expect a full and fast web experience whether they're on a desktop, tablet or phone. The companies that offer that are poised to succeed. And for those that don't, ridicule will be the least of their concerns.

Should companies be thinking "mobile first" now?

Joshua Bixby: We talk about "the web" and "the mobile web" as if the two are different, but they aren't. I'm the first to admit that I'm as guilty of doing this as the next person. Using these terms is helpful for discussing differences in how people browse via different devices, but at the end of the day, it's all one web. Users want the same breadth and depth of content, no matter what device they're using. They want a consistent, reliable user experience. They don't want to interact with your site one way at their desks, then learn a whole new way when they're tablet-surfing on the couch, and then learn a third way when they're roaming around with their phones. Site owners who can deliver an experience that feels the same, regardless of the platform, are the ones who are going to own the web of the not-too-distant future.

Will all phones be smartphones at some point? Or will non-smartphones remain an important segment for years to come?

Joshua Bixby: It really depends on which market you look at. Last summer, comScore published a survey that showed that, despite the rapid rate of smartphone adoption, 155 million American mobile phone users still don't have smartphones. That's obviously a pretty significant number. But if you take a global view, things flip around and we see that, especially in developing countries, new mobile users are jumping right on the smartphone wagon. This allows users to bypass both dumb phones and the desktop web. It's a whole different way of interacting with the Internet.

I think an even better question might be: How great a disruptor will tablets be to the mobile market? Site owners have been caught with their mobile pants down, so to speak, in their inability to recognize the importance of this market. Forrester did a survey of retailers and found that, on average, about 20% of holiday mobile traffic came via tablets, with some retailers reporting that more than half their mobile traffic came via tablets. But most "mobile-optimized" sites look terrible on tablets. In 2012, site owners are going to be scrambling to catch up with this paradigm shift.

Should mobile app development be prioritized over mobile site development?

Joshua Bixby: Mobile apps will continue to have a role for repeat, loyal users of an online store or service, but it's incredibly short-sighted of companies to focus on app development over site development. People are always going to want to access the full public Internet. It's circa-1995, AOL-style thinking to assume otherwise.

I've seen scenario after scenario where apps present a huge usability problem because they don't anticipate the various ways that the web and apps intersect. Media sites are some of the worst culprits. Here's a common scenario: You're checking your Twitter feed and click on a link, but instead of taking you to the article, you're funneled to an intermediary "download our app" page. Often, this is a demand, not a request. There's no way to bypass this app block. And then say you do download the app — after it's installed, you don't even get served the article you originally wanted to see. You just get served the home page of the site, so you have to go back to Twitter and find the original link again.

This is just one scenario, but there are plenty more, and they all highlight the fact that many site owners don't recognize the complexity of mobile user paths through the web.

What are the most important mobile key performance indicators (KPIs)? How do these differ from the KPIs of traditional websites?

Joshua Bixby: This really depends on the type of site. Obviously, for ecommerce sites, you're always going to care about revenue and conversion, but these aren't necessarily as important for mobile as for desktop traffic. For all you know, your mobile user is standing in your bricks-and-mortar store doing some price-checking or reading product reviews. They're going to convert in your store. For these users, what you care about is engagement, not conversions, which is where KPIs like bounce rate and page views come in.

Site owners should look closely at bounce rate, page views, and time on page for their mobile traffic. These numbers should be commensurate with their desktop traffic. If they're way off — say you're averaging 2.3 page views per mobile visitor and 5.9 page views per desktop visitor — that could be an indicator that the mobile site is doing something to drive users away. In that case, you should analyze how fast your pages are loading or how usable they are.
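To make the comparison Bixby describes concrete, here is a minimal sketch in Python. The metric names, sample values, and the 50% divergence threshold are illustrative assumptions, not figures from the interview (apart from the 2.3 vs. 5.9 page-view example above):

```python
# Hypothetical sketch: flag mobile KPIs that diverge sharply from desktop.
# The metric names and the 50% threshold are illustrative only.

def flag_divergent_kpis(mobile, desktop, tolerance=0.5):
    """Return metrics where the mobile value falls below (1 - tolerance) of desktop."""
    flagged = {}
    for metric, desktop_value in desktop.items():
        mobile_value = mobile.get(metric)
        if mobile_value is not None and mobile_value < desktop_value * (1 - tolerance):
            flagged[metric] = (mobile_value, desktop_value)
    return flagged

# Sample figures echoing the 2.3 vs. 5.9 page-view example above
mobile_kpis = {"page_views_per_visit": 2.3, "time_on_page_sec": 41}
desktop_kpis = {"page_views_per_visit": 5.9, "time_on_page_sec": 55}

print(flag_divergent_kpis(mobile_kpis, desktop_kpis))
# → {'page_views_per_visit': (2.3, 5.9)}
```

Here the mobile page-view average falls below half the desktop average, so it gets flagged for a closer look at page load times and usability, while time on page passes the (assumed) threshold.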

Does a one-second delay on the mobile side have more significant repercussions than a one-second delay on the desktop side?

Joshua Bixby: I'm in the process of researching this very question. I recently did some fairly exhaustive analysis of mobile KPIs for a couple of Strangeloop's customers, which involved looking at more than 500,000 unique visits and analyzing how page load time affected metrics like conversion, abandonment rate, cart size, and page views. I presented my findings at Velocity Europe in Berlin. When I got back home, I asked myself, "Why didn't I compare these numbers against some kind of desktop baseline?" So I'm doing that now. I'll be happy to share my findings when I'm done.

Have you found that users of one type of mobile OS or device are more accepting of delays than others?

Joshua Bixby: This is a really neat area of research, and one that we've only begun to explore. In the mobile research I just mentioned, we did a little experiment where we used network quality as a proxy for performance. The reason for doing this was to try to get a sense of what major changes in page load — say the difference between pages that load in six seconds versus pages that load in 20-plus seconds — do to metrics. We found a few interesting things.

To start, when it comes to bounce rate, we found that iPad users have about the same bounce rate as Android and iPhone users — about 22-24% — when pages are served slowly to all three groups (iPad, iPhone and Android). But speeding pages up had a much greater impact on the iPad group than it did on the other two groups: The iPad users' bounce rates dropped to around 5%, compared to 8% for iPhone users and 11% for Android users.

What this means is anyone's guess, but I'd conjecture that the iPad experience is probably more conducive to extended browsing, so iPad users are more likely to stick around once they're confident that they'll get a reasonable user experience. Or maybe it's because iPad users are more likely to be sitting at home on their couch, while smartphone users are more likely to be out and about.
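For concreteness, the relative improvements implied by the figures above can be worked out directly. This sketch simply restates the quoted numbers; the 23% slow-page baseline is an assumed midpoint of the quoted 22-24% range:

```python
# Relative bounce-rate reduction per platform, restating the figures quoted above.
# The 23% slow-page baseline is an assumed midpoint of the 22-24% range.

slow_bounce = 0.23  # approximate bounce rate when pages load slowly (all platforms)
fast_bounce = {"iPad": 0.05, "iPhone": 0.08, "Android": 0.11}

for platform, rate in fast_bounce.items():
    reduction = (slow_bounce - rate) / slow_bounce
    print(f"{platform}: bounce rate drops about {reduction:.0%} when pages speed up")
# iPad ≈ 78%, iPhone ≈ 65%, Android ≈ 52%
```

Worked out this way, the iPad group sees roughly a three-quarters reduction in bounce rate from faster pages, versus about half for the Android group — which is the disproportionate impact Bixby points to.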

What are the most common mobile optimization mistakes? How should they be addressed?

Joshua Bixby: The most common mistake I see is when site owners aggressively optimize their sites for very specific platforms. There's a blog I'm kind of addicted to called WTF Mobile Web, which is filled with examples of this from companies that should probably know better. For example, if you go to Canon's site on your iPad, you're served a fixed-width site that takes up about two-fifths of the screen. If I'm using a tablet, I've got a big shiny screen. I want to see pictures and rich content.

Canon's website on an iPad fails to take full advantage of the screen size.

Speaking of stripped-down sites, there's such a thing as being too stripped down. The overly minimalist look of so many "mobile-optimized" pages was an understandable reaction to the wave of complaints about pages being too busy on mobile devices. But the bare-bones menu concept is not a solution. People expect more from websites in the aesthetic sense, and they want more than a plain column of buttons.

Here's one last complaint: "Mobile-optimized" sites that contain a "See full site" link, but then after you click on it, it's impossible to return to the mobile version. This has happened to me a couple of times, and it's kind of maddening. When you're used to the relative ease and fluidity of the web, these rabbit holes are hard to accept.

What's your take on the Kindle Fire and the Silk browser?

Joshua Bixby: The Kindle Fire and the Silk browser are really fascinating developments because they represent the truly out-of-the-box thinking that's going to usher in the next phase of aggressive performance optimization. That said, I don't think they're going to radically change the landscape in and of themselves.

We all got excited when Amazon described Silk's so-called "split browser" design — the fact that Silk can offload complex browser tasks to Amazon's cloud, resulting in dramatically faster load times. The logical next questions became: Will there be any imitators among the other browser vendors? Does the split-browser concept represent a new direction for browsers? The short answer to both questions is "no," at least for 2012.

For starters, there are privacy concerns surrounding the fact that this kind of split architecture has the potential to capture private user data. And while Silk offers a performance boost for some tablet content, even its own product manager, Brett Taylor, says of tablet browsing, "It's not meant to process and crunch a lot of heavy data."

Basic optimization techniques — such as those embedded in Silk — that transform code on a per-page basis to render pages faster in the browser can actually slow down, or even break, pages. Web pages are becoming even more complex, data-intensive, and dynamic. For now, advanced content optimization — which takes a big-picture approach to accelerating an entire site — is still the only reliable way to radically optimize sites without causing harm.

Since we're in the midst of the holiday shopping season, what role do you see mobile playing this year?

Joshua Bixby: The 2011 holiday shopping season has proven that the mobile web is no longer a curiosity: 15.2% of online traffic came from mobile devices on Thanksgiving Day, more than double the previous year. Not only were these mobile shoppers spending money, they spent more on average — particularly shoppers using iPads — than desktop shoppers. This will be a wake-up call to retailers. Rather than keeping mobile on the sideline, companies will grow their mobile teams to match the size and scope of their regular development teams.

Joshua Bixby discussed mobile speed and KPIs at Velocity Europe. His full presentation is available in the following video.


December 13 2011

Now available: "Breaking the Page" preview edition

I'm thrilled to announce the release of the preview edition of "Breaking the Page: Transforming Books and the Reading Experience" (available through the iBookstore, Amazon and O'Reilly). In this free download, I tackle one big-ticket question: how do we make digital books as satisfying as their print predecessors?

I've studied hundreds of recent publishing experiments, comparing them all to what I've learned during a 20-plus year career as writer, editor, and publisher. My goal: distill best-practice principles and spotlight model examples. I want to help authors understand how to use the digital canvas to convey their best ideas, and how to do so in a reader-friendly way. As app book tinkering flourishes, and as EPUB 3 emerges as an equally rich alternative, the time felt right for a look at the difference between what can and what should be done in digital book-land. That's my mission in "Breaking the Page."

The preview edition's three chapters focus on some basics: browsing, searching, and navigating. This ain't the sexiest crew, I know, but it's amazing how hard it is to get this stuff right. I focus on examples good and bad, toss in a few design ideas of my own, and suggest how to include these services in a way that makes digital books pleasing to eyes, hands, and minds.

Ahead, I've got a head-to-toe tour of model digital book features planned for the full edition (coming mid-2012). I'll be focusing on questions like:

  • What's the best way to integrate — and not just add — different media types? And, on a related note: is it possible to make the viewing experience as seamless and immersive as reading is in print?
  • How do you design content and reading paths on what is, essentially, an infinite canvas?
  • How do you pick the best balance between personalized design (reader-controllable font sizing, for example) and author-driven fixed layout? Are there any acceptable compromises?

While I'm pushing ahead to the finish line, I'd love to hear what you think. Suggestions, examples, critiques … send 'em all my way.

TOC NY 2012 — O'Reilly's TOC Conference, being held Feb. 13-15, 2012, in New York City, is where the publishing and tech industries converge. Practitioners and executives from both camps will share what they've learned and join together to navigate publishing's ongoing transformation.

Register to attend TOC 2012



October 26 2011

"Revolution in the Valley," revisited

It's one thing to look back on a project with the power of hindsight and recognize that project's impact. It's quite another to realize a project will be profound as it's still playing out.

Andy Hertzfeld had that rare experience when he was working on the original Macintosh. Hertzfeld and his colleagues knew the first Mac could reshape the boundaries of computing, and that knowledge proved to be an important motivator for the team.

Over the years, Hertzfeld has chronicled many of the stories surrounding the Macintosh's development through his website and in his 2004 book, "Revolution in the Valley." With the book making its paperback debut and the work of Steve Jobs and Apple fresh in people's minds, I checked in with Hertzfeld to revisit a few of those past stories and discuss the long-term legacy of the Macintosh.

Our interview follows.

What traits did people on the Macintosh team share?

Andy HertzfeldAndy Hertzfeld: Almost everyone on the early Macintosh team was young, super smart, artistic and idealistic. We all shared a passionate conviction that personal computers would change the world for the better.

At what point during the Macintosh's development did you realize this was a special project?

Andy Hertzfeld: We knew it from the very beginning; that's why we were so excited about it. We knew that we had a chance to unlock the incredible potential of affordable personal computers for the average individual for the very first time. That was incredibly exciting and motivating.

Between the book and your website, did the story of the Macintosh take on a "Rashomon"-like quality for you?

Andy Hertzfeld: I was hoping for more of that than we actually achieved — there haven't been enough stories by varied authors for that to come to the forefront. The most "Rashomon"-like experience has been in the comment section of the website, where people have corrected some of my mistakes and sometimes described an alternate version that's different from my recollections.

How did the Macintosh project change after Steve Jobs got involved?

Andy Hertzfeld: It became real. Before Steve, the Macintosh was a small research project that was unlikely to ever ship; after Steve, it was the future of Apple. He infused the team with a fierce urgency to ship as soon as possible, if not sooner.

There's a story about how after Jobs brought you onto the Macintosh team, he finalized the move by yanking your Apple II's power cord out of its socket. Was that an unusual exchange, or did Jobs always act with that kind of boldness?

Andy Hertzfeld: Well, not always, but often. Steve was more audacious than anyone else I ever encountered.

In the book, you wrote that as soon as you saw the early Mac logic board you knew you had to work on the project. What about that board caught your attention?

Andy Hertzfeld: It was the tightness and cleverness of the design. It was a clear descendant of the brilliant work that Steve Wozniak did for the Apple II, but taken to the next level.

Was the logic board the gravitational center for the project?

Andy Hertzfeld: I would say that Burrell Smith's logic board was the seed crystal of brilliance that drew everyone else to the project, but it wasn't the gravitational center. That would have to be the user interface software; my nomination is Bill Atkinson's QuickDraw graphics library.

Revolution in The Valley: The Insanely Great Story of How the Mac Was Made — "Revolution in the Valley" reveals what it was like to be there at the birth of the personal computer revolution. The story comes to life through the book's portrait of the talented and often eccentric characters who made up the Macintosh team.

Today only: Get the ebook edition of "Revolution in the Valley" for $9.99 (Save 50%).

Where does the graphical user interface (GUI) rank among tech milestones? Is it on the same level as the printing press or the Internet?

Andy Hertzfeld: It's hard to compare innovations from different eras. I think both the printing press and the Internet were more important than the GUI, but the Internet couldn't have reached critical mass without the GUI enabling ordinary people to enjoy using computers.

In the book's introduction you wrote that the team "failed" because computers are still difficult. Has computer complexity improved at all?

Andy Hertzfeld: I think things have improved significantly since I wrote that in 2004, mainly because of the iPhone and iPad, which eliminated lots of the complexity plaguing desktop computing. That said, I think we still have a ways to go, but newer developments like Siri are very promising.

What do you think the long-term legacy of the Macintosh will be?

Andy Hertzfeld: The Macintosh was the first affordable computer that ordinary people could tolerate using. I hope we're remembered for making computing delightful and fun.

Would a current Apple fan have anything in common with a Macintosh-era Apple fan?

Andy Hertzfeld: Sure, a love for elegant design and bold innovation. They both think differently.

This interview was edited and condensed.


October 20 2011

Note to visualization creators: Add subtitles and narration

I came across a fantastic animated visualization from NASA that shows a space-based view of the Earth's fires. Like many visualizations in this genre, the combination of maps, data points and time-based animation proves quite powerful.

Beyond being just plain fascinating, this particular visualization also includes two simple but often neglected features: subtitles and narration.

NASA's animated visualization of the Earth's fires is enhanced by subtitles and simple narration. (Click to see full version.)

It's common for visualization creators to write accompanying posts that detail the development of their visualizations and/or dig further into findings (case in point: NASA's excellent write-up). These sorts of posts are great, but why not go the extra step and embed a degree of context within the animations? That way, the information contained in subtitles and narration will always float along with the core material. This is particularly important with embeddable objects because the flashy animated bits are what gets shared across the web. The contextual elements rarely come along for the ride.

That said, the NASA visualization is far from perfect. Notably, there doesn't appear to be an easy way to embed a version of the video with formal subtitles and narration. A Fast Company version is embeddable, but captions and narration are absent. A YouTube version gets a little closer — it includes the narration and YouTube's experimental captions — but it's not on par with the full version available through NASA's site. What I really want is an embeddable "official" animation with narration and subtitles that I can toggle on or off.

Strata 2012 — The 2012 Strata Conference, being held Feb. 28-March 1 in Santa Clara, Calif., will offer three full days of hands-on data training and information-rich sessions. Strata brings together the people, tools, and technologies you need to make data work.

Save 20% on registration with the code RADAR20


October 18 2011

Six ways to think about an "infinite canvas"

This is part of an ongoing series related to Peter Meyers' project "Breaking the Page: Transforming Books and the Reading Experience." We'll be featuring additional material in the weeks ahead. (Note: This post originally appeared on A New Kind of Book. It's republished with permission.)

Next week, I'm speaking at the Books in Browsers conference on "the infinite canvas." When I started chewing on this topic, my thoughts centered on a very literal vision: a super-ginormous sheet for authors to compose on. And while I think there's some great creative territory to explore in this notion of space spanning endlessly up, down, left, and right, I also think there are a bunch of other ways to define what an infinite canvas is. Not simply a huge piece of virtual paper, but instead, an elastic space that does things no print surface could do, no matter how big it is. So, herewith, a quick stab at some non-literal takes on the topic. My version, if you will, of six different ways of thinking about the infinite canvas.

Continuously changeable

The idea here is simple: refreshable rather than static content. The actual dimensions of the page aren't what's elastic; instead, it's what's being presented that's continuously changing. In some ways, the home page of a newspaper's website serves as a good example here. Visit The Boston Globe half a dozen times over the course of a week and each time you'll see a new serving of news. (Haven't seen that paper's recent online makeover yet? Definitely worth checking out, and make sure to do so using a few different screen sizes — laptop, big monitor, mobile phone ... each showcases a different version of its morphing, on-the-fly design.)

Deep zooms

Ever seen that great short film "Powers of Ten"? It's where the shot begins just above two picnickers on a blanket and then proceeds to zoom out so that you see the same picnic blanket, but now from 100 feet up, and then 1,000 feet, and on and on until you've got a view from outer space. (After the zoom out, the process reverses, and you end up getting increasingly microscopic glimpses of the blanket, its fabric, the individual strands of cotton, and so on.) Here's a presentational canvas that adds new levels of meaning at different magnifications. So, the viewer doesn't simply move closer or further away, as you might in a room when looking at, say, a person. As you get closer, you see progressively deeper into the body. Microsoft calls this "semantic zooming" (as part of its forthcoming touchscreen-friendly Metro interface). Bible software maker Glo offers some interesting content-zooming tools that implement this feature for readers looking to flip between bird's-eye and page views.
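A loose sketch of the semantic-zoom idea: each magnification band unlocks a different level of meaning rather than just scaling one image. The zoom thresholds and content levels below are invented for illustration, not taken from any real product:

```python
# A toy model of "semantic zooming": each magnification band maps to a
# different level of meaning. Thresholds and descriptions are hypothetical.
ZOOM_LEVELS = [
    (0.0, "outer space: the whole planet"),
    (1.0, "a picnic blanket from above"),
    (10.0, "the weave of the fabric"),
    (100.0, "individual cotton strands"),
]

def content_for_zoom(magnification: float) -> str:
    """Return the deepest level of meaning unlocked at this magnification."""
    chosen = ZOOM_LEVELS[0][1]
    for threshold, description in ZOOM_LEVELS:
        if magnification >= threshold:
            chosen = description
    return chosen
```

The point of the sketch is that the viewer's position selects *content*, not just scale: panning the same zoom control swaps in an entirely different layer of information.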

Alternate geometries

A printed page is a 2-D rectangle of fixed dimensions. On the infinite canvas, the possibilities vary widely, deeply, and as Will Ferrell's character in "Old School" might say, "in ways we've never even heard of." Some possible shapes here: a 3-D cube with content on each side, or pyramid-shaped ebooks (Robert Darnton wrote about those in The New Age of the Book, where he proposes a multi-layered structure for academics with excess material that would bust the bindings of a printed book).

Canvases that give readers room to contemplate and respond

I just got a wonderful print book the other day called "Finish This Book." It contains a collection of fill-in-the-blank and finish-this-thought creative exercises. It reminded me that one thing digital books haven't yet explored much is leaving space for readers to compose their reactions. Sure, every ebook reader today lets you take notes, but as I've written before, these systems are pale replicas of the rich, reader-friendly note taking experiences we get in print books. Job No. 1 is solving those shortcomings, but then imagine the possibilities if digital books are designed to allow readers to compose extensive thoughts and reactions.

A feast for our fingers

Print book lovers (I'm one of 'em) wax on about their beloved format's special talents: the smell, the feel, its nap-friendly weight. But touchscreen fans can play that game, too. Recall, for starters, the first time you tapped an iPhone or similarly modern touchscreen. Admit it: the way it felt to pinch, swipe, flick, and spread ... those gestures introduce a whole new pleasure palette. Reading and books have heretofore primarily been a visual medium: you look and ponder what's inside. Now, as we enter the age of touchscreen documents, content becomes a feast for our fingers as much as our eyes. Authors, publishers, and designers are just beginning to appreciate this opportunity, making good examples hard to point to. I do think that Erik Loyer is among the most interesting innovators with his Strange Rain app, a kind of mashup between short fiction and those particle visualizers like Uzu. It's not civilian-friendly yet, I don't think, but it points the way for artists interested in incorporating touch into their creations.

Jumbo content

A movable viewport lets your audience pan across massive content panoramas. Some of the possibilities here are photographic (Photosynth, Virtual History ROMA). Others have begun to explore massively wide content landscapes, such as timelines (History of Jazz). One new example I just learned about yesterday: London Unfurled for iPad, a hand-illustrated pair of 37-foot-long drawings of every building on the River Thames between Hammersmith Bridge and the Millennium Dome, complete with tappable backstories on most of the architecture that's on display.

These are just a few of the possibilities that I've spotted. What comes to mind when you think about the infinite canvas?

Webcast: Digital Bookmaking Tools Roundup #2 — Back by popular demand, in a second look at Digital Bookmaking Tools, author and book futurist Pete Meyers explores the existing options for creating digital books.

Join us on Thursday, November 10, 2011, at 10 am PT
Register for this free webcast

Photo: masterpiece by 416style, on Flickr


October 17 2011

Open Question: What needs to happen for tablets to replace laptops?

I've owned an iPad since you could own an iPad. I upgraded from iPad 1 to iPad 2 because the thinner form factor, faster response and Smart Cover were too hard to resist. So, I suppose you could say I'm a fan — both of the iPad itself and the overall tablet experience it provides.

But here's the thing: I now often carry a tablet and a laptop and a smartphone. The dream of one device to rule them all has morphed into a hazy vision of three devices that are all somehow necessary (tablet for browsing/consuming, laptop for real work, phone for on-the-go updates/camera — how did it come to this?).

Now, I know there are people out there who can bend a tablet to their will. I don't have that super power. "Inputting" on my tablet is an exercise in hunt-and-peck futility. More often than not I delay long email responses and other typing-intensive work until I'm stationed in front of a proper computer. This is why my tablet experience, in its current form, can never replace my laptop experience.

I bring all this up because participants in a recent back-channel email thread did something really interesting: They ignored the question of where tablets fit in now and instead examined the specific features they would need before tablets could replace their laptops. The focus was shifted from how tablets currently work to how they should work.

Here are a few tablet wish lists from the email thread (republished with permission; names withheld).

Participant 1:

I want a laptop with a removable screen that acts like a tablet — in other words, a dockable tablet. I want it to have great voice recognition. I want it to have Swype, so I can input text without having to "poke type" at a virtual keyboard with fingers or thumbs — and so I can input text one-handed quickly and easily. I want it to have great battery life in tablet mode, augmented by a second battery in the dock. I also want it to have a stylus, but the stylus should slide into the tablet, like my old Windows phone (v 6.5), so it doesn't get lost easily.

The dock would have a touchpad, the large battery as mentioned earlier, and would have extra USB ports so I can hook up other peripherals. The dock should obviously have a built-in keyboard and a reasonably large hard drive (250 GB or so). The total weight should not be much greater than existing lightweight laptops (a little heavier because of the extra battery). The tablet should be chargeable from the dock battery, so that if I run out of tablet power and place it in the dock, the tablet recharges from the dock battery. I want it to have a decent rear-facing camera (I don't care much about a front-facing one), Wi-Fi, GPS, NFC, Bluetooth, ambient light sensor, accelerometer, speakers, and (optionally) 4G cell radio capability.

Participant 2:

I find it hard to fault my Lenovo S12 — 3 pounds, 6 hrs of battery, great keyboard, 250 GB hard drive (no CD drive), HDMI output, 3 USB ports, Wi-Fi, ethernet, Windows 7, Office 2010. It makes tablets seem like Vespas (not to denigrate Vespas — just being realistic).

I've tried to take my iPad to meetings, and I've seen people with that toy keyboard Apple offered initially (though I like the looks of some of the new case/keyboard combos), and I've seen people do great presentations with an iPad. But input is the barrier. I've never had an opportunity to use Swype, but it's an intriguing solution. Voice recognition also seems plausible if you're not in a public setting.

I think what I really want at this point is a 1-pound S12. Lenovo has an interesting Android tablet, but it all comes back to the keyboard and input, doesn't it? If Windows 8 can deliver both the traditional desktop experience and a tablet experience that builds on WinPhone7, that gets closer to what I want. If I can get to three screens (TV-Tablet-Phone) instead of a dozen or whatever it is, that would be good.

Your take?

As you can see, people on the thread indulged in specificity. I'd like to invite Radar readers to do the same thing by addressing these open questions:

  • Do you use multiple devices throughout the day? If so, which ones?
  • How about when you travel — which devices do you pack?
  • Have you tried going tablet-only? What worked? What didn't?
  • And finally: What improvements do you need to see before you go tablet-only?

Please weigh in through the comments or Google+.

TOC NY 2012 — O'Reilly's TOC Conference, being held Feb 13-15, 2012, in New York City, is where the publishing and tech industries converge. Practitioners and executives from both camps will share what they've learned and join together to navigate publishing's ongoing transformation.

Register to attend TOC 2012


October 05 2011

The making of a "minimum awesome product"

This post is part of the TOC podcast series, which we'll be featuring here on Radar in the coming months. You can also subscribe to the free TOC podcast through iTunes.

When the Flipboard iPad app first arrived, it helped us to look at the tablet user interface in a whole new way. Suddenly, those ugly RSS feeds became beautiful, and they could be navigated alongside Twitter and Facebook streams. Flipboard's co-founder, Evan Doll (@edog1203), recently sat down with me to talk about how the app was designed and where it might be heading. Key highlights from the full video interview (below) include:

  • A key to design at Apple: Every time you present the user with a non-essential decision to make, you have failed as a designer. [Discussed at the 1:00 mark]
  • Steve Jobs and user interface design: All those rumors are true. Steve Jobs has indeed played a significant role in even the tiniest of user interface design decisions. [Discussed at 1:20]
  • Exceeding customer expectations: Focus less on producing a "minimum viable product" and more on making it a "minimum awesome product." [Discussed at 2:28]
  • Anonymized data is a crucial tool: You may not like it, but your browsing habits are being studied. Don't worry though — it's not some big brother conspiracy, but rather the Flipboard team looking for ways to improve the user experience. [Discussed at 4:05]
  • Focus on being "fundamentally social": The social component of your product needs to be organic, not something that's tacked on later. Flipboard is an "inherently social browser." [Discussed at 5:15]

You can view the entire interview in the following video.

TOC Frankfurt 2011 — Being held on Tuesday, Oct. 11, 2011, TOC Frankfurt will feature a full day of cutting-edge keynotes and panel discussions by key figures in the worlds of publishing and technology.

Save 100€ off the regular admission price with code TOC2011OR


September 28 2011

Spoiler alert: The mouse dies. Touch and gesture take center stage

The moment that sealed the future of human-computer interaction (HCI) for me happened just a few months ago. I was driving my car, carrying a few friends and their children. One child, an 8-year-old, pointed to the small LCD screen on the dashboard and asked me whether the settings were controlled by touching the screen. They were not. The settings were controlled by a rotary button nowhere near the screen. It was placed conveniently between the driver and passenger seats. An obvious location in a car built at the tail end of an era when humans most frequently interacted with technology through physical switches and levers.

The screen could certainly have been one controlled by touch, and it is likely a safe bet that a newer model of my car has that very feature. However, what was more noteworthy was the fact that this child was assuming the settings could be changed simply by passing a finger over an icon on the screen. My epiphany: for this child's generation, a rotary button was simply old school.

This child is growing up in an environment where people are increasingly interacting with devices by touching screens. Smartphones and tablets are certainly significant innovations in areas such as mobility and convenience. But these devices are also ushering in an era that shifts everyone's expectations of how we engage in the use of technology. Children raised in a world where technology will be pervasive will touch surfaces, make gestures, or simply show up in order for systems to respond to their needs.

This means we must rethink how we build software, implement hardware, and design interfaces. If you are in any of the professions or businesses related to these activities, there are significant opportunities, challenges and retooling needs ahead.

It also means the days of the mouse are probably numbered. Long live the mouse.

The good old days of the mouse and keyboard

Probably like most of you, I have never formally learned to type, but I have been typing since I was very young, and I can pound out quite a few words per minute. I started on an electric typewriter that belonged to my dad. When my oldest brother brought home our first computer, a Commodore VIC-20, my transition was seamless. Within weeks, I was impressing relatives by writing small software programs that did little more than change the color of the screen or make a sound when the spacebar was pressed.

Later, my brother brought home the first Apple Macintosh. This blew me away. For the first time I could create pictures using a mouse and icons. I thought it was magical that I could click on an icon and then click on the canvas, hold the mouse button down, and pull downward and to the right to create a box shape.

Imagine my disappointment when I arrived in college and we began to learn a spreadsheet program using complex keyboard combinations.

Fortunately, when I joined the workforce, Microsoft Windows 3.1 was beginning to roll out in earnest.

The prospect of the demise of the mouse may be disturbing to many, not least of whom is me. To this day, even with my laptop, if I want to be the most productive, I will plug in a wireless mouse. It is how I work best. Or at least, it is currently the most effective way for me.

Most of us have grown up using a mouse and a keyboard to interact with computers. It has been this way for a long time, and we have probably assumed it would continue to be that way. However, while the keyboard probably has considerable life left in it, the mouse is living on borrowed time.

Fortunately, while the trend suggests mouse extinction, we can momentarily relax, as it is not imminent.

But what about voice?

From science fiction to futurist projections, it has always been assumed that the future of human-computer interaction would largely be driven by using our voices. Movies over decades have reinforced this image, and it has seemed quite plausible. We were more likely to see a door open via voice rather than a wave. After all, it appears to be the most intuitive and requires the least amount of effort.

Today, voice recognition software has come a long way. For example, accuracy and performance when dictating to a computer are quite remarkable. If you have broken your arms, this can be a highly efficient way to get things done on a computer. But despite having some success and filling important niches, broad-based voice interaction has simply not prospered.

It may be that a world in which we control and communicate with technology via voice is yet to come, but my guess is that it will likely complement other forms of interaction instead of being the dominant method.

There are other ways we may interact, too, such as via eye-control and direct brain interaction, but these technologies remain largely in the lab, niche-based, or currently out of reach for general use.

The future of HCI belongs to touch and gesture

It is a joy to watch how people use their touch-enabled devices. Flicking through emails and songs seems so natural, as does expanding pictures by using an outward pinching gesture. Ever seen how quickly someone — particularly a child — intuitively gets the interface the first time they use touch? I have yet to meet someone who says they hate touch. Moreover, we are more likely to hear people say just how much they enjoy the ease of use. Touch (and multi-touch) has unleashed innovation and enabled completely new use cases for applications, utilities and gaming.

While not yet as pervasive, gesture-based computing (in the sense of computers interpreting body movements or emotions) is beginning to emerge in the mainstream. Anyone who has ever used Microsoft Kinect will be able to vouch for how compelling an experience it is. The technology responds adequately when we jump or duck. It recognizes us. It appears to have eyes, and gestures matter.

And let us not forget, too, that this is version 1.0.

The movie "Minority Report" teased us about a possible gesture-based future: the ability to manipulate images of objects in midair, to pile documents in a virtual heap, and to cast aside less useful information. Today many of us can experience its early potential. Now imagine that technology embedded in the world around us.

The future isn't what it used to be

My bet is that in a world of increasingly pervasive technology, humans will interact with devices via touch and gestures — whether they are in your home or car, the supermarket, your workplace, the gym, a cockpit, or carried on your person. When we see a screen with options, we will expect to control those options by touch. Where it makes sense, we will use a specific gesture to elicit a response from some device, such as (dare I say it) a robot! And, yes, at times we may even use voice. However, to me, voice in combination with other behaviors is more obvious than voice alone.

But this is not some vision of a distant future. In my view, the touch and gesture era is right ahead of us.

What you can do now

Many programmers and designers are responding to the unique needs of touch-enabled devices. They know, for example, that a paradigm of drop-down menus and double-clicks is probably the wrong set of conventions to use in this new world of swipes and pinches. After all, millions of people are already downloading millions of applications for their haptic-ready smartphones and tablets (and as the drumbeat of consumerization continues, they will also want their enterprise applications to work this way, too). But viewing the future through too narrow a lens would be an error. Touch and gesture-based computing forces us to rethink interactivity and technology design on a whole new scale.

How might you design a solution if you knew your users would exclusively interact with it via touch and gesture, and that it might also need to be accessed in a variety of contexts and on a multitude of form factors?

At a minimum, it will bring software developers even closer to graphical interface designers and vice versa. Sometimes the skillsets will blur, and often they will be one and the same.

If you are an IT leader, your mobile strategy will need to include how your applications must change to accommodate the new ways your users will interact with devices. You will also need to consider new talent to take on these new needs.

The need for great interface design will increase, and there will likely be job growth in this area. In addition, as our world becomes increasingly run by and dependent upon software, technology architects and engineers will remain in high demand.

Touch and gesture-based computing are yet more ways in which innovation does not let us rest. It makes the pace of change, already on an accelerated trajectory, even more relentless. But the promise is the reward. New ways to engage with technology enable novel ways to use it to enhance our lives. Simplifying the interface opens up technology so it becomes even more accessible, lowering the complexity level and allowing more people to participate in and benefit from its value.

Those who read my blog know my view that I believe we are in a golden age of technology and innovation. It is only going to get more interesting in the months and years ahead.

Are you ready? I know there's a whole new generation that certainly is!

Photo (top): Mouse Macro by orangeacid, on Flickr

Android Open, being held October 9-11 in San Francisco, is a big-tent meeting ground for app and game developers, carriers, chip manufacturers, content creators, OEMs, researchers, entrepreneurs, VCs, and business leaders.

Save 20% on registration with the code AN11RAD


September 19 2011

At its best, digital design is choreography

The upcoming Books in Browsers conference will focus on books as "networked, distributed sets of interactions," as opposed to content containers. I've asked several of the event's participants to address the larger concepts surrounding books in browsers. We'll be publishing these interviews over the coming weeks.

Below, Liza Daly (@liza), owner of Threepress Consulting and developer of Bookworm, ePub Zen Garden and Ibis Reader, addresses a question about tackling browser display issues.

What kinds of formatting and display issues do browsers need to overcome to handle the various forms of book content — 3D, game-like narratives, immersive texts and the like?

Liza Daly: There is, of course, an art to formatting fixed text beautifully. We call this process "laying out" a design, which brings to mind the pre-digital method of physically laying down type or visual elements in collage to produce a unified final page. The challenge for ebook designers and developers is to think less about "layout" and more about "choreography."

Text can be fluid and responsive — it can reshuffle itself due to display size, orientation, or user interaction. Our job is not to dictate where words on a virtual page must be, but instead to guide them to where they should be. It is not enough to overload a digital page with clickable doo-dads, overlays, and animation: all the elements must move together in concert and, above all, not impair the basic reading experience or enjoyment of the work. This implies a close relationship between an author, a visual artist and a developer — all three must work together to create compelling, adaptive, interactive texts.
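To make the "choreography" idea concrete, here is a toy sketch of layout logic that reshuffles text in response to display size and orientation. The breakpoints, column counts, and font sizes are invented for illustration; a real design would tune them per publication:

```python
def choose_layout(width_px: int, orientation: str = "portrait") -> dict:
    """Pick a text layout for a given display, guiding words rather than
    pinning them. Breakpoints here are hypothetical examples."""
    if width_px < 600:        # phone-sized viewport
        columns = 1
    elif width_px < 1000:     # small tablet or split view
        columns = 2
    else:                     # large screen
        columns = 3
    if orientation == "landscape" and columns < 3:
        columns += 1  # wider viewport: let the text flow into another column
    # Larger type when there's only one column to keep line lengths readable.
    return {"columns": columns, "font_px": 16 if columns == 1 else 14}
```

The same "page" yields a different arrangement on each device, which is the choreography Daly describes: the designer dictates the rules of movement, not the final positions.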

This interview was edited and condensed.



  • Digital publishing should put design above file conversion
  • If you're a content designer, the web browser will be your canvas
  • What if a book is just a URL?

    May 10 2011

    In the past few years, there has been massive growth in new and exciting cheap or free web site usability testing tools, so here’s my list of 24 tools you may need to use from time to time. Gone are the days of using expensive recruitment firms, labs and massive amounts of time to create, deploy and report on usability tests. By using these usability testing tools and others like them, you have for the first time a complete set of tools designed to tackle almost any usability research job. From recruiting real users (with tools such as Ethnio) to conducting live one on one remote moderated tests (UserVue) to analyzing results of usability changes using A/B testing (Google Website Optimizer), there is a plethora of useful and usable tools to conduct usability testing.
    24 Usability Testing Tools | Useful Usability

    March 21 2011

    Open question: Are ereaders too complex?

    Screenshot of "War and Peace" from the Kindle iPad app
    In a recent post for Gear Diary, Douglas Moran bemoaned the direction technological "advancements" are taking ereader apps and devices. As examples, he compared the original Barnes & Noble eReader (which he liked) to its replacement, the Nook app (which "kinda stinks").

    On a personal level, functionality is an ereader obstacle that turns me into an ebook curmudgeon. I recently was gifted a Kindle and I nearly threw it across the room trying to read "War and Peace" (as part of a year-long book club; I'm way behind).

    Moran and others noted the simplicity of the Kindle and how its fewer features might make for a more straightforward reading experience. But perhaps the Kindle isn't quite simple enough. In the end, I bought the print version of "War and Peace" and gave up on the device. Trying to toggle around links to read book notes was so clunky as to make that feature completely useless. Why not put the notes at the bottom of the page? Having links is great if 1. they're easy and quick to access, and 2. you can return to your place in the book in some obvious, speedy fashion. Otherwise, just give me the content.

    All this led me to questions regarding functionality and user experience in ereading:

    • Are ereader developers focusing too much on technological possibilities and losing sight of reader behavior?
    • For those of you who embrace ereading: What features on your reader(s) are extraneous or obtrusive to your reading experience?
    • For developers: When working on a new app or an update, how do you incorporate the end-user into development?

    Please share your thoughts in the comments.


    December 16 2010

    Four short links: 16 December 2010

    1. On Compressing Social Networks (PDF) -- paper looking at the theory and practice of compressing social network graphs. Our main innovation here is to come up with a quick and useful method for generating an ordering on the social network nodes so that nodes with lots of common neighbors are near each other in the ordering, a property which is useful for compression (via My Biased Coin, via Matt Biddulph on Delicious)
    2. Requiring Email and Passwords for New Accounts (Instapaper blog) -- a list of reasons why the simple signup method of "pick a username, passwords are optional" turned out to be trouble in the long run. (via Courtney Johnston's Instapaper feed)
    3. Extreme Design -- building the amazing in an equally-amazing fashion. I want a fort.
    4. rgeo -- a new geo library for Rails. (via Daniel Azuma via Glen Barnes on Twitter)
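The ordering idea in the first link can be sketched in a few lines: number the nodes so that neighbors land near each other, and the gaps between consecutive neighbor positions stay small, which is what makes the adjacency lists compressible. This is a toy greedy BFS ordering, not the paper's actual method:

```python
from collections import deque

def locality_ordering(adjacency):
    """Greedy BFS numbering: a node's neighbors get numbered soon after it,
    so nodes that share neighbors tend to sit close together in the order."""
    order, seen = [], set()
    for start in adjacency:
        if start not in seen:
            seen.add(start)
            queue = deque([start])
            while queue:
                node = queue.popleft()
                order.append(node)
                for nbr in adjacency[node]:
                    if nbr not in seen:
                        seen.add(nbr)
                        queue.append(nbr)
    return order

def gap_sizes(adjacency, order):
    """Gaps between consecutive neighbor positions; small gaps take few
    bits to encode, which is the basis of the compression."""
    pos = {n: i for i, n in enumerate(order)}
    gaps = []
    for node, nbrs in adjacency.items():
        positions = sorted(pos[n] for n in nbrs)
        gaps.extend(b - a for a, b in zip(positions, positions[1:]))
    return gaps
```

On a small graph the gaps stay tiny; the paper's contribution is finding orderings that keep them tiny on graphs with millions of nodes.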
