February 21 2014

Architecting the connected world

In the first two posts in this series, I examined the connection between information architecture and user interface design, and then looked closely at the opportunities and constraints inherent in information architecture as we’ve learned to practice it on the web. Here, I outline strategies and tactics that may help us think through these constraints and that may in turn help us lay the groundwork for an information architecture practice tailored and responsive to our increasingly connected physical environments.

The challenge at hand

NYU Media, Culture, and Communication professor Alexander Galloway examines the cultural and political impact of the Internet by tracing its history back to its postwar origins. Built on a distributed information model, “the Internet can survive [nuclear] attacks not because it is stronger than the opposition, but precisely because it is weaker. The Internet has a different diagram than a nuclear attack does; it is in a different shape.” For Galloway, the global distributed network trumped the most powerful weapon built to date not by being more powerful, but by usurping its assumptions about where power lies.

This “differently shaped” foundation is what drives many of the design challenges I’ve raised concerning the Internet of Things, the Industrial Internet and our otherwise connected physical environments. It is also what creates such depth of possibility. In order to accommodate this shape, we will need to take a wider look at the emergent shape of the world, and the way we interact with it. This means reaching out into other disciplines and branches of knowledge (e.g. in the humanities and physical sciences) for inspiration and to seek out new ways to create and communicate meaning.

The discipline I’ve leaned on most heavily in these last few posts is linguistics. From these explorations, a few tactics for information architecture for connected environments emerge: leveraging multiple modalities, practicing soft assembly, and embracing ambiguity.

Leverage multiple modalities

In my previous post, I examined the nature of writing as it relates to information architecture for the web. As a mode of communication – and of meaning making – writing is incredibly powerful, but that power does come with limitations. Language’s linearity is one of these limitations; its symbolic modality is another.

In linguistics, a sign’s “modality” refers to the nature of the relationship between a signifier and a signified, between the name of a thing and the thing itself. In Ferdinand de Saussure’s classic example (from the Course in General Linguistics), the thing “tree” is associated with the word “tree” only by arbitrary convention.

This symbolic modality, though by far the most common modality in language, isn’t the only way we make sense of the world. Philosopher and logician Charles Sanders Peirce identified two additional modes that offer alternatives to the arbitrary sign of spoken language: the iconic and the indexical modes.

Indexical modes link signifier and signified by concept association. Smoke signifies fire, thunder signifies a storm, footprints signify a past presence. The signifier “points” to the signified (yes, the reference is related to our index finger). Iconic modes, on the other hand, link signifier and signified by resemblance or imitation of salient features. A circle with two dots and lines in the right places signifies the human face to humans – our perceptual apparatus is tuned to recognize this pattern as a face (a pre-programmed disposition which easily becomes a source of amusement when comparing the power outlets of various regions).

Before you write off this scrutiny of modes as esoteric navel-gazing – or juvenile visual punning – let’s take an example many of us come into contact with daily: the Apple MacBook trackpad. Prior to the OS X Lion operating system, moving your finger “down” the trackpad (i.e. from the space bar toward the edge of the computer body) caused a page on screen to scroll down – which is to say, the elements you perceived on the screen moved up (so you could see what had been concealed “below” the bottom edge of the screen).

The root of this spatial metaphor is enlightening: if you think back a few years to the scroll wheel mouse (or perhaps you’re still using one?), you can see where the metaphor originates. When you scroll the wheel toward you, it is as if the little wheel is in contact with a sheet of paper sitting underneath the mouse. You move the wheel toward you, the wheel moves the paper up. It is an easy mechanical model that we can understand based on our embodied relationship to physical things.

This model was translated as-is to the first trackpads — until another kinetic model replaced it: the touchscreen. On a touchscreen, you’re no longer mediating movement with a little wheel; you’re touching the content and moving it around directly. Once this model became sufficiently widespread, it became readily accessible to users of other devices – such as the trackpad – and, eventually, became the default trackpad setting for Apple devices.
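To make the difference between the two mappings concrete, here is a minimal sketch of how a vertical finger movement might be translated into content movement under each metaphor. This is my own illustration, not Apple’s implementation; the function name and sign conventions are invented for the example.

```python
def content_delta(finger_delta_y, model="touchscreen"):
    """Map a vertical finger movement on a trackpad to on-screen content movement.

    finger_delta_y > 0 means the finger moves toward you (toward the near edge
    of the trackpad). The return value is how far the content moves, with
    positive values meaning the content moves down the screen.
    """
    if model == "wheel":
        # Scroll-wheel metaphor: pulling the wheel toward you drags the
        # imaginary sheet of paper under the mouse upward, so content moves up.
        return -finger_delta_y
    if model == "touchscreen":
        # Touchscreen metaphor: the content sits "under your finger" and
        # simply follows it.
        return finger_delta_y
    raise ValueError(f"unknown scroll model: {model}")


# The same downward gesture, two different meanings:
print(content_delta(10, model="wheel"))        # -10: content moves up the screen
print(content_delta(10, model="touchscreen"))  #  10: content follows the finger down
```

Swapping models is just swapping the mapping; the gesture itself never changes.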

Left: Danish electrical plugs, on Wikimedia Commons. Right: US electrical plugs, photo by Andy Fitzgerald.

Here we can see different modes at play. The trackpad isn’t strictly symbolic, nor is it iconic. Its relationship to the action it accomplishes is inferred by our embodied understanding of the physical world. This is signification in the indexical mode.

This thought exercise isn’t purely academic: by basing these kinds of interactions on mechanical models that we understand in a physical and embodied way (we know in our bodies what it is like to touch and move a thing with our fingers), we root them more deeply in our understanding of the physical world. In the case of iconic signification, for example in recognizing faces in power outlets (and even dispositions – compare, for instance, the plug shapes of North America and Denmark), our meaning making is even more deeply rooted in our innate perceptual apparatus.

Practice soft assembly

Attention to modalities is important because the more deeply a pattern or behavior is rooted in embodied experience, the more durable it is – and the more reliably it can be leveraged in design. Stewart Brand’s concept of “pace layers” helps to explain why this is the case. Brand argues in How Buildings Learn that the structures that make up our experienced environment don’t all change at the same rate. Nature is very slow to change; culture changes more quickly. Commerce changes more quickly than the layers below it, and technology changes faster still. Fashion, relative to these other layers, changes most quickly of all.

Illustration: Pace layers, by Preston Grubbs, illustrator at Deloitte Digital.

When we discuss meaning making, our pace layer model doesn’t describe rate of change; it describes depth of embodiment. Remembering the “ctrl-z to undo” function is a learned symbolic (hence arbitrary) association. Touching an icon on a touchscreen (or, indexically, on a trackpad) and sliding it up is an associative embodied action. If we get these actions right, we’ve tapped into a level of meaning making that is rooted in one of the slowest of the physical pace layers: the fundamental way in which we perceive our natural world.

This has obvious advantages for usability. Intuitive interfaces are instantly usable because they capitalize on the operational knowledge we already have. When an interface is completely new, in order to be intuitive it must “borrow” knowledge from another sphere of experience (what Donald Norman in The Psychology of Everyday Things refers to as “knowledge in the world”). The touchscreen interface popularized by the iPhone is a ready example of this. More recently, Nest Protect’s “wave to hush” interaction provides an example that builds on learned physical interactions (waving smoke away from a smoke detector to try to shut it up) in an entirely new but instantly comprehensible way.

An additional and often overlooked advantage of tapping into deep layers of meaning making is that by leveraging more embodied associations, we’re able to design for systems that fit together loosely and in a natural way. By “natural” here, I mean associations that don’t need to be overwritten by an arbitrary, symbolic association in order to signify; associations that are rooted in our experience of the world and in our innate perceptual abilities. Our models become more stable and, ironically, more fluid at once: remapping one’s use of the trackpad is as simple as swapping out one easily accessible mental model (the wheel metaphor) for another (the touchscreen metaphor).

This loose coupling allows for structural alternatives to rigid (and often brittle) complex top-down organizational approaches. In Beyond the Brain, University of Lethbridge Psychology professor Louise Barrett uses this concept of “soft assembly” to explain how in both animals and robots “a whole variety of local control factors effectively exploit specific local (often temporary) conditions, along with the intrinsic dynamics of an animal’s body, to come up with effective behavior ‘on the fly.’” Barrett describes how soft assembly accounts for complex behavior in simple organisms (her examples include ants and pre-microprocessor robots), then extends those examples to show how complex instances of human and animal perception can likewise be explained by taking into account the fundamental constitutive elements of perception.
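To make “soft assembly” a little more tangible, here is a toy sketch in the spirit of the simple light-seeking robots of early cybernetics. It is my own construction rather than an example from Barrett’s book, and the geometry and gains are arbitrary; the point is that coherent light-seeking behavior falls out of two local sensor-motor couplings plus the motion of the body, with no planner anywhere in the loop.

```python
import math

def seek_light(x, y, heading, light, steps=300):
    """Toy light-seeking vehicle: each light sensor drives the opposite wheel,
    so the speed difference between the wheels steers the body toward the
    light. There is no planner, no map, and no stored goal -- the behavior is
    'soft assembled' out of two local sensor-motor couplings and the motion
    of the body itself."""
    lx, ly = light
    closest = math.hypot(x - lx, y - ly)
    for _ in range(steps):
        def brightness(offset):
            sx = x + math.cos(heading + offset)
            sy = y + math.sin(heading + offset)
            return 1.0 / (1.0 + math.hypot(sx - lx, sy - ly))

        left, right = brightness(+0.5), brightness(-0.5)  # two offset sensors
        heading += 4.0 * (left - right)   # crossed wiring turns the body toward the light
        speed = 0.5 * (left + right)
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
        closest = min(closest, math.hypot(x - lx, y - ly))
    return closest

# Starting ten units away and only roughly pointed at a light at (8, 6), the
# vehicle still ends up passing close to it -- without ever computing a route.
print(seek_light(0.0, 0.0, heading=0.0, light=(8.0, 6.0)))
```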

For those of us tasked with designing the architectures and interaction models of networked physical spaces, searching hard for the most fundamental level at which an association is understood (in whatever mode it is most basically communicated), and then articulating that association in a way that exploits the intrinsic dynamics of its environment, allows us to build physical and information structures that don’t have to be held together by force, convention, or rote learning, but which fit together by the nature of their core structures.

Embrace ambiguity

Alexander Galloway claims that the Internet is a different shape than war. I have been making the argument that the Internet of Things is a different shape than textuality. The systems that comprise our increasingly connected physical environments are, for the moment, often broader than we can understand fluidly. Embracing ambiguity – embracing the possibility of not understanding exactly how the pieces fit together, but trusting them to come together because they are rooted in deep layers of perception and understanding – opens up the possibility of designing systems that surpass our expectations of them. “Soft assembling” those systems based on cognitively and perceptually deep patterns and insight ensures that our design decisions aren’t just guesses or luck. In a word, we set the stage for emergence.

Embracing ambiguity is hard. Especially if you’re doing work for someone else – and even more so if that someone else is not a fan of ambiguity (and this includes just about everyone on the client side of the design relationship). By all means, be specific whenever possible. But when you’re designing for a range of contexts, users, and uses – and especially when those design solutions are in an emerging space – specificity can artificially limit a design.

In practical terms, embracing ambiguity means searching for solutions beyond those we can easily identify with language. It means digging for the modes of meaning making that are buried in our physical interactions with the built and the natural environment. This allows us the possibility of designing for systems which fit together with themselves (even though we can’t see all the parts) and with us (even though there’s much about our own behavior we still don’t understand), potentially reshaping the way we perceive and act in our world as a result.

Architecting connected physical environments requires that we think differently about how people think and about how they understand the world we live in. Textuality is a tool – and a powerful tool – but it represents only one way of making sense of information environments. We can develop additional approaches to designing for these environments by thinking beyond text and beyond language. We accomplish this by leveraging our embodied experience (and embodied understanding of the world) in our design process: a combination of considering modalities, practicing soft assembly, and embracing ambiguity will help us move beyond the limits of language and think through ways to architect physical environments that make this new context enriching and empowering.

As we’re quickly discovering, the nature of connected physical environments imbricates the digital and the analog in ways that were unimaginable a decade ago. We’re at a revolutionary information crossroads, one where our symbolic and physical worlds are coming together in an unprecedented way. Our temptation thus far has been to drive ahead with technology and to try to fit all the pieces together with the tried and true methods of literacy and engineering. Accepting that the shape of this new world is not the same as what we have known up until now does not mean we have to give up attempts to shape it to our common good. It does, however, mean that we will have to rethink our approach to “control” in light of the deep structures that form the basis of embodied human experience.


If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter.

January 24 2014

The lingering seduction of the page

In an earlier post in this series, I examined the articulatory relationship between information architecture and user interface design, and argued that the tools that have emerged for constructing information architectures on the web will only get us so far when it comes to expressing information systems across diverse digital touchpoints. Here, I want to look more closely at these traditional web IA tools in order to tease out two things: (1) ways we might rely on these tools moving forward, and (2) ways we’ll need to expand our approach to IA as we design for the Internet of Things.

First stop: the library

The seminal text for Information Architecture as it is practiced in the design of online information environments is Peter Morville’s and Louis Rosenfeld’s Information Architecture for the World Wide Web, affectionately known as “The Polar Bear Book.”

First published in 1998, The Polar Bear Book gave a name and a clear, effective methodology to a set of practices many designers and developers working on the web had already begun to encounter. Morville and Rosenfeld are both trained as professional librarians and were able to draw on this time-tested field in order to sort through many of the new information challenges coming out of the rapidly expanding web.

If we look at IA as two faces of the same coin, The Polar Bear Book focuses on the largely top-down “Internet Librarian” side of information design. The other side of the coin approaches the problems posed by data from the bottom up. In Everything is Miscellaneous: The Power of the New Digital Disorder, David Weinberger argues that the fundamental problem of the “second order” (think “card catalogue”) organization typical of library sciences-informed approaches is that they fail to recognize the key differentiator of digital information: that it can exist in multiple locations at once, without any single location being the “home” position. Weinberger argues that in the “third order” of digital information practices, “understanding is metaknowledge.” For Weinberger, “we understand something when we see how the pieces fit together.”

Successful approaches to organizing electronic data generally make liberal use of both top-down and bottom-up design tactics. Primary navigation (driven by top-down thinking) gives us a bird’s-eye view of the major categories on a website, allowing us to quickly focus on content related to politics, business, entertainment, technology, etc. The “You May Also Like” and “Related Stories” links come from work in the metadata-driven bottom-up space.
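Here is a minimal sketch of that bottom-up mechanism; the story titles and tags are invented for illustration. A “Related Stories” list can fall out of nothing more than shared metadata – no category tree required.

```python
def related_stories(story, catalog, top_n=3):
    """Rank other stories by how many descriptive tags they share with this
    one -- bottom-up association via metadata, with no category tree."""
    def overlap(other):
        return len(set(story["tags"]) & set(other["tags"]))

    candidates = [s for s in catalog if s is not story and overlap(s) > 0]
    return sorted(candidates, key=overlap, reverse=True)[:top_n]


catalog = [
    {"title": "Senate passes budget bill",    "tags": ["politics", "economy"]},
    {"title": "Markets rally on jobs report", "tags": ["economy", "business"]},
    {"title": "New phone unveiled",           "tags": ["technology", "business"]},
    {"title": "Campaign season kicks off",    "tags": ["politics"]},
]

for related in related_stories(catalog[0], catalog):
    print(related["title"])   # the budget story links to the markets and campaign stories
```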

On the web, this textually mediated blend of top-down and bottom-up is usually pretty successful. This is no surprise: the web is, after all, primarily a textual medium. At its core, HTML is a language for marking up text-based documents. It makes them interpretable by machines (browsers) so they can be served up for human consumption. We’ve accommodated images and sounds in this information ecology by marking them up with tags (either by professional indexers or “folksonomically,” by whoever cares to pitch in).

There’s an important point here that often goes without saying: the IA we’ve inherited from the web is textual — it is based on the perception of the world mediated through the technology of writing. Herein lies the limitation of the IA we know from the web as we begin to design for the Internet of Things.

Reading brains

We don’t often think of writing as “technology,” but inasmuch as technology constitutes the explicit modification of techniques and practices in order to solve a problem, writing definitely fits the bill. Language centers can be pinpointed in the brain — these are areas programmed into our genes that allow us to generate spoken language — but in order to read and write, our brains must create new connections not accounted for in our genetic makeup.

In Proust and the Squid, cognitive neuroscientist Maryanne Wolf describes the physical, neurological difference between a reading and writing brain and a pre-literate linguistic brain. Wolf writes that, with the invention of reading “we rearranged the very organization of our brain.” Whereas we learn to speak by being immersed in language, learning to read and write is a painstaking process of instruction, practice, and feedback. Though the two acts are linked by a common practice of language, writing involves a different cognitive process than speaking. It is one that relies on the technology of the written word. This technology is not transmitted through our genes; it is transmitted through our culture.

It is important to understand that writing is not simply a translation of speech. This distinction matters because it has profound consequences. Wolf writes that “the new circuits and pathways that the brain fashions in order to read become the foundation for being able to think in different, innovative ways.” As the ability to read becomes widespread, this new capacity for thinking differently, too, becomes widespread.

Though writing constitutes a major leap past speech in terms of cognitive process, it shares one very important common trait with spoken language: linearity. Writing, like speech, follows a syntagmatic structure in which meaning is constituted by the flow of elements in order — and in which alternate orders often convey alternate meanings.

When it comes to the design of information environments, this linearity is generally a foregone conclusion, a feature of the cognitive landscape which “goes without saying” and is therefore never called into question. Indeed, when we’re dealing primarily with text or text-based media, there is no need to call it into question.

In the case of embodied experience in physical space, however, we natively bring to bear a perceptual apparatus which goes well beyond the linear confines of written and spoken language. When we evaluate an event in our physical environment — a room, a person, a meaningful glance — we do so with a system of perception orders of magnitude more sophisticated than linear narrative. J. J. Gibson describes this as the perceptual awareness resulting from a “flowing array of stimulation.” When we layer on top of that the non-linear nature of dynamic systems, it quickly becomes apparent that despite the immense gains in cognition brought about by the technology of writing, these advances still only partially equip us to adequately navigate immersive, physical connected environments.

The trouble with systems (and why they’re worth it)

Photo: Andy Fitzgerald, of content from Thinking in Systems: A Primer, by Donella Meadows.

I have written elsewhere in more detail about challenges posed to linguistic thinkers by systems. To put all of that in a nutshell, complex systems baffle us because we have a limited capacity to track system-influencing inputs and outputs and system-changing flows. As systems thinking pioneer Donella Meadows characterizes them in her book Thinking in Systems: A Primer, self-organizing, nonlinear, feedback systems are “inherently unpredictable” and “understandable only in the most general way.”
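Meadows’ point can be felt in even a one-line feedback system. The sketch below is my own illustration rather than an example from her book: it iterates the textbook logistic map, and nudging a single feedback parameter takes the same rule from steady, to oscillating, to effectively unpredictable behavior.

```python
def logistic(r, x=0.5, warmup=200, keep=4):
    """Iterate the one-line feedback rule x -> r*x*(1-x) and report where it
    settles. The whole 'system' is a single nonlinear feedback loop."""
    for _ in range(warmup):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 3))
    return tail

print(logistic(2.8))   # settles to a single steady value
print(logistic(3.2))   # flips between two values
print(logistic(3.9))   # never settles: deterministic, yet effectively unpredictable
```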

According to Meadows, we learn to navigate systems by constructing models that approximate a simplified representation of the system’s operation and allow us to navigate it with more or less success. As more and more of our world — our information, our social networks, our devices, and our interactions with all of these — becomes connected, our systems become increasingly difficult (and undesirable) to compartmentalize. They also become less intrinsically reliant on linear textual mediation: our “smart” devices don’t need to translate their messages to each other into English (or French or Japanese) in order to interact.

This is both the great challenge and the great potential of the Internet of Things. We’re beginning to interact with our built information environments not only in a classically signified, textual way, but also in a physical-being-operating-in-the-world kind of way. The text remains — and the need to interact with that textual facet with the tools we’ve honed on the web (i.e. traditional IA) remains. But as the information environments we’re tasked with representing become less textual and more embodied, the tools we use to represent them must likewise evolve beyond our current text-based solutions.

Fumbling toward system literacy

In order to rise to meet this new context, we’re going to need as many semiotic paths as we can find — or create. And in order to do that, we will have to pay close attention to the cognitive support structures that normally “go without saying” in our conceptual models.

This will be hard work. The payoff, however, is potentially revolutionary. The threshold at which we find ourselves is not merely another incremental step in technological advancement. The immersion in dynamic systems that the connected environment foreshadows holds the potential to re-shape the way we think — the way our brains are “wired” — much as reading and writing did. Though mediated by human-made, culturally transmitted technology (e.g. moveable type, or, in this case, Internet protocols), these changes hold the power to affect our core cognitive process, our very capacity to think.

What this kind of “system literacy” might look like is as puzzling to me now as reading and writing must have appeared to pre-literate societies. The potential of being able to grasp how our world is connected in its entirety — people, social systems, economies, trade, climate, mobility, marginalization — is both mesmerizing and terrifying. Mesmerizing because it seems almost magical; terrifying because it hints at how unsophisticated and parochial our current perspective must look from such a vantage point.

As information architects and interface designers, all of this means that we’re going to have to be nimble and creative in the way we approach design for these environments. We’re going to have to cast out beyond the tools and techniques we’re most comfortable with to find workable solutions to new problems of complexity. We aren’t the only ones working on this, but our role is an important one: engineers and users alike look to us to frame the rhetoric and usability of immersive digital spaces. We’re at a major transition in the way we conceive of putting together information environments. Much like Morville and Rosenfeld in 1998, we’re “to some degree all still making it up as we go along.” I don’t pretend to know what a fully developed information architecture for the Internet of Things might look like, but in the spirit of exploration, I’d like to offer a few pointers that might help nudge us in the right direction — a topic I’ll tackle in my next post.

December 13 2013

Architecture, design, and the connected environment

Just when it seems we’re starting to get our heads around the mobile revolution, another design challenge has risen up fiercer and larger right behind it: the Internet of Things. The rise in popularity of “wearables” and the growing activity around NFC and Bluetooth LE technologies are pushing the Internet of Things increasingly closer to the mainstream consumer market. Just as some challenges of mobile computing were pointedly addressed by responsive web design and adaptive content, we must carefully evaluate our approach to integration, implementation, and interface in this emerging context if we hope to see it become an enriching part of people’s daily lives (and not just another source of anger and frustration).

It is with this goal in mind that I would like to offer a series of posts as one starting point for a conversation about user interface design, user experience design, and information architecture for connected environments. I’ll begin by discussing the functional relationship between user interface design and information architecture, and by drawing out some implications of this relationship for user experience as a whole.

In follow-up posts, I’ll discuss the library sciences origins of information architecture as it has been traditionally practiced on the web, and situate this practice in the emerging climate of connected environments. Finally, I’ll wrap up the series by discussing the cognitive challenges that connected systems present and propose some specific measures we can take as designers to make these systems more pleasant, more intuitive, and more enriching to use.

Architecture and Design

Technology pioneer Kevin Ashton is widely credited with coining the term “The Internet of Things.” Ashton characterizes the core of the Internet of Things as the “RFID and sensor technology [that] enables computers to observe, identify, and understand the world — without the limitations of human-entered data.”

About the same time that Ashton gave a name to this emerging confluence of technologies, scholar N. Katherine Hayles noted in How We Became Posthuman that “in the future, the scarce commodity will be the human attention span.” In effect, collecting data is a technology problem that can be solved with efficiency and scale; making that mass of data meaningful to human beings (who evolve on a much different timeline) is an entirely different task.

The twist in this story? Both Ashton and Hayles were formulating these ideas circa 1999. Now, 14 years later, the future they identified is at hand. Bandwidth, processor speed, and memory will soon be up to the task of ensuring the technical end of what has already been imagined, and much more. The challenge before us now as designers is in making sure that this future-turned-present world is not only technically possible, but also practically feasible — in a word, we still need to solve the usability problem.

Fortunately, members of the forward guard in emerging technology have already sprung into action and have begun to outline the specific challenges presented by the connected environment. Designer and strategist Scott Jenson has written and spoken at length about the need for open APIs, flexible cloud solutions, and careful attention to the “pain/value” ratio. Designer and researcher Stephanie Rieger has likewise recently drawn our collective attention to advances in NFC, Android intent sharing, and behavior chaining that all work to tie disparate pieces of technology together.

These challenges, however, lie primarily on the “computer” side of the Human Computer Interaction (HCI) spectrum. As such, they give us only limited insight into how to best accommodate Hayles’ “scarce commodity” – the human attention span. By shifting our point of view from how machines interact with and create information to the way that humans interact with and consume information, we will be better equipped to make the connections necessary to create value for individuals. Understanding the relationship between architecture and design is an important first step in making this shift.

Image by Dan Klyn, used with permission.

Information Architect Dan Klyn explains the difference between architecture and design with a metaphor of tailoring: the architect determines where the cuts should go in the fabric, the designer then brings those pieces together to make the finished product the best it can be, “solving the problems defined in the act of cutting.”

Along the way, the designer may find that some cuts have been misplaced – and should be stitched back together or cut differently from a new piece. Likewise, the architect remains active and engaged in the design phase, making sure each piece fits together in a way that supports the intent of the whole.

The end result – be it a well-fitted pair of skinny jeans or a user interface – is a combination of each of these efforts. As Klyn puts it, the architect specializes in determining what must be built and in determining the overall structure of the finished product; the designer focuses on how to put that product together in a way that is compelling and effective within the constraints of a given context.

Once we make this distinction clear, it becomes equally clear that user interface design is a context-specific articulation of an underlying information architecture. It is this IA foundation that provides the direct connection to how human end users find value in content and functionality. The articulatory relationship between architecture and design creates consistency of experience across diverse platforms and works to communicate the underlying information model we’ve asked users to adopt.

Let’s look at an example. The early Evernote app had a very different look and feel on iOS and Android. On Android, it was a distinctly “Evernote-branded” experience. On iOS, on the other hand, it was designed to look more like a piece of the device operating system.

Evernote screenshots, Android (left) versus iOS.

Despite the fact that these apps are aesthetically different, their architectures are consistent across platforms. As a result, even though the controls are presented in different ways, in different places, and at different levels of granularity, moving between the apps is a cognitively seamless experience for users.
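In code terms, you can picture that consistency as one shared information model articulated by two platform-specific presentations. The sketch below is schematic – the field names and rendering details are invented, not Evernote’s actual model – but it shows how the architecture stays fixed while the design changes.

```python
# The architecture: one shared model of what a "note" is and how notes are
# organized -- independent of any particular screen.
NOTE_MODEL = {
    "fields": ["title", "body", "notebook", "tags"],
    "groupings": ["notebook", "tag"],      # the paths users take back to their notes
    "actions": ["create", "search", "share"],
}

def render_android(model):
    """One context-specific articulation: actions live in a persistent toolbar."""
    return f"Toolbar: {', '.join(model['actions'])} | Drawer: {', '.join(model['groupings'])}"

def render_ios(model):
    """Another articulation of the same architecture: actions sit behind a tab bar."""
    return f"Tab bar: {', '.join(model['actions'])} | Home grid: {', '.join(model['groupings'])}"

# Two different looks, one information model -- moving between them stays
# cognitively seamless because the structure underneath never changes.
print(render_android(NOTE_MODEL))
print(render_ios(NOTE_MODEL))
```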

In fact, apps that “look” the same across different platforms sometimes end up creating architectural inconsistencies that may ultimately confuse users. This is most easily seen in the case of “ported applications,” where iPhone user interfaces are “ported over” whole cloth to Android devices. The result is usually a jumble of misplaced back buttons and errant tab bars that send mixed messages about the effects of native controls and patterns. This, in turn, sends a mixed message about the information model we have proposed. The link between concept-rooted architecture and context-rooted design has been lost.

In the case of such ports, the full implication of the articulatory relationship between information architecture and user interface becomes clear. In these examples, we can see that information architecture always happens: either it happens by design or it happens by default. As designers, we sometimes fool ourselves into thinking that a particular app or website “doesn’t need IA,” but the reality is that information architecture is always present — it’s just that we might have specified it in a page layout instead of a taxonomy tool (and we might not have been paying attention when that happened).

Once we step back from the now familiar user interface design patterns of the last few years and examine the information architecture structures that inform them, we can begin to develop a greater awareness (and control) of how those structures are articulated across devices and contexts. We can also begin to cultivate the conditions necessary for that articulation to happen in terms that make sense to users in the context of new devices and systems, which subsequently increases our ability to capitalize on those devices’ and systems’ unique capabilities.

This basic distinction between architecture and design is not a new idea, but in the context of the Internet of Things, it does present architects and designers with a new set of challenges. In order to get a better sense of what has changed in this new context, it’s worth taking a closer look at how the traditional model of IA for the web works. This is the topic to which I’ll turn in my next post.
