
January 23 2013

Open Metrics: Beyond the Citation Cartel

Many scientists sympathize with open access, the free availability of scholarly publications.

Read more

June 11 2012

Big ethics for big data

As the collection, organization and retention of data has become commonplace in modern business, the ethical implications behind big data have also grown in importance. Who really owns this information? Who is ultimately responsible for maintaining it? What are the privacy issues and obligations? What uses of technology are ethical — or not — when it comes to big data?

These are the questions authors Kord Davis (@kordindex) and Doug Patterson (@dep923) address in "Ethics of Big Data." In the following interview, the two share thoughts about the evolution of the term "big data," ethics in the era of massive information gathering, and the new technologies that raise their concerns for the big data ecosystem.

How do you define "big data"?

Douglas Patterson: The line between big data and plain old data is something that moves with the development of the technology. The new developments in this space make old questions about privacy and other ethical issues far more pressing. What happens when it's possible to know where just about everyone is or what just about everyone watches or reads? From the perspective of business models and processes, "impact" is probably a better way to think about "big" than in terms of current trends in NoSQL platforms, etc.

One useful definition of big data — for those who, like us, don't think it's best to tie it to particular technologies — is that big data is data big enough to raise practical rather than merely theoretical concerns about the effectiveness of anonymization.

Kord Davis: The frequently-cited characteristics "volume, velocity, and variety" are useful landmarks — persistent features such as the size of datasets, the speed at which they can be acquired and queried, and the wide range of formats and file types generating data.

The impact, however, is where ethical issues live. Big data is generating a "forcing function" in our lives through its sheer size and speed. Recently, CNN published a story similar to an example in our book. Twenty-five years ago, our video rental history was deemed private enough that Congress enacted a law to prevent it from being shared in hopes of reducing misuse of the information. Today, millions of people want to share that exact same information with each other. This is a direct example of how big data's forcing function is literally influencing our values.

The influence is a two-way street. Much like the scientific principle that we can't observe a system without changing it, big data can't be used without an impact — it's just too big and fast. Big data can amplify our values, making them much more powerful and influential, especially when they are collected and focused toward a specific desired outcome.

Big data tends to be a broad category. How do you narrow it down?

Douglas Patterson: One way is the anonymization of datasets before they're released publicly, acted on to target advertising, etc. As the legal scholar Paul Ohm puts it, "data can be either useful or perfectly anonymous, but never both."

So, suppose I know things about you in particular: where you've eaten, what you've watched. It's very unlikely that I'm going to end up violating your privacy by releasing the "information" that there's one particular person who likes carne asada and British sitcoms. But if I have that information about 100 million people, patterns emerge that do make it possible to tie data points to particular named, located individuals.
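A toy linkage attack makes Ohm's point concrete. All the data below is invented, but the mechanics, joining a public roll of names to an "anonymized" release on shared quasi-identifiers, mirror the classic re-identification studies:

```python
# Linkage attack sketch: an "anonymized" dataset is re-identified by joining
# it against a public record on quasi-identifiers (ZIP code + birth year).
# All records here are invented for illustration.

# A public voter-style roll: names attached to quasi-identifiers.
public_roll = [
    {"name": "A. Jones", "zip": "97201", "birth_year": 1980},
    {"name": "B. Smith", "zip": "97202", "birth_year": 1975},
]

# An "anonymized" release: names stripped, habits kept.
anonymized = [
    {"zip": "97201", "birth_year": 1980, "likes": "carne asada, British sitcoms"},
    {"zip": "97202", "birth_year": 1975, "likes": "opera, hiking"},
]

def reidentify(roll, released):
    """Match released records back to names via shared quasi-identifiers."""
    matches = []
    for rec in released:
        candidates = [p["name"] for p in roll
                      if p["zip"] == rec["zip"] and p["birth_year"] == rec["birth_year"]]
        # A unique candidate means the "anonymous" record is re-identified.
        if len(candidates) == 1:
            matches.append((candidates[0], rec["likes"]))
    return matches

print(reidentify(public_roll, anonymized))
```

With only two records, every quasi-identifier combination is unique and both records are re-identified; at the scale of 100 million people, enough combinations remain unique for the same trick to work.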

Kord Davis: Another approach is the balance between risk and innovation. Big data represents massive opportunities to benefit business, education, healthcare, government, manufacturing, and many other fields. The risks, however, to personal privacy, the ability to manage our individual reputations and online identities, and what it might mean to lose — or gain — ownership over our personal data are just now becoming topics of discussion, some parts of which naturally generate ethical questions. To take advantage of the benefits big data innovations offer, the practical risks of implementing them need to be understood.

How do ethics apply to big data?

Kord Davis: Big data itself, like all technology, is ethically neutral. The use of big data, however, is not. While the ethics involved are abstract concepts, they can have very real-world implications. The goal is to develop better ways and means to engage in intentional ethical inquiry to inform and align our actions with our values.

There are a significant number of efforts to create a digital "Bill of Rights" for the acceptable use of big data. The White House recently released a blueprint for a Consumer Privacy Bill of Rights. The values it supports include transparency, security, and accountability. The challenge is how to honor those values in everyday actions as we go about the business of doing our work.

Do you anticipate friction between data providers (people) and data aggregators (companies) down the line?

Douglas Patterson: Definitely. For example: you have an accident and you're taken to the hospital unconscious for treatment. Lots of data is generated in the process, and let's suppose it's useful data for developing more effective treatments. Is it obvious that that's your data? It was generated during your treatment, but also with equipment the hospital provided, based on know-how developed over decades in various businesses, universities, and government-linked institutions, all in the course of saving your life. In addition to generating profits, that same data may help save lives down the road. Creating the data was, so to speak, a mutual effort, so it's not obvious that it's your data. But it's also not obvious that the hospital can just do whatever it wants with it. Maybe under the right circumstances, the data could be de-anonymized to reveal what sort of embarrassing thing you were doing when you got hurt, with great damage to your reputation. And giving or selling data down the line to aggregators and businesses that will attempt to profit from it is one thing the hospital might want to do with the data that you might want to prevent — especially if you don't get a percentage.

Questions of ownership, questions about who gets to say what may and may not be done with data, are where the real and difficult issues arise.

Which data technologies raise ethical concerns?

Douglas Patterson: Geolocation is huge — think of the flap over the iPhone's location logging a while back, or how much people differ over whether or not it's creepy to check yourself or a friend into a location on Facebook or Foursquare. Medical data is going to become a bigger and bigger issue as that sector catches up.

Will lots of people wake up someday and ask for a "do over" on how much information they've been giving away via the "frictionless sharing" of social media? As a teacher, I was struck by how little concern my students had about this — contrasted with my parents, who find something pretty awful about broadcasting so much information. The trend seems to be in favor of certain ideas about privacy going the way of the top hat, but trends like that don't always continue.

Kord Davis: The field of predictive analytics has been around for a long time, but big data technologies have made large datasets far more accessible and made it possible to mine and cross-correlate them on commodity hardware and software. The potential benefits are massive. A promising example: longitudinal studies in education can now gather and process far more fine-grained data, and we have no idea what we might learn, which is precisely the point. Being able to assess a more refined population of cohorts may well unlock powerful ways to improve education. Similar opportunities exist in healthcare and agriculture, and in predicting weather more reliably to reduce the damage from catastrophic weather events.

On the other hand, the availability of larger datasets and the ability to process and query against them makes it very tempting for organizations to share and cross-correlate to gain deeper insights. If you think it's difficult to identify values and align them with actions within a single organization, imagine how many organizations the trail of your data exhaust touches in a single day.

Even a simple, singular transaction, such as buying a pair of shoes online, touches your bank, the merchant card processor, the retail or wholesale vendor, the shoe manufacturer, the shipping company, your Internet service provider, the company that runs or manages the ecommerce engine that makes it possible, and every technology infrastructure organization that supports them. That's a lot of opportunity for any single bit of your transaction to be stored, shared, or otherwise misused. Now imagine the data trail for paying your taxes. Or voting — if that ever becomes widely available.

What recent events point to the future impact of big data?

Douglas Patterson: For my money, the biggest impact is in the funding of just about everything on the web by either advertising dollars or investment dollars chasing advertising dollars. Remember when you used to have to pay for software? Now look at what Google will give you for free, all to get your data and show you ads. Or, think of the absolutely pervasive impact of Facebook on the lives of many of its users — there's very little about my social life that hasn't been affected by it.

Down the road there may be more Orwellian or "Minority Report" sorts of things to worry about — maybe we're already dangerously close now. On the positive side again, there will doubtless be some amazing things in medicine that come out of big data. Its impact is only going to get bigger.

Kord Davis: Regime change efforts in the Middle East and the Occupy Movement all took advantage of big data technologies to coordinate and communicate. Each of those social movements shared a deep set of common values, and big data allowed them to coalesce at an unprecedented size, speed, and scale. If there was ever an argument for understanding more about our values and how they inform our actions, those examples are powerful reminders that big data can influence massive changes in our lives.

This interview was edited and condensed.

Ethics of Big Data — This book outlines a framework businesses can use to maintain ethical practices while working with big data.


December 10 2011

HealthTap's growth validates hypotheses about doctors and patients

A major round of funding for HealthTap gave me the opportunity to talk again with founder Ron Gutman, whom I interviewed earlier this year. You can get an overview of HealthTap from that posting or from its own web site. Essentially, HealthTap is a portal for doctors to offer information to patients and potential patients. In this digital age, HealthTap asks, why should a patient have to make an appointment and drive to the clinic just to find out whether her symptoms are probably caused by a recent medication? And why should a doctor repeat the same advice for each patient when the patient can go online for it?

Now, with 6,000 participating physicians and 500 participating health care institutions, HealthTap has revealed two interesting and perhaps unexpected traits about doctors:

  • Doctors will take the time to post information online for free. Many observers, my own earlier posting included, questioned whether they'd take the time to do this. The benefits of posting information are that doctors can demonstrate their expertise, win new patients, and cut down on time spent answering minor questions.

  • Doctors are willing to rate each other. This can be a surprise in a field known for its reluctance to break ranks and doctors' famous unwillingness to testify in malpractice lawsuits. But doctors do make use of the "Agree" button that HealthTap provides to approve postings by other doctors. When they press this button, they add the approved posting to their own web page (Virtual Practice), thus offering useful information to their own patients and others who can find them through search engines and social networks. The "Agree" ratings also cause postings to turn up higher when patients search for information on HealthTap, and help create a “Trust Score” for the doctor.
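The ranking mechanics described above can be sketched in a few lines. HealthTap's actual scoring algorithm is not public; the doctors, postings, and scoring rule below are hypothetical illustrations of the idea that "Agree" counts drive both search ranking and a per-doctor trust score:

```python
# Hypothetical sketch: postings ranked by peer "Agree" counts, and a
# doctor's trust score computed as the sum of agreements on their postings.
# HealthTap's real algorithm is not public; this only illustrates the concept.

postings = [
    {"doctor": "Dr. A", "text": "Tip on fevers", "agrees": 12},
    {"doctor": "Dr. B", "text": "Tip on rashes", "agrees": 30},
    {"doctor": "Dr. A", "text": "Tip on colic",  "agrees": 5},
]

def rank_postings(items):
    # More peer agreements -> higher in patient search results.
    return sorted(items, key=lambda p: p["agrees"], reverse=True)

def trust_scores(items):
    # A doctor's trust score aggregates agreements across their postings.
    scores = {}
    for p in items:
        scores[p["doctor"]] = scores.get(p["doctor"], 0) + p["agrees"]
    return scores

print(rank_postings(postings)[0]["doctor"])  # Dr. B ranks first
print(trust_scores(postings))                # {'Dr. A': 17, 'Dr. B': 30}
```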

HealthTap, Gutman assures me, is not meant to replace doctors' visits, although online chats and other services in the future may allow patients to consult with doctors online. The goals of HealthTap remain the routine provision of information that's easy for doctors to offer online, and making medicine more transparent so patients know their doctors, before treatment and throughout their relationships.

HealthTap has leapt to a new stage with substantial backing from Tim Chang (managing director of Mayfield Fund), Eric Schmidt (through his Innovation Endeavors) and Rowan Chapman (Mohr Davidow Ventures). These VCs provide HealthTap with the funds to bring on board the developers, as well as key product and business development hires, required to scale up its growing operations. These investors also lend the business the expertise of some of the leaders in the health IT industry.

April 22 2011

HealthTap explores how big a community you need to crowdsource health information

A new company named HealthTap has just put together an intriguing combination of crowdsourced health advice and community-building. They rest their business case on the proposition that a battalion of doctors and other health experts--starting at the launch with 580 obstetricians and pediatricians--can provide enough information to help people make intelligent decisions. For me, although the venture is worthy in itself, it offers a model of something that might be even better as a national or international effort.

I had a chance just before the launch to visit the HealthTap office and get a pitch from the boisterous and enthusiastic Ron Gutman. The goal of HealthTap is to help ordinary people--in its first project, pregnant women and new mothers--answer basic questions such as, "What possible conditions match this symptom?" or "What should I do about this problem with a baby?" They don't actually provide health care online, but they provide information directly from doctors on the health issues that concern individuals.

The basic goals of HealthTap's "medical expert network" are:

  • To improve the public's understanding of their own health needs by providing precise, targeted information.

  • To engage patients, making them more interested in taking care of themselves.

  • To allow doctors to share their knowledge with communities, including current and prospective patients, by publicizing answers and tips they often share with patients.

  • To increase the efficiency and effectiveness of healthcare by helping users record personal information while researching their concerns before doctor visits.

  • To promote good doctors and create networks around caring.

HealthTap needs to bring two populations online to succeed: health care providers and individuals (whom I will try to avoid calling patients, because individuals can use health information without suffering from a specific complaint). I'll explain how they attract each population and what they offer these populations. Then I'll explore three challenges suggesting that HealthTap is best seen as a model for a national program: motivation, outreach, and accuracy.

Signing up health care providers

As I mentioned, HealthTap is already demonstrating its viability, boasting 580 obstetricians and pediatricians. These doctors answer questions from patients, creating a searchable database. (And as we'll see in the next section, the customization for individuals goes far beyond search results.) Doctors can also write up tips for individuals. If a doctor finds she is handing out the same sheet to dozens of patients, she might as well put it online where she can refer her own patients and others can also benefit.

Doctors can be followed by patients and other doctors, an application of classic social networking. Doctors can also recommend other doctors. As with cheat sheets, doctor recommendations reflect ordinary real-life activities. Each doctor has certain specialists she recommends, and by doing it on HealthTap she can raise the general rating of these specialists.

Signing up individuals

Anyone can get a HealthTap account. Although you don't need to answer many questions, the power of customization provides an incentive to fill in as much information about you as you can. The pay-off is that when you search for a symptom or other issue--for instance, "moderate fever"--you will get results tailored to your medical condition, your age, the state of your pregnancy, and so on. A demo I saw at the HealthTap office suggested that the information brought up by a search there is impressively relevant, unlike a typical search for symptoms on Google or Bing.

HealthTap then goes on to ask a few relevant questions to help refine the information it provides even further. For a fever, it may ask what your temperature is and whether you feel pain anywhere. As explained earlier, it doesn't end up giving medical advice, but it does list the percentage match between symptoms reported and a list of possible conditions in its comprehensive database. All these heuristics--the questions, the list of conditions, the probabilities, the other suggestions--derive from the information entered by the providers in the HealthTap network and data published in peer-reviewed medical journals.

Some natural language processing is supported, letting HealthTap interpret questions such as "Should I eat fish?" Gutman ascribed their capabilities to a comprehensive medical knowledgebase built around a medical ontology that is designed especially for use by the lay public, coupled with a Bayesian (probabilistic) reasoning engine driving user interactions that lead to normative value-based choices.
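A minimal sketch shows how a Bayesian engine can turn reported symptoms into the kind of percentage match described above. The priors and likelihoods here are invented, and HealthTap's actual knowledge base and reasoning engine are far richer; this only illustrates the probabilistic mechanics:

```python
# Naive Bayesian symptom matching: P(condition | symptoms) is proportional to
# P(condition) * product of P(symptom | condition). All numbers are invented.

priors = {"flu": 0.05, "common cold": 0.20}
likelihoods = {
    "flu":         {"fever": 0.9, "cough": 0.8},
    "common cold": {"fever": 0.3, "cough": 0.7},
}

def posterior(symptoms):
    """Return normalized condition probabilities given reported symptoms."""
    unnormalized = {}
    for cond, prior in priors.items():
        p = prior
        for s in symptoms:
            # Small default probability for symptoms not in the table.
            p *= likelihoods[cond].get(s, 0.01)
        unnormalized[cond] = p
    total = sum(unnormalized.values())
    return {c: p / total for c, p in unnormalized.items()}

print(posterior(["fever", "cough"]))
```

The rarer condition (flu) matches the symptoms better per case, but the cold's much higher prior keeps it the more probable diagnosis, which is exactly the kind of trade-off a probabilistic engine captures and a raw keyword search does not.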

HealthTap lets individuals follow the doctors whose advice they find helpful, and connect to individuals with medical conditions like theirs. Thus, HealthTap incorporates two of the key traits of social networks: forming communities and raising the visibility of doctors who provide useful information.


HealthTap uses easy-to-understand data visualization techniques that distill medical content into simple visual elements that intuitively communicate data that otherwise could seem complex (such as the symptoms, risk factors, tests, and treatments related to a condition).

So now let's look at the challenges HealthTap faces.

I'll mention at the outset that HealthTap doesn't have as much of a problem of trust as most social networks. Doctors have passed a high threshold and start out with the expectation of competence and commitment to their patients. This isn't always borne out by individual doctors, of course, but if you put 580 of them together you'll end up with a good representation of the current state of medical knowledge.

The challenge of motivation

Time will tell whether HealthTap can continue to attract physicians. Every posting by a physician is an advertising opportunity, and the chances of winning new patients--which can be done both by offering insights and by having colleagues recommend you--may motivate doctors to join and stay active. But I suspect that the most competent and effective doctors already have more patients banging down the door than they can handle, so they might stay away from HealthTap. Again, I like the HealthTap model but wish it could be implemented on a more universal scale.

The challenge of outreach

Pregnancy and childbirth was a clever choice for HealthTap's launch. Each trimester--not to mention the arrival of a newborn--feels like a different phase of life, and parents are always checking other people for insights. Still, the people who sign up for HealthTap are the familiar empowered patients, the ones with time and education, those who take the effort to take care of themselves and are likely to be big consumers of information in other formats such as Wikipedia (which has quite high-quality health information), books, courses, and various experts in fields related to birth. HealthTap will have more impact on health if doctors can persuade other people to sign up--those who don't eat right, who consume dangerous stimulants, etc.

And if HealthTap or something like it were a national program--if everybody was automatically signed up for a HealthTap account--we might have an impact on people who desperately need a stronger connection with their doctor and encouragement to lead healthier lives. The psychiatric patients who go off their meds, the diabetics with bad diets, the people with infections who fail to return for health checks--these could really benefit from the intensive online community HealthTap provides.

The challenge of accuracy

Considering that most people depend on the advice of one or two doctors for each condition treated, the combination of insights from 580 doctors should improve accuracy a lot. Still, clinical decision engines are a complex field. It seems to me that a service like HealthTap would be much more trustworthy if it represented a national crowdsourcing effort. Every doctor could benefit from a clinical decision support system, and the current US health care reform includes the use of such systems. I'd like to see a large one that HealthTap could tap into.

Gutman says that HealthTap has indeed started to work with other institutions and innovators. "We have worked with the U.S. Department of Health and Human Services to help make the public health data they released easy to access and useful for developers, and last year we held the first-ever 'health hackathon,' where hundreds gathered to build apps using the newly released health data we organized. We can become a platform for a world of health apps tied to our growing, open data, created by leading physicians and validated research."

April 06 2011

On the Internet, you can hire someone to ensure nobody knows you're a dog

Last Sunday, the New York Times published an article about reputation management that I found surprisingly disturbing. Nick Bilton's reporting of this social trend was excellent. People are coming face to face with their pasts in unwanted ways: pictures of ex-spouses upsetting new relationships, stupid things you did in your youth, abuse from trolls, etc. It's a familiar story to anyone who's used the web since its early days. We used to say "On the Internet, nobody knows you're a dog" (following the famous New Yorker cartoon), but it's more correct to say that the Internet is like the proverbial elephant that never forgets. And, more often than not, it knows exactly what kind of a dog you are.

That's old news. We live with it, and the discomfort it causes. What concerns me isn't the problem of damaged reputations, or of reputation management, but of the techniques reputation consultancies are using to undo the damage. What Bilton describes is essentially search engine optimization (SEO), applied to the reputation problem. The tricks used range from the legitimate to the distasteful. I don't mind creating websites to propagate "good" images rather than bad ones, but creating lots of "dummy" sites that link to your approved content as a way of gaining PageRank is certainly "black hat" SEO. Not the worst black hat SEO, but certainly something I'm uncomfortable with. And I'm fairly confident that much more serious SEO abuses are taking place in the back room, all in the service of "reputation management."

On Twitter, one of my followers (@mglauber) made the comment "Right to be forgotten?" I agree, there's a "Right to Be Forgotten." And there's a "Right to Outlive Stupid Stuff You Did in the Past." But the solution is not defacing the public record with black hat SEO tricks. For one thing, that's a cosmetic solution, but someone really out to get dirt will have no trouble finding it. More important, I really dislike the casual acceptance of black hat tricks just because they're deployed in a cause to which most of us are sympathetic.

The real solution is doing something that humans are much better at than computers: forgetting. Your new significant other doesn't like Facebook pictures of you and your previous spouse? Get over it. That's how life is lived now. A potential employer is nervous about dumb stuff you did in high school? Get over it. (And if you're in high school or college, think a bit before you post.) It's true that the Internet never forgets, but is anything ever truly forgotten? There are always old wedding pictures in family albums and newspapers, and pranks recalled by old roommates who are more willing to talk than they should be. The Internet hasn't invented the skeleton in the closet, it's only made it easier to take the skeleton out. That doesn't mean that humans can't be mature.





March 01 2011

Dusting for device fingerprints

In a previous Strata Week post, I wrote about BlueCava, an Orange County, Calif.-based company that has patented a way of identifying the unique fingerprint of any electronic device connected to the Internet. Last October, they closed a $5 million round of series-A funding led by Mark Cuban.

Recently, BlueCava announced the formation of an advisory board, which includes executives from Facebook, MasterCard, HP, FirstData, Bill to Mobile, and Merchant Warehouse. I caught up with CEO David L. Norris to discuss device identification, reputation technology, online fraud, and consumer privacy.

Our interview follows.


Tell me a bit more about what BlueCava does and how it works.

David Norris: BlueCava provides a platform that enables businesses to identify devices that are coming to their website. First, we identify the device and then we provide additional information about the device that would be useful to our customers in making decisions about how to interact with that device. One application is finding fraud. Another interesting area is social networking sites: a site may choose not to allow certain users to participate if they have a history of trollish behavior.

As we identify devices, we build information about each device. One of the things we can tell about a device is if it's a shared computer being used by multiple users. We can also determine the specific level of use — whether it's a household computer in the kitchen with a handful of users or an Internet cafe computer with hundreds of users.
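Device identification along these lines can be sketched as hashing a set of observable attributes into a stable identifier. BlueCava's patented method is not public, so the attributes and hashing scheme below are assumptions for illustration only:

```python
# Hypothetical device-fingerprint sketch: hash observable browser/device
# attributes into a stable ID. Nothing is stored on the client, which is
# why this works without cookies. BlueCava's actual technique is proprietary.
import hashlib

def fingerprint(attrs: dict) -> str:
    # Sort keys so the same attributes always hash to the same ID.
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device = {
    "user_agent": "Mozilla/5.0 (Windows NT 6.1)",
    "screen": "1920x1080",
    "timezone": "UTC-8",
    "fonts": "Arial,Times,Verdana",
}

# The same attributes yield the same fingerprint, even after cookies are cleared.
assert fingerprint(device) == fingerprint(dict(device))
print(fingerprint(device))
```

A real system would have to tolerate attribute drift (a browser upgrade changes the user agent), which is presumably where the "adaptive" matching Norris mentions comes in; a bare hash like this one breaks on any change.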

It sounds like BlueCava is largely used to identify negative behavior. Can the technology also be used to identify devices or users with a positive history?

David Norris: In some ways it's better to identify a good device rather than the bad ones — it's much harder to mimic or fake a "clean" machine that has no history. So we've taken on the task of identifying a broad set of devices. This year, we'll identify more than 1 billion devices.

From there, through the partnerships we've signed up, we're going to assign direct financial benefits to those with a positive history, such as discounts and rewards. We'll be announcing further details soon. For site managers, if you have a historical reputation that's good, there's an opportunity to reduce some of the costs associated with interacting with you, like performing extensive background checks. So they can afford to pass some of those savings on to users who merit it.

What about the privacy issues associated with device identification?

David Norris: We do not collect any personal information. We don't collect Social Security numbers or email addresses. We identify devices and we characterize a device's behavior. For devices with GPS receivers built in, we collect information at a ZIP-code level, not a granular level. That would be a violation of privacy.

We've also implemented what the FTC is calling "do not track," so users can either opt out or set their preferences when it comes to online marketing.

There's a difference between being identified and being tracked: if you turn tracking off, we can still identify a device but we don't keep track of which websites it's been to.

Since no system is perfect, what are the remedies available to users whose devices or histories are misidentified?

David Norris: If a question comes up about a particular device, the user can go to the merchant or site owner, who can then escalate the issue in a review queue. It becomes a human process at that point.

So BlueCava is not making direct recommendations about user accounts?

David Norris: We're very careful not to position ourselves as a fraud solution. We are a tech company that can be part of an existing fraud solution, but device identification is only part of the story. We're gathering information that's already available and has been used for years by other companies. What we're doing differently is using it in a unique way.

Imagine that you're a store owner, and one day someone walks into your store and then walks out. The next day, they walk in again, and you recognize them. You'd do that naturally based on hair color, eye color, the shape of their face, etc. And you could recognize them even if they were wearing a different shirt, because you know that their shirt can change but their face won't. We do the same thing with devices. Our technology is adaptive, and allows for change to occur. But it's up to each individual client how to use that information.

Some users may find this kind of device identification intimidating because it seems like "magical spying." What would you say to them?

David Norris: Cookies used to seem magical too. But then people got used to the idea of them.

Our technology, I believe, will replace cookies eventually. It just observes your machine instead of reaching into it and dropping something there.

Device identification is an improvement over cookies in part because if you choose to opt out, you're opted out and that's it. If you opt out using cookies, the system actually drops an opt-out cookie on your machine — if you clear your cookies, then you're opted back in! Also, you have to opt out on multiple browsers. From a device identification standpoint, it's much cleaner: you opt out once and it's done.
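The contrast Norris draws can be sketched directly. Both models and all identifiers below are invented for illustration:

```python
# Cookie-based vs. device-based opt-out, as contrasted above.
# Cookie model: the opt-out preference lives in the browser's cookie jar,
# so clearing cookies silently opts the user back in.
cookies = {"optout": "1"}

def tracked_cookie_model(jar):
    return jar.get("optout") != "1"

assert tracked_cookie_model(cookies) is False  # opted out
cookies.clear()                                # user clears their cookies...
assert tracked_cookie_model(cookies) is True   # ...and is opted back in!

# Device model: the opt-out lives in a server-side registry keyed by the
# device identifier, so nothing the user clears locally can undo it.
optout_registry = {"device-abc123"}

def tracked_device_model(device_id):
    return device_id not in optout_registry

assert tracked_device_model("device-abc123") is False  # survives cookie clearing
```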

This interview was edited and condensed.


November 23 2009

More that sociologist Erving Goffman could tell us about social networking and Internet identity

After posting some thoughts a month ago about Erving Goffman's classic sociological text, The Presentation of Self in Everyday Life, I heard from a reader who urged me to try out a deeper work of Goffman's, Frame Analysis (Harper Colophon, 1974). This blog presents the thoughts that came to mind as I made my way through that long and rambling work.

Let me start by shining a light on an odd phenomenon we've all experienced online. Lots of people on mailing lists, forums, and social networks react with great alarm when they witness heated arguments. This reaction, in my opinion, stems from an ingrained defense mechanism whose intensity verges on the physiological. We've all learned, from our first forays to the playground as children, that rough words easily escalate to blows. So we react to these words in ways to protect ourselves and others.

Rationally, this defense mechanism wouldn't justify intervening in an online argument. The people arguing could well be on separate continents, and have close to zero chance of approaching each other for battle before they cool down.

When asked why forum participants insert themselves between the
fighters--just as they would in a real-life brawl--they usually say,
"It's because I'm afraid of allowing a precedent to be set on this
forum; I might be attacked the same way." But this still raises the
question of what's wrong with an online argument in the first place:
no forum member is likely to be a victim of actual violence.

We can apply Goffman's frame analysis to explain the forum members'
distress. It's what he calls a keying: we automatically apply
the lessons of real-life experiences to artificial ones. Keying allows
us to invest artificial circumstances--plays, ceremonies, court
appearances, you name it--with added meaning.

Human beings instinctively apply keyings. When we see a movie
character enter a victim's home carrying a gun, we forget we're
watching a performance and feel some of the same tightness in our
chest that we would feel had it been ourselves someone was stalking.

Naturally, any person of normal mental capacity can recognize the
difference between reality and an artificial re-enactment. We suspend
disbelief when we watch a play, reacting emotionally to the actors as
if they were real people going about their lives, but we don't
intervene when one tries to run another through with a knife, as we
would (one hopes) in real life.

Why do some people jump eagerly into online disputes, while others
plead with them to hold back? This is because, I think, disputes are
framed by different participants in different ways. Yes, some people
attack others in the hope of driving them entirely off the list; their
words are truly aimed at crushing the other. But many people just see
a healthy exchange of views where others see acts of dangerous
aggression. Goffman even had a term for the urge to flee taken up by
some people when they find that actions go too far: flooding out.

I should meekly acknowledge here that I play Nice Guy when I post to
online forums: I respect other people for their positions, seek common
ground, etc. I recognize that forums lose members when hotheads are
free to roam and toss verbal bombs, but I think forums may also lose a
dimension by suppressing the hotheads, who often have valid points and
a drive to aid the cause. One could instead announce a policy that
those who wish to flame can do so, and those who wish to ignore them
are also free to do so.

How much of Goffman's sprawling 575-page text applies online? Many
framing devices that he explored in real life simply don't exist on
digital networks. For instance, forums rarely have beginnings and
endings, which are central to framing for Goffman. People just log in
and start posting, experiencing whatever has been happening in the
meantime.

And as we've heard a million times, one can't use clothing, physical
appearance, facial expressions, and gestures to help evaluate online
text. Of course, we have graphics, audio, and video on the Internet
now as well, but they are often used for one-way consumption rather
than rapid interaction. A lot of online interaction is still carried
on in plain text. So authors toss in smileys such as :-) and other
emoticons. But these don't fill the expressiveness gap because they
must be explicitly included in text, and therefore just substitute for
things the author wanted to say in words. What helps make
face-to-face interactions richer than text interactions is the
constant stream of unconscious vocal and physical signals that we
(often unconsciously) monitor.

So I imagine that, if Goffman returned to add coverage of the Internet
to Frame Analysis, it would form a very short appendix
(although he could be insufferably long-winded). Still, his analyses
of daily life and of performances bring up interesting points that
apply online.

The online forums are so new that we approach them differently from
real-life situations. We have fewer expectations with which to frame
our interactions. We know that we can't base our assumptions on
framing circumstances, such as when we strike up a conversation with
someone we've just met by commenting on the weather or on a dinner
speaker we both heard.

Instead, we frame our interactions explicitly, automatically providing
more context. For instance, when we respond to email, we quote the
original emails in our response (sometimes excessively).

And we judge everybody differently because we know that they choose
what they say carefully. We fully expect the distorted appearances
described in the Boston Globe article "My profile, myself,"
subtitled "Why must I and everyone else on Facebook be so insufferably
happy?" We wouldn't expect to hear about someone's drug problem or
intestinal upset or sexless marriage on Facebook, any more than we'd
expect to hear it when we're sitting with them on a crowded beach.

Goffman points out that the presence of witnesses is a frame in
itself, changing any interaction between two people. This definitely
carries over online, where people do more and more of their posting
on a friend's Facebook Wall (a stream of updates visible to all their
other friends) instead of engaging in private chats.

But while explaining our loss of traditional frames, I shouldn't leave
the impression that nothing takes their place. The online medium has
powerful frames all its own. Thus, each forum is a self-contained
unit. In real-life we can break out of frames, such as when actors
leave the stage and mingle with audience members. This can't happen
within the rigidity of online technology.

It can be interesting to meet the same person on two different forums.
The sometimes subtle differences between forums affect how that person
presents himself or herself on each one. People may post the same
message to different forums, but that's often a poor practice that
violates the frames of one or more forums. So if they copy a posting,
they usually precede it with some framing text to make it appropriate
for a particular forum.

Online forums also set up their own frames within themselves, and
these frames can be violated. Thus, a member may start a discussion
thread with the title "Site for new school," but it may quickly turn
into complaints about the mayor or arguments about noise in the
neighborhood. This breaks the frame, and people may go on for some
time posting all manner of comments under the "Site for new school"
heading until they are persuaded to start a new thread or take the
arguments elsewhere.

A frame, for Goffman, is an extremely broad concept (which I believe
weakens its value). Any assumption underlying an interaction can be
considered part of the frame. For instance, participants on forums
dedicated to social or technical interactions often ask whether it's
considered acceptable to post job listings or commercial offerings. In
other words, do forum participants impose a noncommercial mandate as
part of the frame?

A bit of history here can help newer Internet denizens understand
where this frame comes from. When the Internet began, everything was
run over wires owned by the federal government, the NSFNET Backbone
Network. All communication on the backbone was required to be
noncommercial, a regulation reinforced by the ivory-tower idealism of
many participants. Many years after private companies added new lines
and carried on their business over the Internet, some USENET forums
would react nastily to any posting with a hint of a commercial
message.

Although tedious--despite the amusing anecdotes--my read of Frame
Analysis was useful because I realized how much of our lives is
lived in context (that is, framed), and how adrift we are when we are
deprived of those frames in today's online environments--cognitively
we know we are deprived, but we don't fully accept its implications.
Conversely, I think that human beings crave context, community, and
references. So the moment we go online, we start to recreate those
things. Whether we're on a simple mailing list or a rich 3D virtual
reality site, we need to explicitly recreate context, community, and
references. It's worth looking for the tools to do so, wherever we
land online.
