
February 11 2011

The Locker Project: data for the people

Singly, a new company that made an appearance at the Strata Conference Startup showcase, exists to provide oxygen and commercial support to the open source Locker Project, and the new protocol TeleHash.

With some wonderful serendipity I met Singly on my first night at Strata. The next day, I talked in depth to Jeremie Miller and Simon Murtha-Smith, two of the three Singly co-founders (see later in this post). I also had the opportunity to ask Tim O'Reilly and Roger Magoulas for some of their thoughts on the significance of this project (see below for their comments).

It was a real "pinch myself in case I need to wake up from a dream" experience for me, to stumble across Jeremie Miller with Simon Murtha-Smith sitting behind a handwritten sign demoing Singly at Strata (see my pic opening this post). As Marshall Kirkpatrick noted at ReadWriteWeb:

Jeremie Miller is a revered figure among developers, best known for building XMPP, the open source protocol that powers most of the Instant Messaging apps in the world. Now Miller has raised funds and is building a team that will develop software aimed directly at the future of the web.

Singly, by giving people the ability to do things with their own data, has the potential to change our world. And, as Kirkpatrick notes, this won't be the first time Jeremie has done that.

I was drawn over to the Singly table when an awesome app they were demonstrating caught my eye. Fizz, an application from Bloom, was running on a locker with data aggregated from three different places.

Fizz is an intriguing early manifestation of capabilities never seen before on the web. It provides the ability for us to control, aggregate, share and play with our own data streams, and bring together the bits and pieces of our digital selves scattered about the web (for more about Bloom and Singly, see Tim O'Reilly's comments below). The picture below is my Fizz. The large circles represent people and the small circles represent their status updates. Bloom explained the functionality in a company blog post:

Clicking a circle will reveal its contents. Typing in the search box will highlight matching statuses. This is an early preview of our work and we'll be adding more features in the next few weeks. We'd love to hear your feedback and suggestions.
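To make that description concrete, here is a toy sketch (mine, not Bloom's code) of the layout Fizz uses: one large circle per person, small circles around it for that person's status updates, and search matches highlighted. The data and styling are invented for illustration.

    import math
    import matplotlib.pyplot as plt

    # Invented sample data: person -> status updates.
    people = {
        "alice": ["at strata today", "demoing our locker"],
        "bob": ["new blog post", "coffee break", "strata was great"],
    }

    def draw(query):
        """Large circle per person, small circles per status; matches glow."""
        fig, ax = plt.subplots()
        for i, (name, statuses) in enumerate(people.items()):
            cx = i * 4.0
            ax.add_patch(plt.Circle((cx, 0), 1.0, fill=False))  # the person
            ax.annotate(name, (cx, 0), ha="center")
            for j, status in enumerate(statuses):
                angle = 2 * math.pi * j / len(statuses)
                sx, sy = cx + 1.6 * math.cos(angle), 1.6 * math.sin(angle)
                hit = query.lower() in status.lower()
                ax.add_patch(plt.Circle((sx, sy), 0.3,
                                        color="orange" if hit else "lightgray"))
        ax.set_xlim(-2.5, 6.5)
        ax.set_ylim(-3, 3)
        ax.set_aspect("equal")
        plt.show()

    draw("strata")  # highlights the two statuses mentioning Strata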

If you are not already familiar with the Bloom team — Ben Cerveny, Tom Carden, and Jesper Sparre Andersen — go directly to their about page and you will understand why the match of Bloom and the Locker Project is a cause for great delight.

The Locker Project: A whole new way to connect from the protocol up

Singly, the Locker Project, and TeleHash take on some of the holy grails of the next generation of networked communications and deliver an elegant, open solution. I have written on, and been nibbling at the edges of, some of these grails in various projects myself for quite a while now. A glance at the monster mash of my pre-Strata post will give you an idea of how important I think Singly is.

That previous post raised the question of how to invert the search pyramid and to transform search into a social, democratic act. But if you are really interested in social search, I suggest staying keyed into what Singly is doing with the Locker Project.

One of Singly's three founders, Simon Murtha-Smith, was building a company called Introspectr, a social aggregator and search product. Another founder, Jason Cavnar, was working on a similar project. They came together as Singly because social aggregation and search is a very hard problem for one company to solve; they realized that the basic infrastructure needs to be open source and built on an open protocol.

To me, what is so important about the Locker Project is that it is built on a new open protocol, TeleHash. And having the Singly team focused on supplying tools and the trust/security layer for the Locker Project will mean that developers will have the whole stack.

I asked Miller to explain the relationship between TeleHash, the Locker Project and Singly.

Tish Shute: What is TeleHash?

Jeremie Miller: It's a peer-to-peer protocol to move bits of data for applications around. Not file sharing, but it's for actual applications to find each other and connect. So if you had an app and I had an app, whenever we're running that app on our devices, we can actually find those other devices from each other and then connect. Our applications can connect and do something. TeleHash is actually what has led to the Locker project itself.

Tish Shute: So TeleHash led to the Locker Project and the Locker Project led to Singly?

Jeremie Miller: Singly is a company that is sponsoring the open source Locker Project.

TeleHash is a protocol that lets the lockers connect with each other and share things. The locker is like all of your data. It's sort of like a digital person.
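Since TeleHash was brand new at the time, a concrete picture may help. The early design moved small JSON messages ("telexes") over UDP so that applications could find each other through a distributed hash table. Here is a minimal sketch of that idea; the field names follow the early TeleHash writeups, but the details are my assumptions, not the official protocol.

    import hashlib
    import json
    import socket

    def end_hash(name: bytes) -> str:
        # TeleHash identifies peers and topics by SHA-1 hash, called an "end".
        return hashlib.sha1(name).hexdigest()

    def send_telex(sock, addr, telex):
        # A telex is just a small JSON object fired at a peer over UDP.
        sock.sendto(json.dumps(telex).encode("utf-8"), addr)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    seed = ("127.0.0.1", 42424)  # hypothetical seed switch

    # Ask the seed which switches are closest to the end we care about;
    # peers would answer with a ".see" list of addresses to try next.
    send_telex(sock, seed, {"+end": end_hash(b"my-locker"), ".see": []})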

Singly interview

Left to right: Jeremie Miller, Jason Cavnar, Simon Murtha-Smith. I took the pic above of all three founders being interviewed by Marshall Kirkpatrick of ReadWriteWeb. I think we will look back on this moment and say it was an inflection point for the web. At least I tweeted that!


Tish Shute: A locker stores bits and pieces of your digital self?

Jeremie Miller: Yes. So TeleHash lets the lockers directly peer-to-peer connect with each other and share things. Singly, as a company, is going to be hosting lockers first and foremost. But the Locker Project is an open source project. You can have a locker in your machine or you can install it wherever you want.

Tish Shute: Will Singly provide the trust layer and hosting?

Jeremie Miller: Yeah. Singly is a company that will host lockers. And when people build applications that run inside your locker or use your data, you need to be able to trust them. Maybe initially it's social data that you don't care that much about, but once you add browsing history, health data, running logs, or sleep information, it's important to be careful about what you're running inside your locker and what you're sharing.

So Singly will also look at the applications that are available for you to install, actually run them, and look at what data they access. It will then be able to come back and either certify or vouch for them.

I hope in the long-run, as this grows and builds, that power users may actually be able to buy a small device that they can plug into their home network and that would be their locker. Wouldn't that be cool? This little hard drive that you plug in.

Tish Shute: Architecturally, are TeleHash and the Locker Project related to your work on XMPP?

Jeremie Miller: XMPP and Jabber were designed for the specific purpose of instant messaging, but it was still a federated model in that you still had to go through a central point. It was designed with that in mind — for the communication path to be routed through somewhere. Where I've evolved is that I'm fascinated with truly distributed protocols that are completely decentralized, so that things go peer-to-peer instead of going through any server.

Peer-to-peer has gotten a pretty bad rap over the last 10 years because of file sharing, but the potential for it is awesome. There's so many really good things that can be done with peer-to-peer, and it hasn't gotten used much.

But the other side of the peer-to-peer thing that I think is critically important is the explosion of computing devices around an individual — both in the home and on our person. I look at my home network router and I've got 30 devices in my house on Wi-Fi. That's a lot of devices.

But right now, to work with those devices I'm almost always going through a server somewhere, or through a data center somewhere. That's ridiculous.

Tish Shute: So we need a peer-to-peer network just to manage our own devices?

Jeremie Miller: A peer-to-peer network, yes. My phone should be talking straight to my computer, or to the iPad, or to the washing machine, or to the refrigerator. The applications in my TV should all be talking peer-to-peer. And it should be easy to do that. It shouldn't be that the only way you can do that is to go through a data center somewhere.

Note: Updates to the Locker Project will be posted through @lockerproject and GitHub

The Locker Project is not just "one more rebel army trying to undo these big data aggregations"

I discussed the impact TeleHash, the Locker Project and Singly might have on social network incumbents with Roger Magoulas and Tim O'Reilly. Both had insightful comments.

Magoulas pointed out:

I think Singly has Facebook-like aspects, but I think a better description is an app platform that integrates your personal and social network data — including data from Facebook.

Singly is likely to have challenges with some of their data sources, particularly if it gains traction with users. I like the app platform business model, although they face risks getting critical mass and app developer attention. I also like how they plan on using open source connectors to keep up with changing social network platforms.

Jeremie [Miller] has credibility with the open source community and is likely to find cooperating developers. The team seems to bring complementary strengths to the project and you can tell they all work well together.

Tim O'Reilly elaborated on the potential of this platform to bring something new to the ecosystem. Our interview follows:

Tish Shute: Will the Locker Project be able to break the lock of big sites, like Facebook, controlling everyone's data? Sometimes I feel we are stuck in the era of Zyngification, where you have to do what Zynga did and leverage the system in order to gain traction or do anything with social data.

Tim O'Reilly: I don't think breaking the Facebook lock is the objective of the Locker Project. The value of Facebook is having your data there with other people's data. What Singly may be able to do is give people better tools for managing their data. If you can take the data from various sites and manage it yourself, then you can potentially make better decisions about what you're going to allow and not allow. Right now, the interfaces on a lot of these sites make it difficult to understand the implications of making your data available.

If this is done right, it will create a marketplace where people will build interfaces that provide more control over personal data. People will still want to put data on sites for the same reason you put money in the bank: it's more valuable when it's combined with other people's money.

To conceive of this project as one more rebel army trying to undo these big data aggregations is just the wrong way to frame it.

Tish Shute: Framing the question the way you just did — that this is not just one more rebel army — might mean the stage at Strata will be filled with new startups next year. That's what I thought when I found out what the Locker Project and Singly are about: that we're about to see an explosion of creativity with personal and social data.

Tim O'Reilly: The tools we have now are pretty primitive. If we get a better set of tools, I think we'll see a lot of innovation. Some of those startups might be acquired by Facebook or Google, but if those smaller companies give people better visibility and control over their data, that's a good thing.

Tish Shute: I loved the marriage between Singly and Bloom [mentioned above]. It's interesting because Ben Cerveny and the Bloom team haven't really talked a lot about Bloom yet. I gather Bloom is moving toward consumer-facing work with data?

Tim O'Reilly: People think of data visualization as output, and the insight that I think Ben has had with Bloom is that data visualization will become a means of input and control.

I've started to feel that visualization as a way of making sense of complex data is kind of a dead-end. What you really want to do is build feedback loops where people can actually figure something out. Being able to manipulate data in real-time is an important shift. Data visualizations would then become interfaces rather than reports.

This post was edited and condensed. A longer version, featuring additional interviews and analysis, is available at UgoTrade.

April 06 2010

DC Circuit court rules in Comcast case, leaves the FCC a job to do

Today's ruling in Comcast v. FCC will certainly change the
terms of debate over network neutrality, but the win for Comcast is
not as far-reaching as headlines make it appear. The DC Circuit court
didn't say, "You folks at the Federal Communications Commission have
no right to tell any Internet provider what to do without
Congressional approval." It said, rather, "You folks at the FCC didn't
make good arguments to prove that your rights extend to stopping
Comcast's particular behavior."

I am not a lawyer, but to say what happens next will take less of a
lawyer than a fortune-teller. I wouldn't presume to say whether the
FCC can fight Comcast again over the BitTorrent issue. But the court
left it open for the FCC to try other actions to enforce rules on
Internet operators. Ultimately, I think the FCC should take a hint
from the court and stop trying to regulate the actions of telephone
and cable companies at the IP layer. The hint is to regulate them at
the level where the FCC has more authority--on the physical level,
where telephone companies are regulated as common carriers and cable
companies have requirements to the public as well.

The court noted (on pages 30 through 34 of its order,
http://pacer.cadc.uscourts.gov/common/opinions/201004/08-1291-1238302.pdf)
that the FCC missed out on the chance to make certain arguments that
the court might have looked on more favorably.
Personally, and as an amateur, I think those arguments would be weak
anyway. For instance, the FCC has the right to regulate activities
that affect rates. VoIP can affect phone rates and video downloads
over the Internet can affect cable charges for movies. So the FCC
could try to find an excuse to regulate the Internet. But I wouldn't
be the one to make that excuse.

The really significant message to the FCC comes on pages 30 and 32.
The court claims that any previous court rulings that give power to
the FCC to regulate the Internet (notably the famous Brand X decision)
are based on its historical right to regulate common carriers (e.g.,
telephone companies) and broadcasters. Practically speaking, this
gives the FCC a mandate to keep regulating the things that
matter--with an eye to creating a space for a better Internet and
high-speed digital networking (broadband).

Finding the right layer

Comcast v. FCC combines all the elements of a regulatory
thriller. First, the stakes are high: we're talking about who
controls the information that comes into our homes. Second, Comcast
wasn't being subtle in handling BitTorrent; its manipulations were
done with a conscious bias, carried out apparently arbitrarily (rather
than being based on corporate policy, it seems that a network
administrator made and implemented a personal decision), and were kept
secret until customers uncovered the behavior. If you had asked for a
case where an Internet provider said, "We can do anything the hell we
want regardless of any political, social, technical, moral, or
financial consequences," you'd choose something like Comcast's
impedance of BitTorrent.

And the court did not endorse that point of view. Contrary to many
headlines, the court affirmed that the FCC has the right to
regulate the Internet. Furthermore, the court acknowledged that
Congress gave the FCC the right to promote networking. But the FCC
must also observe limits.

The court went (cursorily in some cases) over the FCC's options for
regulating Comcast's behavior, and determined either that there was no
precedent for it or (I'm glossing over lots of technicalities here)
that the FCC had not properly entered those options into the case.

The FCC should still take steps to promote the spread of high-speed
networking, and to ensure that it is affordable by growing numbers of
people. But it must do so by regulating the lines, not what travels
over those lines.

As advocates for greater competition have been pointing out for
several years, the FCC fell down on that public obligation. Many trace
the lapse to the chairmanship of Bush appointee Michael Powell. And
it's true that he chose to try to get the big telephone and cable
companies to compete with each other (a duopoly situation) instead of
opening more of a space for small Internet providers. I cover this
choice in a 2004 article. But it's not fair to say Powell had no interest in
competition, nor is it historically accurate to say this was a major
change in direction for the FCC.

From the beginning, when the 1996 telecom act told the FCC to promote
competition, implementation was flawed. The FCC chose 14 points in the
telephone network where companies had to allow interconnection (so
competitors could come on the network). But it missed at least one
crucial point. The independent Internet providers were already losing
the battle before Powell took over the reins at the FCC.

And the notion of letting two or three big companies duke it out
(mistrusting start-ups to make a difference) is embedded in the 1996
act itself.

Is it too late to make a change? We must hope not. Today's court
ruling should be a wake-up call; it's time to get back to regulating
things that the FCC actually can influence.

Comcast's traffic shaping did not change the networking industry. Nor
did it affect the availability of high-speed networks. It was a clumsy
reaction by a beleaguered company to a phenomenon it didn't really
understand. Historically, it will prove an oddity, and so will the
spat that network advocates started, catching the FCC in its snares.

The difficulty software layers add

The term "Internet" is used far too loosely. If you apply it to all
seven layers of the ISO networking model, it covers common carrier
lines regulated by the FCC (as well as cable lines, which are subject
to less regulation--but still some). But the FCC has historically
called the Internet a "service" that is separate from those lines.

Software blurs and perhaps even erases such neat distinctions. Comcast
does not have to rewire its network or shut down switches to control
it. All it has to do is configure a firewall. That's why stunts
like holding back BitTorrent traffic become networking issues and draw
interest from the FCC. But it also cautions against trying to regulate
what Comcast does, because it's hard to know when to stop. That's what
opponents of network neutrality say, and you can hear it in the court
ruling.
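That point is easy to see in code. Here is a toy sketch (my illustration, not Comcast's actual configuration) of how a firewall-style rule can single out one protocol: early BitTorrent peers used a well-known port range, and the protocol handshake begins with a fixed prefix.

    from dataclasses import dataclass

    @dataclass
    class Packet:
        dst_port: int
        payload: bytes

    def classify(pkt):
        """Return a firewall action for one packet. Heuristics simplified."""
        # The BitTorrent handshake starts with a length byte of 19
        # followed by the literal string "BitTorrent protocol".
        if pkt.payload.startswith(b"\x13BitTorrent protocol"):
            return "throttle"
        # Early BitTorrent clients defaulted to ports 6881-6889.
        if 6881 <= pkt.dst_port <= 6889:
            return "throttle"
        return "forward"

    print(classify(Packet(6882, b"")))       # throttle (port match)
    print(classify(Packet(443, b"hello")))   # forward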

The fuzzy boundaries between software regulation and real-world
activities bedevil other areas of policy as well. As sophisticated
real-world processing moves from mechanical devices into software,
inventors are encouraged to patent software innovations, a dilemma I
explore in another article
(http://radar.oreilly.com/archives/2007/09/three_vantage_p.html). And
in the 1990s, courts argued over whether encryption
was a process or a form of expression--and decided it was a form of
expression.

Should the FCC wait for Congress to tell it what to do? I don't think
so. The DC Circuit court blocked one path, but it didn't tell the FCC
to turn back. It has a job to do, and it just has to find the right
tool for the job.

January 14 2010

Innovation Battles Investment as FCC Road Show Returns to Cambridge

Opponents can shed their rhetoric and reveal new depths to their
thought when you bring them together for rapid-fire exchanges,
sometimes with their faces literally inches away from each other. That
made it worth my while to truck down to the MIT Media Lab for
yesterday's Workshop on Innovation, Investment and the Open Internet
(http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-295521A1.pdf),
sponsored by the
Federal Communications Commission. In this article I'll cover the
context and background, the two kinds of investment at stake, the gap
between the engineers and the business thinkers, whether network
competition is over, and what to regulate.

Context and background

The FCC kicked off its country-wide hearing campaign almost two years
ago with a meeting at Harvard Law School, which quickly went wild. I
covered the experience in one article
(http://radar.oreilly.com/archives/2008/02/network-neutrality-how-the-fcc.html)
and the unstated agendas in another
(http://radar.oreilly.com/archives/2008/02/network-neutrality-code-words.html).
With a star cast and an introduction
by the head of the House's Subcommittee on Telecommunications and the
Internet, Ed Markey, the meeting took on such a cachet that the
public flocked to the lecture hall, only to find it filled because
Comcast recruited people off the street to pack the seats and keep
network neutrality proponents from attending. (They had an overflow
room instead.)

I therefore took pains to arrive at the Media Lab's Bartos Theater
early yesterday, but found it unnecessary. Even though Tim Berners-Lee
spoke, along with well-known experts across the industry, only 175
people turned up, in my estimation (I'm not an expert at counting
crowds). I also noticed that the meeting wasn't worth a
mention today in the Boston Globe.

Perhaps it was the calamitous earthquake yesterday in Haiti, or the
bad economy, or the failure of the Copenhagen summit to solve the
worst crisis ever facing humanity, or concern over three wars the US
is involved in (if you count Yemen), or just fatigue, but it seems
that not as many people are concerned with network neutrality as two
years ago. I recognized several people in the audience yesterday and
surmised that the FCC could have picked out a dozen people at random
from their seats, instead of the parade of national experts on the
panel, and still have led a pretty darned good discussion.

And network neutrality is definitely the greased pig everyone is
sliding around. There are hundreds of things one could discuss
in the context of innovation and investment, but various political
forces ranging from large companies (AT&T versus Google) to highly
visible political campaigners (Huffington Post) have made network
neutrality the agenda. The FCC gave several of the movement's leaders
rein to speak, but perhaps signaled its direction by sending Meredith
Attwell Baker as the commissioner in attendance.

In contrast to FCC chair Julius Genachowski, who publicly calls for
network neutrality (a position also taken by Barack Obama during his
presidential campaign:
http://www.barackobama.com/issues/technology/index_campaign.php#open-internet),
Baker has traditionally
espoused a free-market stance. She opened the talks yesterday by
announcing that she is "unconvinced there is a problem" and posing the
question: "Is it broken?" I'll provide my own opinion later in this
article.

Two kinds of investment

Investment is the handmaiden, if not the inseminator, of innovation.
Despite a few spectacular successes, like the invention of Linux and
Apache, most new ideas require funding. Even Linux and Apache are
represented now by foundations backed now by huge companies.

So why did I title this article "Innovation Battles Investment"?
Because investment happens at every level of the Internet, from the
cables and cell towers up to the applications you load on your cell
phone.

Here I'll pause to highlight an incredible paradigm shift that was
visible at this meeting--a shift so conclusive that no one mentioned
it. Are you old enough to remember the tussle between "voice" and
"data" on telephone lines? Remember the predictions that data would
grow in importance at the expense of voice (meaning Plain Old
Telephone Service) and the milestones celebrated in the trade press when
data pulled ahead of voice?

Well, at the hearing yesterday, the term "Internet" was used to cover
the whole communications infrastructure, including wires and cell
phone service. This is a mental breakthrough all its own, and one
I'll call the Triumph of the Singularity.

But different levels of infrastructure benefit from different
incentives. I found that all the participants danced around this.
Innovation and investment at the infrastructure level got short shrift
from the network neutrality advocates, whether in the bedtime story
version delivered by Barbara van Schewick or the deliberately
intimidating, breakneck overview by economist Shane Greenstein, who
defined openness as "transparency and consistency to facilitate
communication between different partners in an independent value
chain."

You can explore his papers on your own
(http://www.kellogg.northwestern.edu/faculty/greenstein/images/articles.html),
but I took this to mean, more or less, that
everybody sharing a platform should broadcast their intentions and
apprise everybody else of their plans, so that others can make the
most rational decisions and invest wisely. Greenstein realized, of
course, that firms have little incentive to share their strategies. He said that
communication was "costly," which I take as a reference not to an expenditure of
money but to a surrender of control and relinquishing of opportunities.

This is just what the cable and phone companies are not going to do.
Dot-com innovator Jeffrey Glueck, founder of Skyfire
(http://www.skyfire.com/), would like the FCC to
require ISPs to give application providers and users at least 60 to 90
days notice before making any changes to how they treat traffic. This
is absurd in an environment where bad actors require responses within
a few seconds and the victory goes to the router administrators with
the most creative coping strategy. Sometimes network users just have
to trust their administrators to do the best thing for them. Network
neutrality becomes a political and ethical issue when administrators
don't. But I'll return to this point later.

The pocket protector crowd versus the bean counters

If the network neutrality advocates could be accused of trying to
emasculate the providers, advocates for network provider prerogative
were guilty of taking the "Trust us" doctrine too far. For me, the
best part of yesterday's panel was how it revealed the deep gap that
still exists between those with an engineering point of view and those
with a traditional business point of view.

The engineers, led by Internet designer David Clark, repeated the
mantra of user control of quality of service, the vehicle for this
being the QoS field added to the IP packet header. Van Schewick
postulated a situation where a user increases the QoS on one session
because they're interviewing for a job over the Internet, then reduces
the QoS to chat with a friend.

In the rosy world envisioned by the engineers, we would deal not with
the physical reality of a shared network with our neighbors, all
converging into a backhaul running from our ISP to its peers, but with
the logical mechanism of a limited, dedicated bandwidth pipe (former
senator Ted Stevens can enjoy his revenge) that we would spend our
time tweaking. One moment we're increasing the allocation for file
transfer so we can upload a spreadsheet to our work site; the next
moment we're privileging the port we use for a massively multiplayer game.
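For the curious, the knob the engineers have in mind already exists at the socket level: an application can mark its own packets with a DSCP class through the old IP TOS byte. A minimal sketch follows (whether any router honors the mark is exactly the policy fight described here):

    import socket

    EF = 46  # DSCP "Expedited Forwarding": a standard low-latency class

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # The DSCP value occupies the top six bits of the IP TOS/traffic-class
    # byte, hence the shift. Routers are free to ignore or rewrite it.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF << 2)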

The practicality of such a network service is open to question. Glueck
pointed out that users are unlikely ever to ask for lower quality of
service (although this is precisely the model that Internet experts
have converged on, as I report in my 2002 article "A Nice Way to Get
Network Quality of Service?" at
http://www.oreillynet.com/pub/a/network/2002/06/11/platform.html). He recommends
simple tiers of service--already in effect at many providers--so that
someone who wants to carry out a lot of P2P file transfers or
high-definition video conferencing can just pay for it.

In contrast, network providers want all the control. Much was made
during the panel of a remark by Marcus Weldon of Alcatel-Lucent in
support of letting the providers shape traffic. He pointed out that
video teleconferencing over the fantastically popular Skype delivered
unappealing results over today's best-effort Internet delivery, and
suggested a scenario where the provider gives the user a dialog box
where the user could increase the QoS for Skype in order to enjoy the
video experience.

Others on the panel legitimately flagged this comment as a classic
illustration of the problem with providers' traffic shaping: the
provider would negotiate with a few popular services such as Skype
(which boasts tens of millions of users online whenever you log in)
and leave innovative young services to fend for themselves in a
best-effort environment.

But the providers can't see doing quality of service any other way.
Their business model has always been predicated on designing services
around known costs, risks, and opportunities. Before they roll out a
service, they need to justify its long-term prospects and reserve
control over it for further tweaking. If the pocket protector crowd in
Internet standards could present their vision to the providers in a
way that showed them the benefits they'd accrue from openness
(presumably by creating a bigger pie), we might have progress. But the
providers fear, above all else, being reduced to a commodity. I'll pick up
this theme in the next section.

Is network competition over?

Law professor Christopher S. Yoo is probably the most often heard (not
at this panel, unfortunately, where he was given only a few minutes)
of academics in favor of network provider prerogatives. He suggested
that competition was changing, and therefore requiring a different
approach to providers' funding models, from the Internet we knew in
the 1990s. Emerging markets (where growth comes mostly from signing up
new customers) differ from saturated markets (where growth comes
mainly from wooing away your competitors' customers). With 70% of
households using cable or fiber broadband offerings, he suggested the
U.S. market was getting saturated, or mature.

Well, only if you accept that current providers' policies will stifle
growth. What looks like saturation to an academic in the U.S. telecom
field looks like a state of primitive underinvestment to people who
enjoy lightning-speed service in other developed nations.

But Yoo's assertion makes us pause for a moment to consider the
implications of a mature network. When change becomes predictable and
slow, and an infrastructure is a public good--as I think everyone
would agree the Internet is--it becomes a candidate for government
takeover. Indeed, there have been calls for various forms of
government control of our network infrastructure. In some places this
is actually happening, as cities and towns create their own networks.
A related proposal is to rigidly separate the physical infrastructure
from the services, barring companies that provide the physical
infrastructure from offering services (and therefore presumably
relegating them to a maintenance role--a company in that position
wouldn't have much incentive to take on literally ground-breaking new
projects).

Such government interventions are politically inconceivable in the
United States. Furthermore, experience in other developed nations with
more successful networks shows that it is unnecessary.

No one can doubt that we need a massive investment in new
infrastructure if we want to use the Internet as flexibly and
powerfully as our trading partners. But there was disagreement yesterday about
how much of an effort the investment will take, and where it will come
from.

Yoo argued that a mature market requires investment to come from
operating expenditures (i.e., charging users more money, which
presumably is justified by discriminating against some traffic in
order to offer enhanced services at a premium) instead of capital
expenditures. But Clark believes that current operating expenditures
would permit adequate growth. He anticipated a rise in Internet access
charges of $20 a month, which could fund the added bandwidth we need
to reach the Internet speeds of advanced countries. In exchange for
paying that extra $20 per month, we would enjoy all the content we
want without paying cable TV fees.

The current understanding by providers is that usage is rising
"exponentially" (whatever that means--they don't say what the exponent
is) whereas charges are rising slowly. Following some charts from
Alcatel-Lucent's Weldon that showed profits disappearing entirely in a
couple years--a victim of the squeeze between rising usage and slow
income growth--Van Schewick challenged him, arguing that providers can
enjoy lower bandwidth costs to the tune of 30% per year. But Weldon
pointed out that the only costs going down are equipment, and claimed
that after a large initial drop caused by any disruptive new
technology, costs of equipment decrease only 10% per year.
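The gap between those two claims compounds quickly. A quick back-of-envelope comparison (illustrative numbers only):

    # Start both cost curves at 100 and apply five years of decline at the
    # two rates argued on the panel: van Schewick's 30%/yr vs Weldon's 10%/yr.
    for rate in (0.30, 0.10):
        cost = 100.0
        for _ in range(5):
            cost *= 1 - rate
        print(f"{rate:.0%}/yr decline -> {cost:.0f} after five years")
    # 30%/yr leaves ~17 of the original 100; 10%/yr leaves ~59 -- more
    # than a threefold gap between the two sides' assumptions.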

Everyone agreed that mobile, the most exciting and
innovation-supporting market, is expensive to provide and suffering an
investment crisis. It is also the least open part of the Internet and
the part most dependent on legacy pricing (high voice and SMS
charges), deviating from the Triumph of the Singularity.

So the Internet is like health care in the U.S.: in worse shape than
it appears. We have to do something to address rising
usage--investment in new infrastructure as well as new
applications--just as we have to lower health care costs that have
surpassed 17% of the gross domestic product.

Weldon's vision--a rosy one in its own way, complementing the
user-friendly pipe I presented earlier from the engineers--is that
providers remain free to control the speeds of different Internet
streams and strike deals with anyone they want. He presented provider
prerogatives as simple extensions of what already happens now, where
large companies create private networks where they can impose QoS on
their users, and major web sites contract with content delivery
networks such as Akamai (represented at yesterday's panel by lawyer
Aaron Abola) to host their content for faster response time. Susie Kim
Riley of Camiant testified that European providers are offering
differentiated services already, and making money by doing so.

What Weldon and Riley left out is what I documented in "A Nice Way to
Get Network Quality of Service?"
(http://www.oreillynet.com/pub/a/network/2002/06/11/platform.html). Managed networks
providing QoS are not the Internet. Attempts to provide QoS over the
Internet--by getting different providers to cooperate in privileging
certain traffic--have floundered. The technical problems may be
surmountable, but no one has figured out how to build trust and to design
adequate payment models that would motivate providers to cooperate.

It's possible, as Weldon asserts, that providers allowed to manage
their networks would invest in infrastructure that would ultimately
improve the experience for all sites--those delivered over the
Internet by best-effort methods as well as those striking deals. But
the change would still represent increased privatization of the public
Internet. It would create what application developers such as Glueck
and Nabeel Hyatt of Conduit Labs fear most: a thousand different
networks with different rules that have to be negotiated with
individually. And new risks and costs would be placed in the way of
the disruptive innovators we've enjoyed on the Internet.

Competition, not network neutrality, is actually the key issue facing
the FCC, and it was central to their Internet discussions in the years
following the 1996 Telecom Act. For the first five years or so, the
FCC took seriously a commitment to support new entrants by such
strategies as requiring incumbent companies to allow interconnection.
Then, especially under Michael Powell, the FCC did an about-face.

The question posed during this period was: what leads to greater
investment and growth--letting a few big incumbents enter each other's
markets, or promoting a horde of new, small entrants? It's pretty
clear that in the short term, the former is more effective because the
incumbents have resources to throw at the problem, but that in the
long term, the latter is required in order to find new solutions and
fix problems by working around them in creative ways.

Yet the FCC took the former route, starting in the early 2000s. They
explicitly made a deal with incumbents: build more infrastructure, and
we'll relax competition rules so you don't have to share it with other
companies.

Starting a telecom firm is hard, so it's not clear that pursuing the
other route would have saved us from the impasse we're in today. But a
lack of competition is integral to our problems--including the one
being fought out in the field of "network neutrality."

All the network neutrality advocates I've talked to wish that we had
more competition at the infrastructure level, because then we could
rely on competition to discipline providers instead of trying to
regulate such discipline. I covered this dilemma in a 2006 article,
"Network Neutrality and an Internet with Vision"
(http://lxer.com/module/newswire/view/53907/). But somehow, this kind of competition
is now off the FCC agenda. Even in the mobile space, they offer
spectrum through auctions that permit the huge incumbents to gather up
the best bands. These incumbents then sit on spectrum without doing
anything, a strategy known as "foreclosure" (because it forecloses
competitors from doing something useful with it).

Because everybody goes off in their own direction, the situation pits two groups that should be
cooperating against each other: small ISPs and proponents of an open Internet.

What to regulate

Amy Tykeson, CEO of a small Oregon Internet provider named
BendBroadband, forcefully presented the view of an independent
provider, similar to the more familiar imprecations by Brett Glass of
Lariat (http://www.brettglass.com/). In their
world--characterized by paper-thin margins, precarious deals with
back-end providers, and the constant pressure to provide superb
customer service--flexible traffic management is critical and network
neutrality is viewed as a straitjacket.

I agree that many advocates of network neutrality have oversimplified
the workings of the Internet and downplayed the day-to-day
requirements of administrators. In contrast, as I have shown, large
network providers have overstepped their boundaries. But to end this
article on a positive note (you see, I'm trying) I'll report that the
lively exchange did produce some common ground and a glimmer of hope
for resolving the differing positions.

First, in an exchange between Berners-Lee and van Schewick on the
pro-regulatory side and Riley on the anti-regulatory side, a more
nuanced view of non-discrimination and quality of service emerged.
Everybody on the panel vociferously supported the position that it was
unfair discrimination for a network provider to prevent a user from
getting legal content or to promote one web site over a competing web
site. And this is a major achievement, because those are precisely
the practices that providers like AT&T and Verizon claim the
right to do--the practices that spawned the current network neutrality
controversy.

To complement this consensus, the network neutrality folks approved
the concept of quality of service, so long as it was used to improve
the user experience instead of to let network providers pick winners.
In a context where some network neutrality advocates have made QoS a
dirty word, I see progress.

This raises the question of what is regulation. The traffic shaping
policies and business deals proposed by AT&T and Verizon are a
form of regulation. They claim the same privilege that large
corporations--we could look at health care again--have repeatedly
tried to claim when they invoke the "free market": the right of
corporations to impose their own regulations.

Berners-Lee and others would like the government to step in and issue
regulations that suppress the corporate regulations. A wide range of
wording has been proposed for the FCC's consideration. Commissioner
Baker asked whether, given the international reach of the Internet,
the FCC should regulate at all. Van Schewick quite properly responded
that the abuses carried out by providers are at the local level and
therefore can be controlled by the government.

Two traits of a market are key to innovation, and came up over and
over yesterday among dot-com founders and funders (represented by Ajay
Agarwal of Bain Capital) alike: a level playing field, and
light-handed regulation.

Sometimes, as Berners-Lee pointed out, government regulation is
required to level the playing field. The transparency and consistency
cited by Greenstein and others are key features of the level playing
field. And as I pointed out, a vacuum in government regulation is
often filled by even more onerous regulation by large corporations.

One of the most intriguing suggestions of the day came from Clark, who
elliptically suggested that the FCC provide "facilitation, not
regulation." I take this to mean the kind of process that Comcast and
BitTorrent went through, which Sally Shipman Wentworth of ISOC
boasted about in her opening remarks. Working with the IETF (which she
said created two new working groups to deal with the problem), Comcast
and BitTorrent worked out a protocol that should reduce the load of
P2P file sharing on networks and end up being a win-win for everybody.
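The technical heart of that win-win (the approach that grew into the IETF's LEDBAT work, as I understand it) is simple to sketch: treat rising one-way delay as the congestion signal, so that bulk P2P transfers back off before interactive traffic suffers. The constants and control law below are illustrative, not the spec:

    TARGET_MS = 100.0  # queueing delay the bulk sender is willing to add

    def next_rate(rate_kbps, base_delay_ms, cur_delay_ms, gain=0.1):
        """One control step: scale rate by how far queueing delay is from target."""
        queueing = cur_delay_ms - base_delay_ms       # delay this flow is adding
        off_target = (TARGET_MS - queueing) / TARGET_MS
        return max(10.0, rate_kbps * (1 + gain * off_target))

    rate = 800.0
    for delay in (20, 60, 110, 150, 150):             # measured one-way delay, ms
        rate = next_rate(rate, base_delay_ms=15, cur_delay_ms=delay)
        print(f"delay={delay:3d} ms -> send at {rate:4.0f} kbps")
    # The rate climbs while the queue is short and shrinks once the
    # flow's own traffic pushes delay past the target.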

But there are several ways to interpret this history. To free market
ideologues, the Comcast/BitTorrent collaboration shows that private
actors on the Internet can exploit its infinite extendibility to find
their own solutions without government meddling. Free market
proponents also call on anti-competition laws to hold back abuses. But
those calling for parental controls would claim that Comcast wanted
nothing to do with BitTorrent and started to work on technical
solutions only after getting tired of the feces being thrown its way
by outsiders, including the FCC.

And in any case--as panelists pointed out--the IETF has no enforcement
power. The presence of a superior protocol doesn't guarantee that
developers and users will adopt it, or that network providers will
allow traffic that could be a threat to their business models.

The FCC's hearing at Harvard, which I mentioned at the beginning of this
article, promised intervention in the market to preserve Internet
freedom. What we got after that (as I predicted) was a slap on
Comcast's wrist and no clear sense of direction. The continued
involvement of the FCC--including these public forums, which I find
educational--shows, along with the appointment of the more
interventionist Genachowski and the mandate to promote broadband in
the American Recovery and Reinvestment Act, that it can't step away
from the questions of competition and investment.
