
January 08 2014

How did we end up with a centralized Internet for the NSA to mine?

I’m sure it was a Wired editor, and not the author Steven Levy, who assigned the title “How the NSA Almost Killed the Internet” to yesterday’s fine article about the pressures on large social networking sites. Whoever chose the title, it’s justifiably grandiose because to many people, yes, companies such as Facebook and Google constitute what they know as the Internet. (The article also discusses threats to divide the Internet infrastructure into national segments, which I’ll touch on later.)

So my question today is: How did we get such industry concentration? Why is a network famously based on distributed processing, routing, and peer connections characterized now by a few choke points that the NSA can skim at its leisure?

I commented as far back as 2006 that industry concentration makes surveillance easier. I pointed out then that the NSA could elicit a level of cooperation (and secrecy) from the likes of Verizon and AT&T that it would never get in the US of the 1990s, where Internet service was provided by thousands of mom-and-pop operations like Brett Glass’s wireless service in Laramie, Wyoming. Things are even more concentrated now, in services if not infrastructure.

Having lived through the Boston Marathon bombing, I understand what the NSA claims to be fighting, and I am willing to seek some compromise between their needs for spooking and the protections of the Fourth Amendment to the US Constitution. But as many people have pointed out, the dangers of centralized data storage go beyond the NSA. Bruce Schneier just published a pretty comprehensive look at how weak privacy leads to a weakened society. Others jeer that if social networking companies weren’t forced to give governments data, they’d be doing just as much snooping on their own to raise the click rates on advertising. And perhaps our most precious, closely held data — personal health information — is constantly subject to a marketplace for data mining.

Let’s look at the elements that make up the various layers of hardware and software we refer to casually as the Internet. How do centralization and decentralization play out at each?

Public routers

One of Snowden’s major leaks reveals that the NSA pulled a trick comparable to the Great Firewall of China, tracking traffic as it passes through major routers across national borders. Like many countries that censor traffic, in other words, the NSA capitalized on the centralization of international traffic.

Internet routing within the US has gotten more concentrated over the years. There were always different “tiers” of providers, who all did basically the same thing but at inequitable prices. Small providers always complained about the fees extracted by Tier 1 networks. A Tier 1 network can transmit its own traffic nearly anywhere it needs to go for just the cost of equipment, electricity, etc., while extracting profit from smaller networks that need its transport. So concentration in the routing industry is a classic economy of scale.

International routers, of the type targeted by the NSA and many other governments, are even more concentrated. African and Latin American ISPs historically complained about having to go through US or European routers even if the traffic just came back to their same continent. (See, for instance, section IV of this research paper.) This raised the costs of Internet use in developing countries.

The reliance of developing countries on outside routers stems from another simple economic truth: there are more routers in affluent countries for the same reason there are more shopping malls or hospitals in affluent countries. Foreigners who have violated US laws can be caught if they dare to visit a shopping mall or hospital in the US. By the same token, their traffic can be grabbed by the NSA as it travels to a router in the US, or one of the other countries where the NSA has established a foothold. It doesn’t help that the most common method of choosing routes, the Border Gateway Protocol (BGP), is a very old Internet standard with no built-in security.
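BGP's lack of built-in security comes down to two facts: routers prefer the most specific matching prefix, and classic BGP gives them no way to verify that the advertising network actually owns that prefix. The toy sketch below (my own illustration, not real BGP — actual routers weigh AS paths, local preference, and more) shows how a bogus, more specific announcement silently captures traffic:

```python
import ipaddress

# Hypothetical route table: prefix -> advertised origin.
routes = {
    ipaddress.ip_network("203.0.113.0/24"): "AS64500 (legitimate origin)",
}

def best_route(dest, table):
    """Longest-prefix match, as routers do; no origin validation."""
    matches = [net for net in table if ipaddress.ip_address(dest) in net]
    return table[max(matches, key=lambda n: n.prefixlen)] if matches else None

dest = "203.0.113.10"
print(best_route(dest, routes))   # AS64500 (legitimate origin)

# An attacker advertises a more specific prefix; nothing in the
# protocol itself rejects it, so traffic now flows to the hijacker.
routes[ipaddress.ip_network("203.0.113.0/25")] = "AS64666 (hijacker)"
print(best_route(dest, routes))   # AS64666 (hijacker)
```

This is exactly the shape of real-world prefix hijacks, and why later add-ons such as RPKI origin validation had to be bolted onto the protocol from outside.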

The solution is economic: more international routers to offload traffic from the MAE-Wests and MAE-Easts of the world. While opposing suggestions to “balkanize” the Internet, we can applaud efforts to increase connectivity through more routers and peering.

IaaS cloud computing

Centralization has taken place at another level of the Internet: storage and computing. Data is theoretically safe from intruders in the cloud so long as encryption is used both in storage and during transmission — but of course, the NSA thought of that problem long ago, just as they thought of everything. So use encryption, but don’t depend on it.
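The "don't depend on the provider" version of that advice is client-side encryption: the key never leaves your machine, so the cloud stores only ciphertext. The sketch below is a textbook counter-mode stream cipher built on HMAC-SHA256, purely for illustration — real systems should use a vetted library (libsodium, AES-GCM), never hand-rolled crypto:

```python
import hashlib
import hmac
import os

def keystream(key, nonce, length):
    """Derive a pseudorandom keystream from HMAC-SHA256 in counter mode."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    nonce = os.urandom(16)   # fresh per message, stored with the ciphertext
    return nonce + bytes(a ^ b for a, b in
                         zip(plaintext, keystream(key, nonce, len(plaintext))))

def decrypt(key, blob):
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

key = os.urandom(32)          # stays with the client, never uploaded
blob = encrypt(key, b"patient record 1234")
assert decrypt(key, blob) == b"patient record 1234"
assert b"patient record" not in blob   # the provider sees only ciphertext
```

The point of the design is where the key lives, not the cipher: if the provider holds the key (as most SaaS services do), encryption protects you from outside eavesdroppers but not from the provider or whoever compels it.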

Movement to the cloud is irreversible, so the question to ask is how free and decentralized the cloud can be. Private networks can be built on virtualization solutions such as the proprietary VMware and Azure or the open source OpenStack and Eucalyptus. The more providers there are, the harder it will be to do massive data collection.

SaaS cloud computing

The biggest change — what I might even term the biggest distortion — in the Internet over the past couple decades has been the centralization of content. Ironically, more and more content is being produced by individuals and small Internet users, but it is stored on commercial services, where it forms a tempting target for corporate advertisers and malicious intruders alike. Some people have seriously suggested that we treat the major Internet providers as public utilities (which would make them pretty big white elephants to unload when the next big thing comes along).

This was not technologically inevitable. Attempts at peer-to-peer social networking go back to the late 1990s with Jabber (now the widely used XMPP standard), which promised a distributed version of the leading Internet communications medium of the time: instant messaging. Diaspora more recently revived the idea in the context of Facebook-style social networking.

These services allow many independent people to maintain servers, offering the service in question to clients while connecting where necessary. Such an architecture could improve overall reliability, because the failure of an individual server would be noticed only by people trying to communicate with it. It would also be considerably harder to snoop on.
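The federated model behind Jabber/XMPP can be sketched in a few lines (hypothetical classes, not the real protocol): each domain runs its own server, addresses look like user@domain, and servers forward to one another only when the recipient lives elsewhere. No single machine sees all the traffic, and one server's failure strands only its own users:

```python
class Server:
    """Toy federated messaging server for one domain."""
    def __init__(self, domain, federation):
        self.domain = domain
        self.inboxes = {}            # local user -> list of (sender, text)
        federation[domain] = self    # register with the federation

    def deliver(self, sender, recipient, text, federation):
        user, domain = recipient.split("@")
        if domain == self.domain:                  # local delivery
            self.inboxes.setdefault(user, []).append((sender, text))
        elif domain in federation:                 # server-to-server hop
            federation[domain].deliver(sender, recipient, text, federation)
        else:
            raise ConnectionError(f"{domain} is unreachable")

federation = {}
a = Server("alice.example", federation)
b = Server("bob.example", federation)

# Cross-domain message: passes between peer servers, no central hub.
a.deliver("al@alice.example", "bo@bob.example", "hi", federation)
print(b.inboxes["bo"])

# If bob.example goes down, only its users are affected.
del federation["bob.example"]
a.deliver("al@alice.example", "al2@alice.example", "still works", federation)
```

Contrast this with the centralized model, where every message crosses one operator's servers and a single subpoena or breach exposes everyone at once.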

Why hasn’t the decentralized model taken off? I blame SaaS. The epoch of concentration in social media coincides with the shift of attention from free software to SaaS as a way of delivering software. SaaS makes it easier to form a business around software (while the companies can still contribute to free software). So developers have moved to SaaS-based businesses and built new DevOps development and deployment practices around that model.

To be sure, in the age of the web browser, accessing a SaaS service is easier than fussing with free software. To champion distributed architectures such as Jabber and Diaspora, free software developers will have to invest as much effort into the deployment of individual servers as SaaS developers have invested in their models. Business models don’t seem to support that investment. Perhaps a concern for privacy will.

November 29 2010

Susan Landau explores Internet security and the attribution problem

Susan Landau gave a talk at Harvard today on her latest policy work on
cybersecurity. Landau is a noted privacy advocate whose public
advocacy work goes back to the crypto wars of the 1990s. Together with
the renowned Whitfield Diffie, she wrote Privacy on the Line: The
Politics of Wiretapping and Encryption, and she's about to release a
new book, Surveillance or Security? The Risks Posed by New Wiretapping
Technologies. She didn't have far to travel to deliver her talk today,
because she's currently a Radcliffe fellow. The audience, mostly
Harvard CS students and postdocs, was appropriately prepped to follow
her through the cramped and twisting paths of tech policy.

You'd expect a researcher of Landau's experience to tackle
increasingly difficult problems, and her outline of her current
research certainly fits the expectation. The trigger for this research
is the call by many people, ranging from computer scientists working
on core networking protocols to members of Congress, for an Internet
where people can be tracked. This is called attribution, and
it means that I can be found if I place a threatening anonymous
comment on a blog, or download illegal pornographic material, or
release a virus that places identity-stealing software on ten thousand
computer systems.

The parameters of attribution

A lot of attribution can already be done. A warrant from law
enforcement or a simple request from the RIAA can force ISPs to
surrender information on who used a particular IP address at a
particular time. ISPs keep this information for a period of time, and
even Internet cafes or public libraries could do this too.

People with a bit of sophistication can evade attribution, though. The
virus that's causing your computer to send out spam or launch a
distributed Denial-of-Service attack may have been placed there
through an unwise visit to a web site years earlier. Security breaches
that pass through multiple systems are called "multi-stage"
attribution problems by Landau. Proxy servers used by people in
countries that block web traffic, and onion routing networks used by
people sending anonymous email, complicate the security picture too.

Attribution, like the whole larger area of cybersecurity, occupies an
ethical hall of mirrors where no one's true position is easy to
determine. Obviously, what I consider a crime that I have to uncover
is considered by my quarry to be a liberating act that calls for
protection. We can't let the RIAA find copyright infringers or help
the FBI trace terrorist networks without letting China and Saudi
Arabia arrest online protesters.

Landau rhetorically asked why the United States has not proposed a
cybersecurity, anti-hacking treaty, and answered by suggesting that
the NSA has been engaging in its own cyber-break-ins for a couple
decades. The view of international cybersecurity she laid out, and
that I've read about elsewhere, is quite a jungle. Numerous actors of
varying intent and with varying relationships to the law move in and
out of favor with various governments. Governments spy on companies
for commercial advantage and help companies spy on foreign companies.
Everybody wants just enough security to keep trust in the Internet
from collapsing, without losing competitive advantage in the hacking
wars.

Attribution lies at many levels. For some attacks, Landau says, we
need to know only which machine launched them. Other attacks need to
be tied to a person, and still others to entities such as corporations
or governments.

Landau's recommendations, and reactions

In this kind of fast-shifting environment with so many competing
agendas, no prim and elegant solutions will be found. The insights
Landau presented today are a work in progress, and several aspects
were challenged from the floor.

Her main point is that we don't need to re-architect the Internet to
make its use more attributable, and that we shouldn't try because it could
remove much of what's good about the Internet. As I mentioned before,
we have a good deal of attribution already. Landau recommends we
refine and expand our legal regimes to deal with current attribution
techniques justly, and extend them a bit.

Her most far-reaching proposal was to run software on ordinary users'
PCs to log Internet traffic for a limited time (30 days, for
instance). This can benefit users by helping them figure out where
some malware might have come from. But it mostly benefits
investigators. When they ask the ISP for traffic information (which
faces a low legal threshold, such as a subpoena), the ISP can ask an
end-user for log files from a short time period. If the user refuses
to keep logs, the ISP would be legally entitled to log all traffic
coming from and to the user.
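Landau specifies only the retention window (30 days, for instance); the mechanics below are my assumption about what such a user-side log might look like. The key property is that old entries are purged automatically, so the log can answer recent questions from an ISP or investigator without becoming a permanent dossier:

```python
import time
from collections import deque

RETENTION = 30 * 24 * 3600          # 30 days, in seconds

class TrafficLog:
    """Hypothetical time-limited log of remote hosts contacted."""
    def __init__(self):
        self.entries = deque()       # (timestamp, remote_host), oldest first

    def record(self, remote_host, now=None):
        self.entries.append((now if now is not None else time.time(),
                             remote_host))

    def purge(self, now=None):
        """Drop everything older than the retention window."""
        cutoff = (now if now is not None else time.time()) - RETENTION
        while self.entries and self.entries[0][0] < cutoff:
            self.entries.popleft()

    def hosts_since(self, since, now=None):
        self.purge(now)
        return {h for t, h in self.entries if t >= since}

log = TrafficLog()
log.record("malware.example", now=0)             # 40 days ago
log.record("news.example", now=35 * 24 * 3600)   # 5 days ago
now = 40 * 24 * 3600
print(log.hosts_since(0, now=now))   # {'news.example'}: old entry purged
```

The purge is also what makes the scheme weak against patient intruders, as I note below: malware planted before the window opened leaves no trace in the log.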

The whole point of this technical and policy change is to help trace
multi-stage attacks. I'm not sure this would help reduce the fifteen
percent or more of US computers estimated to be infected by malware,
because as I said earlier, the intruders are quite capable of lying
dormant long past the deadline for discarding Internet traffic. Landau
put forward a scenario where an ISP gets a list of infected web sites
that place malware on client systems, and then sends out email to its
customers asking who has visited that web site recently. But its
customers wouldn't need log files to know whether they had visited the
site.

Landau distinguishes tiers of attribution. The simplest is
single-stage attribution, as when the RIAA identifies a file-sharer or
a blog site identifies a defamatory poster.

Single jurisdiction, multi-stage attribution occurs when a breach has
to be traced across two or more links between computer systems, but
all the systems are in the same country or in cooperating countries.
Currently, the US works cooperatively with most of Europe and some
Middle Eastern countries to trace illegal traffic. Landau wishes we
could bring Russia into this cooperative regime as well. But as she pointed
out, each government has conflicting goals that push and pull it
toward and away from cybersecurity. Diplomacy will be required to
expand cooperation.

The most complex scenario is multiple jurisdiction, multi-stage
attribution. This can be accomplished through treaties and policy
mechanisms.

I wonder whether we should look outside the Internet for solutions to
many security problems, just as e-commerce sites depend on a vast and
sophisticated credit system to protect online transactions. Here's an
example: spam can be sent anonymously. But if the spammer is also a
scammer, he needs to provide an address where victims send their
money. If your money can find a scammer, so can law enforcement.

Landau took several challenges from the audience, who wondered whether
her solution would be too weak to cover more than a few scenarios or
just, as one person put it, a way to make the RIAA's work easier. As a
privacy advocate, why is Landau working so hard on technical solutions
to help law enforcement find people?

First, we face many serious cyberthreats that none of us can afford to
ignore, regardless of our love of freedom. Second, Landau wants to
propose low-impact solutions in order to stave off high-impact
ones. She admires the work that organizations such as the
Electronic Privacy Information Center
and the
ACLU
do from a public-interest angle on privacy, and understands that
technology companies can be motivated to oppose bad policies because
of their crimp on innovation. But in Washington, security trumps all
these concerns. Nobody wants to be caught napping in the event of a
major terror attack or other security breach. Landau is asking them to
try on a new and lighter framework for improving attribution.
