
April 21 2011

Developing countries and Open Compute

During a panel discussion after the recent Facebook Open Compute announcement, a couple of panelists — Jason Waxman, GM in Intel's server platforms group, and Forrest Norrod, VP and GM of Dell's server platform — indicated the project could be beneficial to developing countries. Waxman said:

The reality is, you walk into data centers in emerging countries and it's a 2-kilowatt rack and there's maybe three servers in that rack, and the whole data center is powered inefficiently — their air is going every which way and it's hot, it's cold. It costs a lot. It's not ecologically conscious. By opening up this platform and by building awareness of what the best practices are in how to build a data center, how to make efficient servers and why you should care about building efficient servers and how to densely populate into a rack, there are a lot of places ... that can benefit from this type of information.

In a similar vein, Norrod said:

I think what you're going to see happen here is an opportunity for those Internet companies in the developing world to take a leap forward, jumping over the last 15 years of learnings, and exploiting the most efficient data center and server designs that we have today.

The developing countries angle intrigued me, so I sent an email to Benetech founder and CEO Jim Fruchterman to get his take. Fruchterman's company has a unique focus: applying the "intellectual capital and resources of Silicon Valley" to create solutions to a variety of social problems around the world. Recent projects have focused on human rights, literacy, and the development of Miradi, a software tool for managing nature conservation projects.

His verdict? While efficient data centers are useful, they're secondary to pressing issues like infrastructure, reliable power, and basic literacy.

Fruchterman's reply follows:

While I'm excited about an open initiative coming from Facebook, I'm not so sure that its impact on developing countries will be all that significant in the foreseeable future. Watching the announcement video, I didn't find these words coming out of the Facebook team's mouths, but rather from the Intel and Dell panelists. And, their comments focused mostly on India, China and Brazil — not exactly your typical "developing" countries.

The good news is, of course, that these open plans show how to reduce energy and acquisition costs per compute cycle. So, anyone building a data center can build a cheaper and lower power data center. That's great. But, building data centers is probably not on the top of the wish lists of most developing countries. Telecom and broadband infrastructure, reliable power (at the grid level, not the server power supply level), end-user device cost and reliability, localization, and even basic literacy seem to be more crucial to these communities. And, most of these factors are prerequisites to investing significantly in data centers.

Of course, our biggest concerns around Facebook are around free speech, anonymous speech, and the protection of human rights defenders. Facebook is increasingly a standard part of global user experience, and we think that it's crucial that Facebook get in front of these concerns, rather than being inadvertently a tool of repressive governments. We're glad that groups like the Electronic Frontier Foundation (EFF) have been working with Facebook and seeing progress, but we need more.

Fruchterman's response was edited and condensed.



April 07 2011

What Facebook's Open Compute Project means

Today, Jonathan Heiliger, VP of Operations at Facebook, and his team announced the Open Compute Project, releasing their data center hardware stack as open source. This is a revolutionary project, and I believe it's one of the most important in infrastructure history. Let me explain why.

The way we operate systems and data centers at web scale is fundamentally different from the world most server vendors seem to design their products to run in.

Web-scale systems focus on the entire system as a whole. In our world, individual servers are not special, and treating them as special can be dangerous. We expect servers to fail and we increasingly rely on the software we write to manage those failures. In many cases, the most valuable thing we can do when hardware fails is to simply provision a new one as quickly as possible. That means having enough capacity to do that, a way of programmatically managing the infrastructure, and an easy way to replace the failed components.
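
To make that model concrete, here is a minimal sketch in Python of the "replace, don't repair" loop described above. Everything in it — the Server and Fleet classes, the reconcile method, the simulated health check — is a hypothetical illustration of the pattern, not any real Facebook or Open Compute API.

```python
# A minimal sketch of programmatic failure handling at web scale:
# poll the fleet, and when a node fails its health check, swap in
# pre-provisioned spare capacity instead of repairing it in place.
# All names here are hypothetical illustrations, not a real API.
import random


class Server:
    def __init__(self, server_id):
        self.server_id = server_id

    def is_healthy(self):
        # Stand-in for a real probe (service check, IPMI query, etc.).
        return random.random() > 0.1  # ~10% simulated failure rate


class Fleet:
    def __init__(self, active, spares):
        self.active = active  # servers currently doing work
        self.spares = spares  # racked spare capacity, ready to go

    def reconcile(self):
        """Replace failed servers from spares; never repair in place."""
        for server in list(self.active):
            if server.is_healthy():
                continue
            self.active.remove(server)
            print(f"server {server.server_id} failed; decommissioning")
            if self.spares:
                replacement = self.spares.pop()
                self.active.append(replacement)
                print(f"provisioned spare {replacement.server_id}")
            else:
                print("no spare capacity left; paging a human")


if __name__ == "__main__":
    fleet = Fleet(
        active=[Server(i) for i in range(5)],
        spares=[Server(i) for i in range(100, 103)],
    )
    fleet.reconcile()
```

The point of the sketch is the shape of the loop, not the details: the unit of repair is the fleet, spare capacity is part of the design, and a human gets involved only when the automation runs out of options.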

The server vendors have been slow to make this transition because they have been focused on individual servers, rather than systems as a whole. What we want to buy is racks of machines, with power and networking preconfigured, which we can wheel in, bolt down, and plug in. For the most part we don't care about logos, faceplates, and paint jobs. We won't use complex integrated proprietary management interfaces, and we haven't cared about video cards in a long time ... although it is still very hard to buy a server without them.

This gap is what led Google to build their own machines, optimized for their own applications, in their own data centers. When Google did this, they gained a significant competitive advantage: nobody else could deploy as much compute power as quickly and efficiently. To compete with Google's developers you must also compete with their operations and data center teams. As Tim O'Reilly said: "Operations is the new secret sauce."

When Jonathan and his team set out to build Facebook's new data center in Oregon, they knew they would have to do something similar to achieve the needed efficiency. Jonathan says that the Prineville, Ore. data center uses 38% less energy to do the same work as Facebook's existing facilities, while costing 24% less.

Facebook then took the revolutionary step of releasing the designs for most of the hardware in the data center under a Creative Commons license. They released everything from the power supply and battery backup systems to the rack hardware, motherboards, chassis, battery cabinets, and even their electrical and mechanical construction specifications.

This is a gigantic step for open source hardware, for the evolution of the web and cloud computing, and for infrastructure and operations in general. It continues a shift that began with open source software: away from a vendor-and-consumer model and toward a participatory, collaborative one. Jonathan explains:

"The ultimate goal of the Open Compute Project, however, is to spark a collaborative dialogue. We're already talking with our peers about how we can work together on Open Compute Project technology. We want to recruit others to be part of this collaboration — and we invite you to join us in this mission to collectively develop the most efficient computing infrastructure possible."

At the announcement this morning, Graham Weston of Rackspace said that they would be participating in Open Compute, which is an ideal complement to the OpenStack cloud computing project. Representatives from Dell and HP also spoke at the announcement and said that they would participate in the new project. The conversation has already begun.
