
February 03 2012

Top stories: January 30-February 3, 2012

Here's a look at the top stories published across O'Reilly sites this week.

What is Apache Hadoop?
Apache Hadoop has been the driving force behind the growth of the big data industry. But what does it do, and why do you need all its strangely-named friends? (Related: Hadoop creator Doug Cutting on why Hadoop caught on.)

Embracing the chaos of data
Data scientists, it's time to welcome errors and uncertainty into your data projects. In this interview, Jetpac CTO Pete Warden discusses the advantages of unstructured data.

Moneyball for software engineering, part 2
A look at the "Moneyball"-style metrics and techniques managers can employ to get the most out of their software teams.

With GOV.UK, British government redefines the online government platform
A new beta .gov website in Britain is open source, mobile friendly, platform agnostic, and open for feedback.


When will Apple mainstream mobile payments?
David Sims parses the latest iPhone / near-field-communication rumors and considers the impact of Apple's (theoretical) entrance into the mobile payment space.



Strata 2012, Feb. 28-March 1 in Santa Clara, Calif., will offer three full days of hands-on data training and information-rich sessions. Strata brings together the people, tools, and technologies you need to make data work. Save 20% on Strata registration with the code RADAR20.

January 30 2012

Moneyball for software engineering, part 2

Brad Pitt was recently nominated for the Best Actor Oscar for his portrayal of Oakland A's general manager Billy Beane in the movie "Moneyball," which was also nominated for Best Picture. If you're not familiar with "Moneyball," the subject of the movie and the book on which it's based is Beane, who helped pioneer an approach to improve baseball team performance based on statistical analysis. Statistics were used to help find valuable players that other teams overlooked and to identify winning strategies for players and coaches that other teams missed.

Last October, when the movie was released, I wrote an article discussing how Moneyball-type statistical analysis can be applied to software teams. As a follow-up, and in honor of the recognition that the movie and Pitt are receiving, I thought it might be interesting to spend a little more time on this, focusing on the process and techniques a manager could use to introduce metrics to a software team. If you are intrigued by the idea of tracking and studying metrics to help find ways to improve, here are some suggestions for how you can begin to apply these techniques.

Use the data that you have

The first puzzle to solve is how to get metrics for your software team. There are all kinds of things you could "measure." For example, you could track how many tasks each developer completes, the complexity of each task, the number of production bugs related to each feature, or the number of users added or lost. You could also measure less obvious activities or contributions, such as the number of times a developer gets directly involved in customer support issues or the number of times someone works after hours.

In the movie "Moneyball," there are lots of scenes showing complicated-looking statistical calculations, graphs, and equations, which makes you think that the statistics Billy Beane used were highly sophisticated and advanced. But in reality, most of the statistics that he used were very basic and were readily available to everyone else. The "innovation" was to examine the statistics more closely in order to discover key fundamentals that contributed to winning. Most teams, Beane realized, disregarded these fundamentals and failed to use them to find players with the appropriate skills. By focusing on these overlooked basics, the Oakland A's were able to gain a competitive edge.

To apply a similar technique to software teams, you don't need hard-to-gather data or complex metrics. You can start by using the data you have. In your project management system, you probably have data on the quantity and complexity of development tasks completed. In your bug-tracking and customer support systems, you probably have data on the quantity, rate, and severity of product issues. These systems also typically have simple reporting or export mechanisms that make the data easy to access and use.

Looking at this type of data and the trends from iteration to iteration is a good place to start. Most teams don't devote time to examining historical trends and various breakdowns of the data by individual and category. For each individual and the team as a whole, you can look at all your metrics, focusing, at least to start, on fundamentals like productivity and quality. This means examining the history of data captured in your project management, bug tracking, and customer support systems. As you accumulate data over time, you can also analyze how the more recent metrics compare to those in the past. Gathering and regularly examining fundamental engineering and quality metrics is the first step in developing a new process for improving your team.

Establish internal support

If you saw the movie "Moneyball," you know that much was made of the fact that some of the old, experienced veterans had a hard time getting on board with Beane and his newfangled ideas. The fact that Beane had statistics to back up his viewpoints didn't matter. The veterans didn't get it, and they didn't want to. They were comfortable relying on experience and the way things were already done. Some of them saw the new ideas and approaches — and the young guys who were touting them — as a threat.

If you start talking about gathering and showing metrics, especially individual metrics that might reveal how one person compares to another, some people will likely have concerns. One way to avoid serious backlash, of course, is to move slowly and gradually. The other thing you might do to decrease any negative reaction is to cultivate internal supporters. If you can get one, two, or a few team members on board with the idea of reviewing metrics regularly as a way to identify areas for improvement, they can be a big help to allay the fears of others, if such fears arise.

How do you garner support? Try sitting down individually with as many team members as possible to explain what you hope to do. It's the kind of discussion you might have informally over lunch. If you are the team manager, you'll want to explain carefully that historical analysis of metrics isn't designed to assign blame or grade performance. The goal is to spend more time examining the past in the hopes of finding incremental ways to improve. Rather than just reviewing the past through memory or anecdotes, you hope to get more accuracy and keener insights by examining the data.

After you talk with team members individually, you'll have a sense for who's supportive, who's ambivalent, and who's concerned. If you have a majority of people who are concerned and a lack of supporters, then you might want to rethink your plan and try to allay concerns before you even start.

Once you begin gathering and reviewing metrics as a team, it's a good idea to go back and check in periodically with both the supporters and the naysayers, either individually or in groups. You should get their reactions and input on suggestions to improve. If support is going down and concern is going up, then you'll need to make adjustments, or your use of metrics is headed for failure. If support is going up and concern is going down, then you are on the right track. Beane didn't get everyone to buy into his approach, but he did get enough internal supporters to give him a chance, and more were converted once they saw the results.

Codermetrics: Analytics for Improving Software Teams — This concise book introduces codermetrics, a clear and objective way to identify, analyze, and discuss the successes and failures of software engineers — not as part of a performance review, but as a way to make the team a more cohesive and productive unit.

Embed metrics in your process

While there is some benefit to gathering and reviewing historical metrics on your own, you'll gain more by sharing them with your team. The best time to share and review metrics is in meetings that are already part of your regular review and planning process; you can simply add a metrics review to the agenda. Planning or review meetings that occur on a monthly or quarterly basis, for example, are appropriate forums. Too-frequent reviews, however, may become repetitive and wasteful. If you have biweekly review and planning meetings, for example, you might choose to review metrics every other meeting rather than every time.

To make the review of metrics effective and efficient, you can prepare the data for presentation, possibly summarizing key metrics into spreadsheets and graphs, or a small number of presentation slides (examples and resources for metric presentation can be found and shared at codermetrics.org). You will want to show your team summary data and individual breakdowns for the metrics gathered. For example, if you are looking at productivity metrics, you might look at data such as the following (a rough code sketch of this kind of summary appears after the list):

  • The number and complexity of tasks completed in each development iteration.
  • A breakdown of task counts completed grouped by complexity.
  • The total number of tasks and sum of complexity completed by each engineer.
  • The trend of task counts and complexity over multiple development iterations.
  • A comparison of the most recent iteration to the average, highs, and lows of past iterations.
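As a rough illustration, here is a minimal Python sketch of that kind of summary. It assumes a hypothetical tasks.csv export from your project management system with iteration, engineer, and complexity columns; the file name and column names are placeholders rather than any particular tool's format.

    import csv
    from collections import defaultdict

    # Hypothetical export: one row per completed task.
    with open("tasks.csv") as f:
        tasks = list(csv.DictReader(f))   # columns: iteration, engineer, complexity

    per_iteration = defaultdict(lambda: [0, 0.0])   # iteration -> [task count, total complexity]
    per_engineer = defaultdict(lambda: [0, 0.0])    # engineer  -> [task count, total complexity]
    by_complexity = defaultdict(int)                # complexity rating -> number of tasks

    for t in tasks:
        c = float(t["complexity"])
        per_iteration[t["iteration"]][0] += 1
        per_iteration[t["iteration"]][1] += c
        per_engineer[t["engineer"]][0] += 1
        per_engineer[t["engineer"]][1] += c
        by_complexity[t["complexity"]] += 1

    # Trend over iterations, and the latest iteration versus past highs, lows, and average.
    counts = [count for _, (count, _) in sorted(per_iteration.items())]
    latest, past = counts[-1], counts[:-1]
    if past:
        print("latest: %d tasks (past avg %.1f, high %d, low %d)"
              % (latest, sum(past) / len(past), max(past), min(past)))

Even a summary this crude covers most of the breakdowns listed above; the point is to make the data easy to put in front of the team, not to build a reporting system.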

To start, you are just trying to get the team in the habit of looking at recent and historical data more closely, and it's not necessary to have a specific intent defined. The goal, when you begin, is to just "see what you can see." Present the data, and then foster observations and open discussion. As the team examines its metrics, especially over the course of time, patterns may emerge. Individual and team ideas about what the metrics reveal and potential areas of improvement may form. Opinions about the usefulness of specific data or suggestions for new types of metrics may come out.

Software developers are smart people. They are problem-spotters and problem-solvers by nature. Looking at the data from what they have done and the various outcomes is like looking at the diagnostics of a software program. If problems or inefficiencies exist, it is likely the team or certain individuals will spot them. In the same way that engineers fix bugs or tune programs, as they more closely analyze their own metrics, they may identify ways to tune their performance for the better.

There's a montage in the middle of the movie "Moneyball" where Beane and his assistant are interacting with the baseball players. It's my favorite part of the movie. They are sharing their statistics-inspired ideas of how games are won and lost, and making small suggestions about how the players can improve. Albeit briefly, we see in the movie that the players themselves begin to figure it out. Beane, his assistant, the coaches and the players are all a team. Knowledge is found, shared, and internalized. As you incorporate metrics-review and metrics-analysis into your development activities, you may see a similar organic process of understanding and evolution take place.

Set short-term, reasonable goals

Small improvements and adjustments can be significant. In baseball, one or two runs can be the difference between a win and a loss, and a few wins over the course of a long season can be the difference between second place and first. On a software team, a 2% productivity improvement equates to just 10 minutes "gained" per eight-hour workday, but that translates to an "extra" week of coding for every developer each year. The larger the team, the more those small improvements add up.
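The arithmetic behind that claim is easy to check. A quick sketch, assuming an eight-hour workday and roughly 240 working days in a year (both are assumptions, not figures from the article):

    minutes_per_day = 8 * 60                  # eight-hour workday
    gain_per_day = 0.02 * minutes_per_day     # 2% improvement, about 9.6 minutes
    working_days = 240                        # rough working days per year
    gain_hours_per_year = gain_per_day * working_days / 60
    print(gain_per_day, gain_hours_per_year)  # 9.6 minutes/day, about 38 hours/year

Thirty-eight hours is essentially one working week per developer, which is where the "extra week of coding" figure comes from.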

Once you begin to keep and review metrics regularly, the next step is to identify areas that you believe can be improved. There is no rush to do this. You might, for example, share and review metrics as a team for many months before you begin to discuss specific areas that might be improved. Over time, having reviewed their contributions and outcomes more closely, certain individuals may themselves begin to see ways to improve. For example, an engineer whose productivity is inconsistent may realize a way to become more consistent. Or the team may realize there are group goals they'd like to achieve. If, for example, regular examination makes everyone realize that the rate of new production bugs found matches or exceeds the rate of bugs being fixed, the team might decide they'd like to be more focused on turning that trend around.

It's fine — maybe even better — to target the easy wins to start. It gets the team going and allows you to test and demonstrate the potential usefulness of metrics in setting and achieving improvement goals. Later, you can extend and apply these techniques to other areas for different, and possibly more challenging, types of improvements.

When you have identified an area for improvement, either for an individual or a group, you can identify the associated metric and the target goal. Pick a reasonable goal, especially when you are first testing this process, remembering that small incremental improvements can still have significant effects. Once the goal is set, you can use your metrics to track progress month by month.

To summarize, the simple process for employing metrics to make improvements is:

  1. Gather and review historical metrics for a specific area.
  2. Set metrics-based goals for improvement in that area.
  3. Track the metrics at regular intervals to show progress toward the goal.

The other thing to keep in mind when getting started is that it's best to focus on goals that can be achieved quickly. Like any test case, you want to see results early to know if it's working. If you target areas that can show improvement in less than three months, for example, then you can evaluate more quickly whether utilizing metrics is helpful or not. If the process works, then these early and easier wins can help build support for longer-term experiments.

Take one metric at a time

It pays to look at one metric at a time. Again, this is similar to tuning a software program. In that case, you instrument the code or implement other techniques to gather performance metrics and identify key bottlenecks. Once the improvable areas are identified, you work on them one at a time, tuning the code and then testing the results. When one area is completed, you move on to the next.

Focusing the team and individuals on one key metric and one area at a time allows everyone to apply their best effort to improve that area. As with anything else, if you give people too many goals, you run the risk of making it harder to achieve any of the goals, and you also make it harder to pinpoint the cause of failure should that occur.

If you are just starting with metrics, you might have the whole team focus on the same metric and goal. But over time you can have individuals working on different areas with separate metrics, as long as each person is focused on one area at a time. For example, some engineers might be working to improve their personal productivity while others are working to improve their quality.

Once an area is "tuned" and improvement goals are reached, you'll want to continue reviewing metrics to make sure you don't fall back. Then you can move on to something else.

Build on small successes

Let's say that you begin reviewing metrics on production bug counts or development productivity per iteration; then you set some small improvement targets; and after a time, you reach and sustain those goals. Maybe, for example, you reduce a backlog of production bugs by 10%. Maybe this came through extra effort for a short period of time, but at the end, the team determines that metrics helped. Perhaps the metrics helped increase everyone's understanding of the "problem" and helped maintain a focus on the goal and results.

While this is a fairly trivial example, even a small success like this can help as a first step toward more. If you obtain increased support for metrics, and hopefully some proof of the value, then you are in a great position to gradually expand the metrics you gather and use.

In the long run, the areas that you can measure and analyze go well beyond the trivial. For example, you might expand beyond core software development tasks and skills, beyond productivity and quality, to begin to look at areas like innovation, communication skills, or poise under pressure. Clearly, measuring such areas takes much more thought and effort. To get there, you can build on small, incremental successes using metrics along the way. In so doing, you will not only be embedding metrics-driven analysis in your engineering process, but also in your software development culture. This can extend into other important areas, too, such as how you target and evaluate potential recruits.

Moneyball-type techniques are applicable to small and big software teams alike. They can apply in organizations that are highly successful as well as those just starting out. Bigger teams and larger organizations can sometimes afford to be less efficient, but most can't, and smaller teams certainly don't have this luxury. Beane's great success was making his organization highly competitive while spending far less money (hence the term "Moneyball"). To do this, his team had to be smarter and more efficient. It's a goal to which we can all aspire.

Jonathan Alexander looked at the connection between Moneyball and software teams in a related webcast.

Photo: scoreboard by popofatticus, on Flickr


October 10 2011

Moneyball for software engineering

While sports isn't always a popular topic in software engineering circles, if you find yourself sitting in a theater watching "Moneyball," you might wonder whether the story's focus on statistics and data can be applied to the technical domain. I think it can, and a growing number of software companies are starting to use Moneyball-type techniques to build better teams. Here, I'd like to give you a brief introduction to some of these ideas and how they might be applied.

The basic idea of the Moneyball approach is that organizations can use "objective" statistical analysis to be smarter and build more competitive teams, which means:

  • Studying player skills and contributions much more carefully and gathering statistics on many more elements of what players do to categorize their skills.
  • Analyzing team results and player statistics to discover patterns and better determine what skills and strategies translate to winning.
  • Identifying undervalued assets, namely those players, skills, and strategies that are overlooked but which contribute heavily to winning.
  • Creating strategies to obtain undervalued assets and build consistent, winning teams.

While smaller-market baseball teams such as the Oakland A's were the original innovators of Moneyball, these techniques have now spread to all teams. The "old-school" way of identifying talent and strategies based on experience and subjective opinion has given way to a "new school" that relies on statistical analysis of situations, contributions, and results. Many sports organizations now employ statisticians with wide-ranging backgrounds to help evaluate players and prospects, and even to analyze in-game coaching strategies. Paul DePodesta, Beane's assistant, who was featured in the book (and fictionalized in the film), majored in economics at Harvard.

In software engineering, most of us work in teams. But few of us utilize metrics to identify strengths and weaknesses, set and track goals, or evaluate strategies. Like baseball teams of the past, our strategies for hiring, team building, project management, performance evaluation, and coaching are mostly based on experience. For those willing to take the time to make a more thorough analysis through metrics, there is an opportunity to make software teams better — sometimes significantly better.

Measure skills and contributions

It starts with figuring out ways to measure the variety of skills and contributions that individuals make as part of software teams. Consider how many types of metrics could be gathered and prove useful for engineering teams. For example, you could measure:

  • Productivity by looking at the number of tasks completed or the total complexity rating for all completed tasks.
  • Precision by tracking the number of production bugs and related customer support issues.
  • Utility by keeping track of how many areas someone works on or covers.
  • Teamwork by tallying how many times someone helps or mentors others, or demonstrates behavior that motivates teammates.
  • Innovation by noting the times when someone invents, innovates, or demonstrates strong initiative to solve an important problem.
  • Intensity by keeping track of relative levels and trends (increases or decreases) in productivity, precision, or other metrics.
  • Effort by looking at the number of times someone goes above and beyond what is expected to fix an issue, finish a project, or respond to a request.

These might not match the kinds of metrics you have used or discussed before for software teams. But gathering and analyzing metrics like these for individuals and teams will allow you to begin to characterize their dynamics, strengths, and weaknesses. A rough sketch of what such a tally might look like follows.
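As a loose illustration of what gathering these might look like, here is a minimal Python sketch of a per-engineer tally. The category names mirror the list above; the engineers, events, and amounts are entirely hypothetical.

    from collections import Counter

    CATEGORIES = {"productivity", "precision", "utility", "teamwork",
                  "innovation", "intensity", "effort"}

    # One running tally of observations per engineer.
    metrics = {"alice": Counter(), "bob": Counter()}

    def record(engineer, category, amount=1):
        """Add one observation: a completed task, a production bug,
        a mentoring session, an after-hours fix, and so on."""
        assert category in CATEGORIES
        metrics[engineer][category] += amount

    record("alice", "productivity", 5)   # five tasks completed this iteration
    record("alice", "teamwork")          # helped a teammate debug an issue
    record("bob", "precision", 2)        # two production bugs traced to his features
    print(metrics["alice"])              # Counter({'productivity': 5, 'teamwork': 1})

However you store them, the goal is the same: a small, consistent set of numbers per person and per iteration that you can look back on later.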


Web 2.0 Summit, being held October 17-19 in San Francisco, will examine "The Data Frame" — focusing on the impact of data in today's networked economy.

Save $300 on registration with the code RADAR

Measure success

In addition to measuring skills and contributions for software teams, to apply Moneyball strategies you will need a way to measure success. In sports, you have wins and losses, so that's much simpler. In software, a team's success or failure can be judged in many ways, depending on the kind of software it builds, such as:

  • Looking at the number of users acquired or lost.
  • Calculating the impact of software enhancements that deliver benefit to existing users.
  • Tracking the percentage of trial users who convert to customers.
  • Factoring in the number of support cases resulting from user problems.

From a Moneyball perspective, measuring the relative success of software projects and teams is a key step to an objective analysis of team dynamics and strategies. In the end, you'll want to replicate the patterns of teams with excellent results and avoid the patterns of teams with poor results.
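One way to make those outcomes comparable across teams or releases is to roll a few of them into a single relative score. The following Python sketch is purely illustrative; the field names and weights are invented placeholders, and the only thing that matters is applying the same formula consistently so the results can be compared.

    def success_score(outcomes):
        """Combine a few outcome measures into one comparable number.
        The weights are arbitrary placeholders for illustration."""
        return (outcomes["users_gained"]
                - outcomes["users_lost"]
                + outcomes["trials"] * outcomes["trial_conversion_rate"]
                - 0.5 * outcomes["support_cases"])

    release_a = {"users_gained": 400, "users_lost": 50, "trials": 1000,
                 "trial_conversion_rate": 0.12, "support_cases": 80}
    release_b = {"users_gained": 300, "users_lost": 20, "trials": 800,
                 "trial_conversion_rate": 0.20, "support_cases": 30}
    print(success_score(release_a), success_score(release_b))   # 430.0 425.0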

Focus on relative comparisons

One important Moneyball concept is that exact values don't matter; relative comparisons do. For example, if your system of measurement shows that one engineer is 1% more productive than another, what that really tells you is that they are essentially the same in productivity. But if one engineer is 50% or 100% more productive than another, then that shows an important difference between their skills.

You can think of metrics as a system for categorization. Ultimately, you are trying to rate individuals or teams as high, medium, or low in a set of chosen categories based on their measured values (or pick any other relative scale). You want to know whether an engineer has high productivity compared to others or shows a high level of teamwork or effort. This is the information you need at any point in time or over the long term to identify patterns of success and areas for improvement on teams. For example, if you know that your successful teams have more engineers with high levels of innovation, intensity, or effort, or that 80% of the people on a poorly performing team exhibit a low level of teamwork, then those are useful insights.

Knowing that your metrics are for categorization frees you up from needing exact, precise measurements. This means a variety of mechanisms to gather metrics can be used, including personal observation, without as much concern that the metrics are completely accurate. You can assume a certain margin of error (plus or minus 5-10% for example) and still gather meaningful data that can be useful in determining the relative level of people's skills and a team's success.
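Here is a minimal sketch of that kind of categorization, assuming each value is simply bucketed relative to the team median; the 25% thresholds are arbitrary placeholders.

    def categorize(values, low=0.75, high=1.25):
        """Rate each value 'low', 'medium', or 'high' relative to the team median.
        Values within the thresholds land in the same bucket, which tolerates a
        rough margin of error in the underlying measurements."""
        ordered = sorted(values.values())
        median = ordered[len(ordered) // 2]
        ratings = {}
        for name, value in values.items():
            ratio = value / median if median else 1.0
            ratings[name] = "high" if ratio >= high else "low" if ratio <= low else "medium"
        return ratings

    productivity = {"alice": 42, "bob": 40, "carol": 21}   # e.g. complexity completed
    print(categorize(productivity))
    # {'alice': 'medium', 'bob': 'medium', 'carol': 'low'}
    # alice and bob are effectively the same; carol stands out, which is the useful signal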

Identify roles on the team

With metrics like those discussed above, you can begin to understand the dynamics of teams and the contributions of individuals that lead to success. A popular concept in sports that is applicable to software engineering is that of "roles." On sports teams, you have players that fill different roles; for example, in baseball you have roles like designated hitters, pitchers, relievers, and defensive specialists. Each role has specific attributes and can be analyzed by specific statistics.

Roles and the people in those roles are not necessarily static. Some people are truly specialists, happily filling a specific role for a long period of time. Some people fill one role on a team for a while, then move into a different role. Some people fill multiple roles. The value of identifying and understanding the roles is not to pigeonhole people, but to analyze the makeup and dynamics of various teams, particularly with an eye toward understanding the mix of roles on successful teams.

By gathering and examining metrics for software engineering teams, you can begin to see the different types of roles that comprise the teams. Using sports parlance, you can imagine that a successful software team might have people filling a variety of roles, such as:

  • Scorers (engineers who have a high level of productivity or innovation).
  • Defenders (engineers who have a high level of precision or utility).
  • Playmakers (engineers who exhibit a high level of teamwork).
  • Motivators (engineers who exhibit a high level of intensity or effort).

One role or skill is not necessarily more valuable than another. Objective analysis of results may, in fact, lead you to conclude that certain roles are underappreciated. Having engineers who are strong in teamwork or effort, for example, may be as crucial to team success as having those who are highly productive. Also, you might come to realize that the perceived value of other roles or skills doesn't match the actual results. For example, you might find that teams composed predominantly of highly productive engineers are not necessarily more successful if they are weak in other areas, such as precision or utility.
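Continuing the sketch, mapping the high/medium/low ratings from the earlier example onto these roles might look something like the following. The rules are hypothetical and purely for illustration, not a prescription.

    def roles(ratings):
        """Given per-category ratings ('low'/'medium'/'high') for one engineer,
        return the illustrative roles they might fill."""
        found = []
        if "high" in (ratings.get("productivity"), ratings.get("innovation")):
            found.append("scorer")
        if "high" in (ratings.get("precision"), ratings.get("utility")):
            found.append("defender")
        if ratings.get("teamwork") == "high":
            found.append("playmaker")
        if "high" in (ratings.get("intensity"), ratings.get("effort")):
            found.append("motivator")
        return found or ["role player"]

    print(roles({"productivity": "high", "teamwork": "high", "precision": "medium"}))
    # ['scorer', 'playmaker']

Tallying which roles are present on each team, and comparing that against your success measures, is one simple way to start looking for patterns.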

Develop strategies to improve teams

With metrics to help you identify and validate the roles that people play on software teams, and thereby identify the relative strengths and weaknesses of your teams, you can implement strategies to improve those teams. If you identify that successful teams have certain strengths that other teams lack, you can work on strengthening those aspects. If you identify "undervalued assets," meaning skills or roles that weren't fully appreciated, you can bring greater attention to developing or adding those to a team.

There are a variety of techniques you can use to add more of those important skills and undervalued assets to your teams. For example, using Moneyball-like sports parlance, you can:

  • Recruit based on comparable profiles, or "comps" — profile your team, identify the roles you need to fill, and then recruit engineers that exhibit the strengths needed in those roles.
  • Improve your farm system — whether you use interns, contract-to-perm, or you promote from within, gather metrics on the participants and then target them for appropriate roles.
  • Make trades — re-organize teams internally to fill roles and balance strengths.
  • Coach the skills you need — identify those engineers with aptitude to fill specific roles and use metrics to set goals for development.

Blend metrics and experience

One mistaken conclusion that some people might reach in watching the movie "Moneyball" is that statistical analysis has completely supplanted experience and subjective analysis. That is far from the truth. The most effective sports organizations have added advanced statistical analysis as a way to uncover hidden opportunities and challenge assumptions, but they mix that with the personal observations of experienced individuals. Both sides factor into decisions. For example, a sports team is more likely to draft a new player if both the statistical analysis of that player and the in-person observations of experienced scouts identify the player as a good prospect.

In the same way, you can mix metrics-based analysis with experience and best practices in software engineering. Like sports teams, you can use metrics to identify underappreciated skills, identify opportunities for improvement, or challenge assumptions about team strategies and processes by measuring results. But such analysis is also limited and therefore should be balanced by experience and personal observation. By employing metrics in a balanced way, you can also alleviate concerns that people may have about metrics turning into some type of divisive grading system to which everyone is blindly subjected.

In conclusion, we shouldn't ignore the ideas in "Moneyball" because it's "just sports" (and now a Hollywood movie) and we are engineers. There are many smart ideas and a lot of smart people who are pioneering new ways to analyze individuals and teams. The field of software engineering has team-based dynamics similar to sports teams. Using similar techniques, with metrics-based analysis augmenting our experience and best practices, we have the same opportunity to develop new management techniques that can result in stronger teams.

Jonathan Alexander discussed the connection between "Moneyball" and software teams in a recent webcast.

Photo: TechEd 617 by betsyweber, on Flickr


September 29 2011

Strata Week: Facebook builds a new look for old data

Here are a few of the data stories that caught my attention this week.

Facebook data and the "story of your life"

Last week at its F8 developer conference, Facebook announced several important changes, including updates to its Open Graph that enable what it calls "frictionless sharing" as well as a more visual version of users' profiles — the "Timeline." As is always the case with a Facebook update, particularly one that involves a new UI, there have been a number of vocal responses. And not surprisingly, given Facebook's history, there have been a slew of questions raised about how the changes will impact users' privacy.

Facebook Timeline

Some of those concerns stem from the fact that now, with the Timeline, a person's entire (Facebook) history can be accessed and viewed far more easily. On stage at F8, CEO Mark Zuckerberg described the Timeline as a way to "curate the story of your life." But whether or not you view the new visual presentation of your Facebook profile in such grand, sweeping terms, it's clear that the new profile is a way for Facebook to re-present user data. Some of this data may be things that would otherwise have been forgotten — not just banal status updates, but progress in games, friendships made, relationships broken, and so on. As Facebook describes it:

The way your profile works today, 99 percent of the stories you share vanish. The only way to find the posts that matter is to click 'Older Posts' at the bottom of the page. Again. And again. Imagine if there was an easy way to rediscover the things you shared, and collect all your best moments in a single place.

That new way was helped, in part, by Facebook's hiring earlier this year of data visualization experts Nicholas Felton and Ryan Case. But turning old Facebook data into new user profiles has caused some consternation, including the insistence by Silicon Filter's Frederic Lardinois that "sorry, Facebook, but the stuff I share on your site is not the story of my life."

But it wasn't an announcement from the stage at F8 that raised the most questions about Facebook data this week. Rather, it was a post by Nik Cubrilovic arguing that "logging out of Facebook is not enough." Cubrilovic discovered that even after you log out of Facebook, its cookies persist. "With my browser logged out of Facebook," he wrote, "whenever I visit any page with a Facebook like button, or share button, or any other widget, the information, including my account ID, is still being sent to Facebook. The only solution to Facebook not knowing who you are is to delete all Facebook cookies."

Facebook responded to Cubrilovic and addressed the issue so that upon logout the cookie containing a user's ID is destroyed.

Web 2.0 Summit, being held October 17-19 in San Francisco, will examine "The Data Frame" — focusing on the impact of data in today's networked economy.

Save $300 on registration with the code RADAR

Faster than the speed of light?

Just as some tech pundits were busy debating whether the latest changes to Facebook had "changed everything," an observation by the physicists at CERN also prompted many to say the same thing — "this changes everything" — in terms of what we know about physics and relativity.

But not so fast. The scientists at the particle accelerator in Switzerland have been measuring how neutrinos change their properties as they travel. According to their measurements (some three years' worth of data and 15,000 calculations), the neutrinos appeared to have traveled from Geneva to Gran Sasso, Italy, faster than the speed of light. According to Einstein, that's not possible: nothing can exceed the speed of light.

CERN researchers released the data in hopes that other scientists can help shed some light on the findings. The scientists at Fermilab are among those who will pore over the information.

For those who need a little brushing up on their physics, an article at CNET has a good illustration of the experiment. High school physics teacher John Burk also has a great explanation of the discovery and the calculations behind it as well as insights into why the discovery and the discussions demonstrate good science (but in some cases, lousy journalism).

Data and baseball (and Brad Pitt)

The film "Moneyball," based on the 2003 bestseller by Michael Lewis, was released this past week. It's the story of Oakland Athletics general manager Billy Beane — played in the film by Brad Pitt — who helped revive the franchise by using data analysis to assemble his team.

Of course, stats and baseball have long gone hand in hand, but Beane argued that on-base percentage (OBP) was a better indicator of a player's strengths than batting average alone. By looking at the numbers that other teams weren't, Beane was able to assemble a team made up of players that other teams viewed as less valuable. (And by extension, of course, this helped Beane assemble that team at a much lower price.)

Thanks, in part, to Pitt's star power, a story about data making a difference is a Hollywood hit, and the movie's release has spurred others to ask if that sort of strategy can work elsewhere. In a post called "Moneyball for Startups," SplatF's Dan Frommer asked if it would be applicable to tech investing, a question that received a number of interesting follow-up discussions from around the web.


In a recent webcast, "Codermetrics" author Jonathan Alexander examined software teams through a "Moneyball" lens.

Got data news?

Feel free to email me.

