April 30 2013

Leading Indicators

In a conversation with Q Ethan McCallum (who should be credited as co-author), we wondered how to evaluate data science groups. If you’re looking at an organization’s data science group from the outside, possibly as a potential employee, what can you use to evaluate it? It’s not a simple problem under the best of conditions: you’re not an insider, so you don’t know the full story of how many projects the group has tried, whether they succeeded or failed, how the data group relates to management and other departments, and all the other things you’d like to know but will never be told.

Our starting point was remote: Q told me about Tyler Brûlé’s travel writing for the Financial Times (behind a paywall, unfortunately), in which he says that a club sandwich is a good proxy for hotel quality: you go into the restaurant and order a club sandwich. A club sandwich isn’t hard to make: there’s no secret recipe or technique that’s going to make Hotel A’s sandwich significantly better than Hotel B’s. But it’s easy to cut corners on ingredients and preparation. And if a hotel is cutting corners on its club sandwiches, it’s probably cutting corners in other places.

This reminded me of when my daughter was in first grade, and we looked (briefly) at private schools. All the schools talked the same talk. But if you looked at classes, it was pretty clear that the quality of the music program was a proxy for the quality of the school. After all, it’s easy to shortchange music, and both hard and expensive to do it right. Oddly enough, using the music program as a proxy for evaluating school quality has continued to work through middle school and (public) high school. It’s the first thing to cut when the budget gets tight; and if a school has a good music program with excellent teachers, they’re probably not shortchanging the kids elsewhere.

How does this connect to data science? What are the proxies that allow you to evaluate a data science program from the “outside,” on the information that you might be able to cull from company blogs, a job interview, or even a job posting? We came up with a few ideas:

  • Are the data scientists simply human search engines, or do they have real projects that allow them to explore and be curious? If they have management support for learning what can be learned from the organization’s data, and if management listens to what they discover, they’re accomplishing something significant. If they’re just playing Q&A with the company data, finding answers to specific questions without providing any insight, they’re not really a data science group.
  • Do the data scientists live in a silo, or are they connected with the rest of the company? In Building Data Science Teams, DJ Patil wrote about the value of seating data scientists with designers, marketers, and the entire product group, so that they don’t do their work in isolation and can bring their insights to bear on all aspects of the company.
  • When the data scientists do a study, is the outcome predetermined by management? Is it OK to say “we don’t have an answer” or to come up with a solution that management doesn’t like? Granted, you aren’t likely to be able to answer this question without insider information.
  • What do job postings look like? Does the company have a mission and know what it’s looking for, or are they asking for someone with a huge collection of skills, hoping that they will come in useful? That’s a sign of data science cargo culting.
  • Does management know what their tools are for, or have they just installed Hadoop because it’s what the management magazines tell them to do? Can managers talk intelligently to data scientists?
  • What sort of documentation does the group produce for its projects? Like a club sandwich, it’s easy to shortchange documentation.
  • Is the business built around the data? Or is the data science team an add-on to an existing company? A data science group can be integrated into an older company, but you have to ask a lot more questions; you have to worry a lot more about silos and management relations than you do in a company that is built around data from the start.

Coming up with these questions was an interesting thought experiment; we don’t know whether the approach holds water, but we suspect it does. Any ideas and opinions?

September 09 2012

The many sides to shipping a great software project

Chris Vander Mey, CEO of Scaled Recognition and author of a new O’Reilly book, Shipping Greatness, lays out in this video some of the deep lessons he learned during his years working on high-impact, high-priority projects at Google and Amazon.

Chris takes a very expansive view of project management, stressing the crucial decisions and attitudes that leaders need to take at every stage from the team’s initial mission statement through the design, coding, and testing to the ultimate launch. By merging technical, organizational, and cultural issues, he unravels some of the magic that makes projects successful.

Highlights from the full video interview include:

  • Some of the projects Chris has shipped. [Discussed at the 0:30 mark]
  • How to listen to your audience while giving a presentation. [Discussed at the 1:24 mark]
  • Deadlines and launches. [Discussed at the 6:40 mark]
  • The importance of keeping the team focused on the user experience of the launch. [Discussed at the 12:15 mark]
  • Creating an API, and its relationship to requirements and Service Oriented Architectures. [Discussed at the 15:27 mark]
  • What integration testing can accomplish. [Discussed at the 22:36 mark]

You can view the entire conversation in the following video:

January 30 2012

Moneyball for software engineering, part 2

Brad Pitt was recently nominated for the Best Actor Oscar for his portrayal of Oakland A's general manager Billy Beane in the movie "Moneyball," which was also nominated for Best Picture. If you're not familiar with "Moneyball," the subject of the movie and the book on which it's based is Beane, who helped pioneer an approach to improve baseball team performance based on statistical analysis. Statistics were used to help find valuable players that other teams overlooked and to identify winning strategies for players and coaches that other teams missed.

Last October, when the movie was released, I wrote an article discussing how Moneyball-type statistical analysis can be applied to software teams. As a follow-up, and in honor of the recognition that the movie and Pitt are receiving, I thought it might be interesting to spend a little more time on this, focusing on the process and techniques that a manager could use to introduce metrics to a software team. If you are intrigued by the idea of tracking and studying metrics to help find ways to improve, the following are suggestions of how you can begin to apply these techniques.

Use the data that you have

The first puzzle to solve is how to get metrics for your software team. There are all kinds of things you could "measure." For example, you could track how many tasks each developer completes, the complexity of each task, the number of production bugs related to each feature, or the number of users added or lost. You could also measure less obvious activities or contributions, such as the number of times a developer gets directly involved in customer support issues or the number of times someone works after hours.

In the movie "Moneyball," there are lots of scenes showing complicated-looking statistical calculations, graphs, and equations, which makes you think that the statistics Billy Beane used were highly sophisticated and advanced. But in reality, most of the statistics that he used were very basic and were readily available to everyone else. The "innovation" was to examine the statistics more closely in order to discover key fundamentals that contributed to winning. Most teams, Beane realized, disregarded these fundamentals and failed to use them to find players with the appropriate skills. By focusing on these overlooked basics, the Oakland A's were able to gain a competitive edge.

To apply a similar technique to software teams, you don't need hard-to-gather data or complex metrics. You can start by using the data you have. In your project management system, you probably have data on the quantity and complexity of development tasks completed. In your bug-tracking and customer support systems, you probably have data on the quantity, rate, and severity of product issues. These systems also typically have simple reporting or export mechanisms that make the data easy to access and use.

Looking at this type of data and the trends from iteration to iteration is a good place to start. Most teams don't devote time to examining historical trends and various breakdowns of the data by individual and category. For each individual and the team as a whole, you can look at all your metrics, focusing, at least to start, on fundamentals like productivity and quality. This means examining the history of data captured in your project management, bug tracking, and customer support systems. As you accumulate data over time, you can also analyze how the more recent metrics compare to those in the past. Gathering and regularly examining fundamental engineering and quality metrics is the first step in developing a new process for improving your team.
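To make that concrete, here is a minimal sketch of the kind of summary you can pull from data you already have. It assumes a list of completed tasks exported from your project management system; the field names (iteration, assignee, complexity) are illustrative, not a required format.

```javascript
// Minimal sketch: summarize completed tasks per iteration from an export of
// your project management system. Field names are assumptions; map them to
// whatever your tracker actually provides.
const completedTasks = [
  { iteration: "2012-01", assignee: "alice", complexity: 3 },
  { iteration: "2012-01", assignee: "bob",   complexity: 1 },
  { iteration: "2012-02", assignee: "alice", complexity: 2 },
  // ...more exported rows
];

function summarizeByIteration(tasks) {
  const summary = {};
  for (const task of tasks) {
    const bucket = summary[task.iteration] || (summary[task.iteration] = { tasks: 0, complexity: 0 });
    bucket.tasks += 1;                     // count of tasks completed
    bucket.complexity += task.complexity;  // sum of task complexity
  }
  return summary;
}

console.log(summarizeByIteration(completedTasks));
// { '2012-01': { tasks: 2, complexity: 4 }, '2012-02': { tasks: 1, complexity: 2 } }
```

Even a tabulation this simple gives you something concrete to compare from iteration to iteration.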

Establish internal support

If you saw the movie "Moneyball," you know that much was made of the fact that some of the old, experienced veterans had a hard time getting on board with Beane and his newfangled ideas. The fact that Beane had statistics to back up his viewpoints didn't matter. The veterans didn't get it, and they didn't want to. They were comfortable relying on experience and the way things were already done. Some of them saw the new ideas and approaches — and the young guys who were touting them — as a threat.

If you start talking about gathering and showing metrics, especially individual metrics that might reveal how one person compares to another, some people will likely have concerns. One way to avoid serious backlash, of course, is to move slowly and gradually. The other thing you might do to decrease any negative reaction is to cultivate internal supporters. If you can get one, two, or a few team members on board with the idea of reviewing metrics regularly as a way to identify areas for improvement, they can be a big help to allay the fears of others, if such fears arise.

How do you garner support? Try sitting down individually with as many team members as possible to explain what you hope to do. It's the kind of discussion you might have informally over lunch. If you are the team manager, you'll want to explain carefully that historical analysis of metrics isn't designed to assign blame or grade performance. The goal is to spend more time examining the past in the hopes of finding incremental ways to improve. Rather than just reviewing the past through memory or anecdotes, you hope to get more accuracy and keener insights by examining the data.

After you talk with team members individually, you'll have a sense for who's supportive, who's ambivalent, and who's concerned. If you have a majority of people who are concerned and a lack of supporters, then you might want to rethink your plan and try to allay concerns before you even start.

Once you begin gathering and reviewing metrics as a team, it's a good idea to go back and check in periodically with both the supporters and the naysayers, either individually or in groups. You should get their reactions and input on suggestions to improve. If support is going down and concern is going up, then you'll need to make adjustments, or your use of metrics is headed for failure. If support is going up and concern is going down, then you are on the right track. Beane didn't get everyone to buy into his approach, but he did get enough internal supporters to give him a chance, and more were converted once they saw the results.


Embed metrics in your process

While there might be some benefit to gathering and looking at historical metrics on your own, to gain greater benefits, you'll want to share metrics with your team. The best time to share and review metrics is in meetings that are part of your regular review and planning process. You can simply add an extra element to those meetings to review metrics. Planning or review meetings that occur on a monthly or quarterly basis, for example, are appropriate forums. Too-frequent reviews, however, may become repetitive and wasteful. If you have biweekly review and planning meetings, for example, you might choose to review metrics every other meeting rather than every time.

To make the review of metrics effective and efficient, you can prepare the data for presentation, possibly summarizing key metrics into spreadsheets and graphs, or a small number of presentation slides (examples and resources for metric presentation can be found and shared at codermetrics.org). You will want to show your team summary data and individual breakdowns for the metrics gathered. For example, if you are looking at productivity metrics, then you might look at data such as:

  • The number and complexity of tasks completed in each development iteration.
  • A breakdown of task counts completed grouped by complexity.
  • The total number of tasks and sum of complexity completed by each engineer.
  • The trend of task counts and complexity over multiple development iterations.
  • A comparison of the most recent iteration to the average, highs, and lows of past iterations.
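As a rough illustration of a couple of the breakdowns above, the following sketch computes per-engineer totals and compares the most recent iteration to the average of past ones. The data shape and names are made up for the example.

```javascript
// Illustrative only: per-engineer totals and latest-vs-average comparison.
const iterations = [
  { name: "2012-01", tasks: [{ assignee: "alice", complexity: 3 }, { assignee: "bob", complexity: 1 }] },
  { name: "2012-02", tasks: [{ assignee: "alice", complexity: 2 }, { assignee: "bob", complexity: 5 }] },
  { name: "2012-03", tasks: [{ assignee: "alice", complexity: 4 }, { assignee: "bob", complexity: 2 }] },
];

// Total tasks and summed complexity completed by each engineer.
const perEngineer = {};
for (const iteration of iterations) {
  for (const task of iteration.tasks) {
    const row = perEngineer[task.assignee] || (perEngineer[task.assignee] = { tasks: 0, complexity: 0 });
    row.tasks += 1;
    row.complexity += task.complexity;
  }
}

// Compare the most recent iteration's total complexity with the average of earlier ones.
const totals = iterations.map(it => it.tasks.reduce((sum, t) => sum + t.complexity, 0));
const latest = totals[totals.length - 1];
const pastAverage = totals.slice(0, -1).reduce((a, b) => a + b, 0) / (totals.length - 1);

console.log(perEngineer);
console.log("latest iteration:", latest, "past average:", pastAverage.toFixed(1));
```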

To start, you are just trying to get the team in the habit of looking at recent and historical data more closely, and it's not necessary to have a specific intent defined. The goal, when you begin, is to just "see what you can see." Present the data, and then foster observations and open discussion. As the team examines its metrics, especially over the course of time, patterns may emerge. Individual and team ideas about what the metrics reveal and potential areas of improvement may form. Opinions about the usefulness of specific data or suggestions for new types of metrics may come out.

Software developers are smart people. They are problem-spotters and problem-solvers by nature. Looking at the data from what they have done and the various outcomes is like looking at the diagnostics of a software program. If problems or inefficiencies exist, it is likely the team or certain individuals will spot them. In the same way that engineers fix bugs or tune programs, as they more closely analyze their own metrics, they may identify ways to tune their performance for the better.

There's a montage in the middle of the movie "Moneyball" where Beane and his assistant are interacting with the baseball players. It's my favorite part of the movie. They are sharing their statistics-inspired ideas of how games are won and lost, and making small suggestions about how the players can improve. Albeit briefly, we see in the movie that the players themselves begin to figure it out. Beane, his assistant, the coaches and the players are all a team. Knowledge is found, shared, and internalized. As you incorporate metrics-review and metrics-analysis into your development activities, you may see a similar organic process of understanding and evolution take place.

Set short-term, reasonable goals

Small improvements and adjustments can be significant. In baseball, one or two runs can be the difference between a win or a loss, and a few wins over the course of a long season can be the difference between second place and first. On a software team, a 2% productivity improvement equates to just 10 minutes "gained" per eight-hour workday, but that translates to an "extra" week of coding for every developer each year. The larger the team, the more those small improvements add up.
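The arithmetic behind that claim is easy to check; the back-of-the-envelope sketch below assumes an eight-hour day and roughly 240 working days per year.

```javascript
// Back-of-the-envelope check of the "2% = an extra week per year" claim.
const minutesGainedPerDay = 8 * 60 * 0.02;                    // ~9.6 minutes per eight-hour day
const hoursGainedPerYear = (minutesGainedPerDay * 240) / 60;  // ~38 hours, about one working week
console.log(minutesGainedPerDay.toFixed(1), hoursGainedPerYear.toFixed(0));
```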

Once you begin to keep and review metrics regularly, the next step is to identify areas that you believe can be improved. There is no rush to do this. You might, for example, share and review metrics as a team for many months before you begin to discuss specific areas that might be improved. Over time, having reviewed their contributions and outcomes more closely, certain individuals may themselves begin to see ways to improve. For example, an engineer whose productivity is inconsistent may realize a way to become more consistent. Or the team may realize there are group goals they'd like to achieve. If, for example, regular examination makes everyone realize that the rate of new production bugs found matches or exceeds the rate of bugs being fixed, the team might decide they'd like to be more focused on turning that trend around.

It's fine — maybe even better — to target the easy wins to start. It gets the team going and allows you to test and demonstrate the potential usefulness of metrics in setting and achieving improvement goals. Later, you can extend and apply these techniques to other areas for different, and possibly more challenging, types of improvements.

When you have identified an area for improvement, either for an individual or a group, you can identify the associated metric and the target goal. Pick a reasonable goal, especially when you are first testing this process, remembering that small incremental improvements can still have significant effects. Once the goal is set, you can use your metrics to track progress month by month.

To summarize, the simple process for employing metrics to make improvements is:

  1. Gather and review historical metrics for a specific area.
  2. Set metrics-based goals for improvement in that area.
  3. Track the metrics at regular intervals to show progress toward the goal.
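Here is a hedged sketch of step 3, assuming the metric being tracked is the count of open production bugs and the goal is a 10% reduction from the baseline; all numbers are made up.

```javascript
// Track one metric against a goal at regular intervals (step 3 above).
const baseline = 120;                         // open production bugs when the goal was set
const goal = Math.round(baseline * 0.9);      // target: 10% reduction
const monthlyReadings = [120, 118, 112, 107]; // hypothetical month-by-month counts

monthlyReadings.forEach((count, month) => {
  const status = count <= goal ? "goal reached" : (count - goal) + " above goal";
  console.log("month " + (month + 1) + ": " + count + " open bugs (" + status + ")");
});
```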

The other thing to keep in mind when getting started is that it's best to focus on goals that can be achieved quickly. Like any test case, you want to see results early to know if it's working. If you target areas that can show improvement in less than three months, for example, then you can evaluate more quickly whether utilizing metrics is helpful or not. If the process works, then these early and easier wins can help build support for longer-term experiments.

Take one metric at a time

It pays to look at one metric at a time. Again, this is similar to tuning a software program. In that case, you instrument the code or implement other techniques to gather performance metrics and identify key bottlenecks. Once the improvable areas are identified, you work on them one at a time, tuning the code and then testing the results. When one area is completed, you move on to the next.

Focusing the team and individuals on one key metric and one area at a time allows everyone to apply their best effort to improve that area. As with anything else, if you give people too many goals, you run the risk of making it harder to achieve any of the goals, and you also make it harder to pinpoint the cause of failure should that occur.

If you are just starting with metrics, you might have the whole team focus on the same metric and goal. But over time you can have individuals working on different areas with separate metrics, as long as each person is focused on one area at a time. For example, some engineers might be working to improve their personal productivity while others are working to improve their quality.

Once an area is "tuned" and improvement goals are reached, you'll want to continue reviewing metrics to make sure you don't fall back. Then you can move on to something else.

Build on small successes

Let's say that you begin reviewing metrics on production bug counts or development productivity per iteration; then you set some small improvement targets; and after a time, you reach and sustain those goals. Maybe, for example, you reduce a backlog of production bugs by 10%. Maybe this came through extra effort for a short period of time, but at the end, the team determines that metrics helped. Perhaps the metrics helped increase everyone's understanding of the "problem" and helped maintain a focus on the goal and results.

While this is a fairly trivial example, even a small success like this can help as a first step toward more. If you obtain increased support for metrics, and hopefully some proof of the value, then you are in a great position to gradually expand the metrics you gather and use.

In the long run, the areas that you can measure and analyze go well beyond the trivial. For example, you might expand beyond core software development tasks and skills, beyond productivity and quality, to begin to look at areas like innovation, communication skills, or poise under pressure. Clearly, measuring such areas takes much more thought and effort. To get there, you can build on small, incremental successes using metrics along the way. In so doing, you will not only be embedding metrics-driven analysis in your engineering process, but also in your software development culture. This can extend into other important areas, too, such as how you target and evaluate potential recruits.

Moneyball-type techniques are applicable to small and big software teams alike. They can apply in organizations that are highly successful as well as those just starting out. Bigger teams and larger organizations can sometimes afford to be less efficient, but most can't, and smaller teams certainly don't have this luxury. Beane's great success was making his organization highly competitive while spending far less money (hence the term "Moneyball"). To do this, his team had to be smarter and more efficient. It's a goal to which we can all aspire.

Jonathan Alexander looked at the connection between Moneyball and software teams in the following webcast:



June 23 2011

Scale your JavaScript, scale your team

Building JavaScript applications that will scale smoothly can be challenging — in terms of design, in terms of technology, and in terms of handling large teams of developers with lots of hands in the code. In the following interview, "High Performance JavaScript" author and OSCON speaker Nicholas Zakas discusses the issues that pop up when you build big JavaScript apps with big teams.

What are some of the challenges attached to building large-scale JavaScript apps with a large team?

Nicholas Zakas: The biggest challenge is making sure everyone is on the same page. In a lot of ways, you want the entire team to work as a single engineer, meaning that every engineer needs to be committed to doing things the same way. Code conventions are vitally important, as are code reviews, to make sure everyone understands not only their role but the roles of everyone else on the team. At any point in time, you want any engineer to be able to step in and pick up another engineer's work without needing to figure out his or her personal style first.

How do you design an app that balances responsiveness and structure?

Nicholas Zakas: As they say, the first step in solving a problem is admitting you have one. The key is to acknowledge from the start that you have no idea how this will grow. When you accept that you don't know everything, you begin to design the system defensively. You identify the key areas that may change, which often is very easy when you put a little bit of time into it. For instance, you should expect that any part of the app that communicates with another system will likely change, so you need to abstract that away. You should spend most of your time thinking about interfaces rather than implementations.
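A minimal sketch of that kind of abstraction, with made-up names: the rest of the app depends on a small interface rather than on how the other system is actually reached.

```javascript
// Hypothetical example: callers depend on getUser(), not on the transport.
function createUserService(transport) {
  return {
    getUser: function (id) {
      return transport.request("/users/" + id);
    }
  };
}

// Today the transport is fetch(); tomorrow it could be a cache, a mock for
// tests, or a different backend. Callers of getUser() never change.
var userService = createUserService({
  request: function (path) {
    return fetch(path).then(function (response) { return response.json(); });
  }
});
```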


What are the challenges specific to JavaScript?

Nicholas Zakas: The "shared everything" nature of the language is the biggest JavaScript challenge. You can set up a system that is well designed, and an inexperienced engineer might come in and — accidentally or not — change some key piece of functionality. ECMAScript 5 helps with some of that by allowing you to lock down modification of objects, but the nature of the language remains largely the same and problematic.
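For instance, ES5's Object.freeze lets you lock an object down so other code can't modify it; in strict mode, attempts to change a frozen object throw, and in non-strict mode they fail silently. The object below is just an illustration.

```javascript
"use strict";

// ES5: lock down a shared object so other code can't modify it.
var config = Object.freeze({
  endpoint: "https://api.example.com", // illustrative value
  retries: 3
});

try {
  config.retries = 99;   // TypeError in strict mode; silently ignored otherwise
} catch (e) {
  console.log("blocked: " + e.message);
}
console.log(config.retries); // still 3
```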

Do lessons applicable to large teams also apply to smaller teams?

Nicholas Zakas: Small teams still need to build scalable solutions because hopefully, one day, the team will grow larger. Your system should work for any number of engineers. The proof of a good design is being able to move seamlessly from five engineers to 10 and then more without needing to fundamentally change the design.

This interview was edited and condensed.


