
April 11 2013

Four short links: 11 April 2013

  1. A General Technique for Automating NES Games — software that learns how to play NES games and plays them automatically, using an aesthetically pleasing technique. With video, research paper, and code.
  2. rietveld — open source tool like Mondrian, Google's code review tool. Developed by Guido van Rossum, who also created Mondrian. Still being actively developed. (via Nelson Minar)
  3. KPI Dashboard for Early-Stage SaaS Startups — as Google Docs sheet. Nice.
  4. Life Without Sleep — interesting critique of Provigil as performance-enhancing drug for information workers. It is very difficult to design a stimulant that offers focus without tunnelling – that is, without losing the ability to relate well to one’s wider environment and therefore make socially nuanced decisions. Irritability and impatience grate on team dynamics and social skills, but such nuances are usually missed in drug studies, where they are usually treated as unreliable self-reported data. These problems were largely ignored in the early enthusiasm for drug-based ways to reduce sleep. [...] Volunteers on the stimulant modafinil omitted these feedback requests, instead providing brusque, non-question instructions, such as: ‘Exit West at the roundabout, then turn left at the park.’ Their dialogues were shorter and they produced less accurate maps than control volunteers. What is more, modafinil causes an overestimation of one’s own performance: those individuals on modafinil not only performed worse, but were less likely to notice that they did. (via Dave Pell)

January 14 2013

Four short links: 14 January 2013

  1. Open Source Metrics — Talking about the health of the project based on a single metric is meaningless. It is definitely a waste of time to talk about the health of a project based on metrics like number of software downloads and mailing list activities. Amen!
  2. BitTorrent To Your TV — The first ever certified BitTorrent Android box goes on sale today, allowing users to stream files downloaded with uTorrent wirelessly to their television. The new set-top box supports playback of all popular video formats and can also download torrents by itself, fully anonymously if needed. (via Andy Baio)
  3. Tumblr URL Culture — the FOO.tumblr.com namespace is scarce and there’s non-financial speculation. People hoard and trade URLs, whose value is that they say “I’m cool and quirky”. I’m interested because it’s a weird largely-invisible Internet barter economy. Here’s a rant against it. (via Beta Knowledge)
  4. Design-Fiction Slider Bar of Disbelief (Bruce Sterling) — I love the list as much as the diagram. He lays out a sliding scale from “objective reality” to “holy relics” and positions black propaganda, 419 frauds, design pitches, user feedback, and software code on that scale (among many other things). Bruce is an avuncular Loki, pulling you aside and messing with your head for your own good.


November 07 2012

Four short links: 7 November 2012

  1. A Slower Speed of Light — game where you control the speed of light and discover the wonders of relativity. (via Andy Baio)
  2. Facebook Demetricator — removes all statistics and numbers from Facebook’s chrome (“37 people like this” becomes “people like this”). (via Beta Knowledge)
  3. Rx — Microsoft open sources their library for composing asynchronous and event-based programs using observable sequences and LINQ-style query operators.
  4. Typing Karaoke — this is awesome. Practice typing to song lyrics. With 8-bit aesthetic for maximum quirk.

August 30 2012

Four short links: 30 August 2012

  1. TOS;DR — terms of service rendered comprehensible. “Make the hard stuff easy” is a great template for good ideas, and this just nails it.
  2. Sick of Impact Factors — typically only 15% of the papers in a journal account for half the total citations. Therefore only this minority of the articles has more than the average number of citations denoted by the journal impact factor. Take a moment to think about what that means: the vast majority of the journal’s papers — fully 85% — have fewer citations than the average. The impact factor is a statistically indefensible indicator of journal performance; it flatters to deceive, distributing credit that has been earned by only a small fraction of its published papers. (via Sci Blogs)
  3. A Generation Lost in the Bazaar (ACM) — Today’s Unix/Posix-like operating systems, even including IBM’s z/OS mainframe version, as seen with 1980 eyes are identical; yet the 31,085 lines of configure for libtool still check if <sys/stat.h> and <stdlib.h> exist, even though the Unixen, which lacked them, had neither sufficient memory to execute libtool nor disks big enough for its 16-MB source code. [...] That is the sorry reality of the bazaar Raymond praised in his book: a pile of old festering hacks, endlessly copied and pasted by a clueless generation of IT “professionals” who wouldn’t recognize sound IT architecture if you hit them over the head with it. It is hard to believe today, but under this embarrassing mess lies the ruins of the beautiful cathedral of Unix, deservedly famous for its simplicity of design, its economy of features, and its elegance of execution. (Sic transit gloria mundi, etc.)
  4. History as Science (Nature) — Turchin and his allies contend that the time is ripe to revisit general laws, thanks to tools such as nonlinear mathematics, simulations that can model the interactions of thousands or millions of individuals at once, and informatics technologies for gathering and analysing huge databases of historical information.

April 25 2012

Velocity Podcast Series - Joshua Bixby on The Business of Performance

This is the second podcast in our new Velocity Podcast Series. My intention is to keep the conversations we start at the Velocity conference going throughout the year. We will be talking with conference committee members, speakers, companies, and attendees, so check back weekly for a new podcast.

I recently spoke with Joshua Bixby of Strangeloop about measuring and making sense of increased performance. Josh has presented at Velocity in Europe, Asia, and the US, and always has some very interesting insights into the business of performance. He talks about business KPIs, metrics, and the business benefits of performance optimization, and backs it up with plenty of data and graphs. In our conversation we touch mostly on the business of speed.

Our conversation lasted 00:22:11; if you want to pinpoint any particular answer, you can find the specific timing below.

  • A little background on Josh and Strangeloop (what is a StrangeLoop?) 00:00:26
  • You've mentioned in your talks that we are in the middle of a couple of revolutions, what are they? 00:03:18
  • When did you see an institutional need for making a business case for Web Performance Optimization? 00:05:21
  • How do you benchmark a company's web properties that are mostly mobile, enterprise, or Web 2.0 oriented so you are comparing apples to apples? 00:07:38
  • How do you rank variable tradeoffs that engineers will inevitably encounter with Time, Cost, Quality, Scope and Performance? 00:11:25
  • What is the most common cause of mobile users not staying on a site, or not purchasing? Is it performance related? 00:12:36
  • Do you have any real-life examples of dramatic improvements companies have achieved through performance optimization? 00:14:54
  • In your experience, what is the most important benefit a company will get through performance improvements? 00:18:56

Velocity 2012: Web Operations & Performance — The smartest minds in web operations and performance are coming together for the Velocity Conference, being held June 25-27 in Santa Clara, Calif.

Save 20% on registration with the code RADAR20

October 26 2011

Mobile analytics unlock the what and the when

When applied appropriately, mobile analytics reveal both what happened and when it happened. Case in point: "Let's say you have a game," said Flurry CTO Sean Byrnes (@FlurryMobile) during a recent interview. "You want to measure not just that someone got to level 1, 2, 3, or 4, but how long does it take for them to get to those levels? Does someone get to level 3 in one week, get to level 4 in two weeks, and get to level 5 in four weeks? Maybe those levels are too difficult. Or maybe a user is just getting tired of the same mechanic and you need to give them something different as the game progresses." [Discussed at the 2:21 mark.]

This is why a baseline metric, such as general engagement, deserves more than a passing glance. The specific engagements tucked within can unlock a host of improvements.
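
Byrnes' example translates directly into a query over timestamped events. Here is a minimal sketch, assuming a hypothetical event log of (user, level, days-since-install) tuples; the field names and numbers are made up for illustration, not Flurry's API:

```python
from collections import defaultdict
from statistics import median

# Hypothetical event log: (user_id, level_reached, days_since_install).
events = [
    ("u1", 3, 7), ("u1", 4, 14), ("u1", 5, 28),
    ("u2", 3, 6), ("u2", 4, 15),
    ("u3", 3, 9), ("u3", 4, 30),
]

days_to_level = defaultdict(list)
for user_id, level, days in events:
    days_to_level[level].append(days)

# A sudden jump in median time between levels suggests a difficulty
# spike, or fatigue with the mechanic, per Byrnes' observation.
for level in sorted(days_to_level):
    samples = days_to_level[level]
    print(f"level {level}: median {median(samples)} days ({len(samples)} users)")
```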

Byrnes touched on a number of related topics during the full interview (below), including:

  • Why mobile developers are focusing on engagement: Once you engage a user, do they stick around? If it costs you $1 to acquire a user, how much return will you get — if any? Byrnes said app engagement has grown in importance as developers have shifted their thinking from apps as marketing channels to apps as businesses. [Discussed 30 seconds in.]
  • Tablet apps vs. smartphone apps: A tablet app isn't the same as a phone app. Flurry has found that tablet applications are being used "a number of times longer" than phone applications, but tablet consumers use fewer applications overall. [Discussed at 3:28.]

You can view the entire interview in the following video.

Strata 2012 — The 2012 Strata Conference, being held Feb. 28-March 1 in Santa Clara, Calif., will offer three full days of hands-on data training and information-rich sessions. Strata brings together the people, tools, and technologies you need to make data work.

Save 20% on registration with the code RADAR20


October 10 2011

Moneyball for software engineering

While sports isn't always a popular topic in software engineering circles, if you find yourself sitting in a theater watching "Moneyball," you might wonder if the story's focus on statistics and data can be applied to the technical domain. I think it can, and gradually, a growing number of software companies are starting to use Moneyball-type techniques to build better teams. Here, I'd like to provide a brief introduction to some of these ideas and how they might be applied.

The basic idea of the Moneyball approach is that organizations can use "objective" statistical analysis to be smarter and build more competitive teams, which means:

  • Studying player skills and contributions much more carefully and gathering statistics on many more elements of what players do to categorize their skills.
  • Analyzing team results and player statistics to discover patterns and better determine what skills and strategies translate to winning.
  • Identifying undervalued assets, namely those players, skills, and strategies that are overlooked but which contribute heavily to winning.
  • Creating strategies to obtain undervalued assets and build consistent, winning teams.

While smaller-market baseball teams such as the Oakland A's were the original innovators of Moneyball, these techniques have now spread to all teams. The "old-school" way of identifying talent and strategies based on experience and subjective opinion has given way to a "new school" that relies on statistical analysis of situations, contributions, and results. Many sports organizations now employ statisticians with wide-ranging backgrounds to help evaluate players and prospects, and even to analyze in-game coaching strategies. Paul DePodesta, who was featured in both the book and the film, majored in economics at Harvard.

In software engineering, most of us work in teams. But few of us utilize metrics to identify strengths and weaknesses, set and track goals, or evaluate strategies. Like baseball teams of the past, our strategies for hiring, team building, project management, performance evaluation, and coaching are mostly based on experience. For those willing to take the time to make a more thorough analysis through metrics, there is an opportunity to make software teams better — sometimes significantly better.

Measure skills and contributions

It starts with figuring out ways to measure the variety of skills and contributions that individuals make as part of software teams. Consider how many types of metrics could be gathered and prove useful for engineering teams. For example, you could measure (one way of recording such counters is sketched after the list):

  • Productivity by looking at the number of tasks completed or the total complexity rating for all completed tasks.
  • Precision by tracking the number of production bugs and related customer support issues.
  • Utility by keeping track of how many areas someone works on or covers.
  • Teamwork by tallying how many times someone helps or mentors others, or demonstrates behavior that motivates teammates.
  • Innovation by noting the times when someone invents, innovates, or demonstrates strong initiative to solve an important problem.
  • Intensity by keeping track of relative levels and trends: increases or decreases in productivity, precision, or other metrics.
  • Effort by looking at the number of times someone goes above and beyond what is expected to fix an issue, finish a project, or respond to a request.

These might not match the kinds of metrics you have used or discussed before for software teams. But gathering and analyzing metrics like these for individuals and teams will allow you to begin to characterize their dynamics, strengths and weaknesses.


Web 2.0 Summit, being held October 17-19 in San Francisco, will examine "The Data Frame" — focusing on the impact of data in today's networked economy.

Save $300 on registration with the code RADAR

Measure success

In addition to measuring skills and contributions for software teams, to apply Moneyball strategies you will need a way to measure success. In sports you have wins and losses, so it's much simpler. In software, a team's success or failure can be judged in many ways, depending on the kind of software it works on, such as:

  • Looking at the number of users acquired or lost.
  • Calculating the impact of software enhancements that deliver benefit to existing users.
  • Tracking the percentage of trial users who convert to customers.
  • Factoring in the number of support cases resulting from user problems.

From a Moneyball perspective, measuring the relative success of software projects and teams is a key step to an objective analysis of team dynamics and strategies. In the end, you'll want to replicate the patterns of teams with excellent results and avoid the patterns of teams with poor results.

Focus on relative comparisons

One important Moneyball concept is that exact values don't matter, relative comparisons do. For example, if your system of measurement shows that one engineer is 1% more productive than another, what that really tells you is that they are essentially the same in productivity. But if one engineer is 50% or 100% more productive than another, then that shows an important difference between their skills.

You can think of metrics as a system for categorization. Ultimately, you are trying to rate individuals or teams as high, medium, or low in a set of chosen categories based on their measured values (or pick any other relative scale). You want to know whether an engineer has high productivity compared to others or shows a high level of teamwork or effort. This is the information you need at any point in time or over the long term to identify patterns of success and areas for improvement on teams. For example, if you know that your successful teams have more engineers with high levels of innovation, intensity, or effort, or that 80% of the people on a poorly performing team exhibit a low level of teamwork, then those are useful insights.

Knowing that your metrics are for categorization frees you up from needing exact, precise measurements. This means a variety of mechanisms to gather metrics can be used, including personal observation, without as much concern that the metrics are completely accurate. You can assume a certain margin of error (plus or minus 5-10% for example) and still gather meaningful data that can be useful in determining the relative level of people's skills and a team's success.
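
A minimal sketch of such a categorization, assuming a simple tertile split against the peer group (the thresholds are an arbitrary choice; any relative scale works, and a 5-10% measurement error rarely changes a bucket):

```python
def relative_level(value, peer_values):
    """Bucket one measurement as 'high'/'medium'/'low' relative to peers."""
    ranked = sorted(peer_values)
    low_cut = ranked[len(ranked) // 3]         # bottom-third boundary
    high_cut = ranked[(2 * len(ranked)) // 3]  # top-third boundary
    if value >= high_cut:
        return "high"
    if value <= low_cut:
        return "low"
    return "medium"

# Hypothetical completed-task counts for a six-person team: the 1%
# differences vanish, the 50-100% differences stand out.
productivity = [12, 14, 30, 31, 55, 60]
print(relative_level(55, productivity))  # -> "high"
print(relative_level(13, productivity))  # -> "low"
```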

Identify roles on the team

With metrics like those discussed above, you can begin to understand the dynamics of teams and the contributions of individuals that lead to success. A popular concept in sports that is applicable to software engineering is that of "roles." On sports teams, you have players that fill different roles; for example, in baseball you have roles like designated hitters, pitchers, relievers, and defensive specialists. Each role has specific attributes and can be analyzed by specific statistics.

Roles and the people in those roles are not necessarily static. Some people are truly specialists, happily filling a specific role for a long period of time. Some people fill one role on a team for a while, then move into a different role. Some people fill multiple roles. The value of identifying and understanding the roles is not to pigeonhole people, but to analyze the makeup and dynamics of various teams, particularly with an eye toward understanding the mix of roles on successful teams.

By gathering and examining metrics for software engineering teams, you can begin to see the different types of roles that comprise the teams. Using sports parlance, you can imagine that a successful software team might have people filling a variety of roles, such as the following (a sketch mapping metric levels to these roles follows the list):

  • Scorers (engineers who have a high level of productivity or innovation).
  • Defenders (engineers who have a high level of precision or utility).
  • Playmakers (engineers who exhibit a high level of teamwork).
  • Motivators (engineers who exhibit a high level of intensity or effort).

One role or skill is not necessarily more valuable than another. Objective analysis of results may, in fact, lead you to conclude that certain roles are underappreciated. Having engineers who are strong in teamwork or effort, for example, may be as crucial to team success as having those who are highly productive. Also, you might come to realize that the perceived value of other roles or skills doesn't match the actual results. For example, you might find that teams filled predominantly with highly productive engineers are not necessarily more successful if they are lacking in other areas, such as precision or utility.

Develop strategies to improve teams

With metrics to help you identify and validate the roles that people play on software teams, and thereby identify the relative strengths and weaknesses of your teams, you can implement strategies to improve those teams. If you identify that successful teams have certain strengths that other teams lack, you can work on strengthening those aspects. If you identify "undervalued assets," meaning skills or roles that weren't fully appreciated, you can bring greater attention to developing or adding those to a team.

There are a variety of techniques you can use to add more of those important skills and undervalued assets to your teams. For example, using Moneyball-like sports parlance, you can:

  • Recruit based on comparable profiles, or "comps" — profile your team, identify the roles you need to fill, and then recruit engineers that exhibit the strengths needed in those roles.
  • Improve your farm system — whether you use interns, contract-to-perm, or you promote from within, gather metrics on the participants and then target them for appropriate roles.
  • Make trades — re-organize teams internally to fill roles and balance strengths.
  • Coach the skills you need — identify those engineers with aptitude to fill specific roles and use metrics to set goals for development.

Blend metrics and experience

One mistaken conclusion that some people might reach in watching the movie "Moneyball" is that statistical analysis has completely supplanted experience and subjective analysis. That is far from the truth. The most effective sports organizations have added advanced statistical analysis as a way to uncover hidden opportunities and challenge assumptions, but they mix that with the personal observations of experienced individuals. Both sides factor into decisions. For example, a sports team is more likely to draft a new player if both the statistical analysis of that player and the in-person observations of experienced scouts identify the player as a good prospect.

In the same way, you can mix metrics-based analysis with experience and best practices in software engineering. Like sports teams, you can use metrics to identify underappreciated skills, identify opportunities for improvement, or challenge assumptions about team strategies and processes by measuring results. But such analysis is also limited and therefore should be balanced by experience and personal observation. By employing metrics in a balanced way, you can also alleviate concerns that people may have about metrics turning into some type of divisive grading system to which everyone is blindly subjected.

In conclusion, we shouldn't ignore the ideas in "Moneyball" because it's "just sports" (and now a Hollywood movie) and we are engineers. There are many smart ideas and a lot of smart people who are pioneering new ways to analyze individuals and teams. The field of software engineering has team-based dynamics similar to sports teams. Using similar techniques, with metrics-based analysis augmenting our experience and best practices, we have the same opportunity to develop new management techniques that can result in stronger teams.

Jonathan Alexander discussed the connection between "Moneyball" and software teams in a recent webcast.

Photo: TechEd 617 by betsyweber, on Flickr



August 08 2011

Mobile metrics: Like the web, but a lot harder

Flurry is a mobile analytics company that currently tracks more than 10,000 Android applications and 500 million application sessions per day. Since Flurry supports all the major mobile platforms, the company is in a good position to comment on the relative merits and challenges of Android development.

I recently spoke with Sean Byrnes (@FlurryMobile), the CTO and co-founder of Flurry, about the state of mobile analytics, the strengths and weaknesses of Android, and how big a problem fragmentation is for the Android ecosystem. Byrnes will lead a session on "Android App Engagement by the Numbers" at the upcoming Android Open Conference.

Our interview follows.


What challenges do mobile platforms present for analytics?

Sean Byrnes: Mobile application analytics are interesting because the concepts are the same as traditional web analytics but the implementation is very different. For example, with web analytics you can always assume that a network connection is available since the user could not have loaded the web page otherwise. A large amount of mobile application usage happens when no network is available because either the cellular network is unreliable or there is no Wi-Fi nearby. As a result, analytics data has to be stored locally on the device and reported whenever a network is available, which can be weeks after the actual application session. This is complicated by the fact that about 10% of phones have bad clocks and report dates that in some cases are decades in the future.

Another challenge is that applications are downloaded onto the phone as opposed to a website where the content is all dynamic. This means that you have to think through all of the analytics you want to track for your application before you release it, because you can't change the tracking points once it's downloaded.
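
The store-and-forward pattern Byrnes describes is easy to sketch. Here is a toy version, assuming a hypothetical `send` uploader; none of these names are Flurry's actual API:

```python
import json
import time

class OfflineEventQueue:
    """Buffer analytics events locally; flush when a network appears."""

    MAX_FUTURE_SKEW = 7 * 24 * 3600  # seconds; beyond this, distrust the clock

    def __init__(self, send):
        self.send = send   # stand-in for a real HTTP uploader
        self.buffer = []

    def track(self, name, **params):
        # Record with the device clock; upload may happen weeks later.
        self.buffer.append({"name": name, "ts": time.time(), "params": params})

    def flush(self, network_up, server_time):
        if not network_up:
            return  # keep buffering until cellular or Wi-Fi is available
        for event in self.buffer:
            # Some devices have bad clocks and report dates decades in
            # the future; flag those so the backend doesn't trust them.
            event["suspect_clock"] = event["ts"] > server_time + self.MAX_FUTURE_SKEW
            self.send(json.dumps(event))
        self.buffer.clear()

q = OfflineEventQueue(send=print)
q.track("level_complete", level=3)
q.flush(network_up=True, server_time=time.time())
```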

What metrics are developers most interested in?

Sean Byrnes: Developers are typically focused on metrics that either make or save them the most money. Usually these revolve around retention, engagement, and commerce.

One of the interesting things about the mobile application market is how the questions developers ask are changing because the market is maturing so quickly. Until recently the primary metrics used to measure applications were the number of daily active users and the total time spent in the app. These metrics help you understand the size of your audience, and they're important when you're focusing on user acquisition. Now, the biggest questions center on user retention and the lifetime value of a user. This is a natural shift as applications begin to focus on profitability and developers need to understand how much money they make from users compared to their acquisition cost. For example, if my application has 100,000 daily active users but an active user only uses the application once, then I have a high level of engagement that is very expensive to maintain.
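
The arithmetic behind that shift is worth making explicit. With made-up numbers (all three inputs are assumptions for illustration):

```python
# Hypothetical figures: $1 to acquire a user who opens the app once
# and generates $0.05 of ad revenue per session.
acquisition_cost = 1.00
revenue_per_session = 0.05
sessions_per_user = 1  # "an active user only uses the application once"

lifetime_value = revenue_per_session * sessions_per_user
verdict = "profitable" if lifetime_value > acquisition_cost else "losing money"
print(f"LTV ${lifetime_value:.2f} vs CAC ${acquisition_cost:.2f}: {verdict}")
```

On numbers like these, every new daily active user deepens the loss, which is exactly why the focus moves from raw audience size to retention and lifetime value.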

Social game developers are among the most advanced, scientific and detailed measurement companies. Zynga, for example, measures every consumer action in impressive detail, all for the purpose of maximizing engagement and monetization. They consider themselves a data company as much as a game maker.

Android Open, being held October 9-11 in San Francisco, is a big-tent meeting ground for app and game developers, carriers, chip manufacturers, content creators, OEMs, researchers, entrepreneurs, VCs, and business leaders.

Save 20% on registration with the code AN11RAD

What do you see as Android's biggest strengths and weaknesses?

Sean Byrnes: Android's biggest strength is its adoption and support by so many phone manufacturers, and the fact that it's relatively open, which means development iterations can happen faster and you have the freedom to experiment. The installed base is now growing faster than iOS, although the iOS installed base is still larger.

Android's biggest weakness is that it offers less business opportunity to developers. Fewer Android consumers have credit cards associated with their accounts, and they expect more apps to be free. The most common business model on Android is still built on ad revenue. Also, because the Android Market is not curated, average app quality is lower, which further reduces consumer confidence. At the end of the day, developers want a business, not an OS. And they need a marketplace that brings in consumers who are willing and able to pay. This is Android's biggest challenge at the moment.

Is Android fragmentation a problem for developers?

Sean Byrnes: Fragmentation is definitely a problem for Android developers. It's not just the sheer number of new Android devices entering the market, but also the speed at which new versions of the Android OS are being released. As a developer you have to worry about handling both a variety of hardware capabilities and a variety of Android OS capabilities for every application version you release. It's difficult to be truly innovative and take advantage of advanced features since the path of least resistance is to build to the lowest common denominator.

Given the current roadmap for Android OS updates and the number of devices coming to market, it will likely get worse before it gets better. However, fragmentation will become less of a concern as developers can make more money on Android and the cost of supporting a device becomes insignificant compared to the revenue it can generate.






April 18 2011

Four short links: 18 April 2011

  1. Your Community is Your Best Feature -- Gina Trapani's CodeConf talk: useful, true, and moving. There's not much in this world that has all three of those attributes.
  2. Metrics Everywhere -- another CodeConf talk, this time explaining Yammer's use of metrics to quantify the actual state of their operations. Nice philosophical guide to the different ways you want to measure things (gauges, counters, meters, histograms, and timers). I agree with the first half, but must say that it will always be an uphill battle to craft a panegyric that will make hearts and minds soar at the mention of "business value". Such an ugly phrase for such an important idea. (via Bryce Roberts)
  3. On Earthquakes in Tokyo (Bunnie Huang) -- Personal earthquake alarms are quite popular in Tokyo. Just as lightning precedes thunder, these alarms give you a few seconds warning to an incoming tremor. The alarm has a distinct sound, and this leads to a kind of pavlovian conditioning. All conversation stops, and everyone just waits in a state of heightened awareness, since the alarm can’t tell you how big it is—it just tells you one is coming. You can see the fight or flight gears turning in everyone’s heads. Some people cry; some people laugh; some people start texting furiously; others just sit and wait. Information won't provoke the same reaction in everyone: for some it's impending doom, for others another day at the office. Data is not neutral; it requires interpretation and context.
  4. AccentuateUs -- Firefox plugin to Unicodify text (so if you type "cafe", the software turns it into "café"). The math behind it is explained on the dataists blog. There's an API and other interfaces, even a vim plugin.

April 04 2011

Four short links: 4 April 2011

  1. Find The Future -- New York Public Library big game, by Jane McGonigal. (via Imran Ali)
  2. Enable Certificate Checking on Mac OS X -- how to get your browser to catch attempts to trick you with revoked certificates (more of a worry since security problems at certificate authorities came to light). (via Peter Biddle)
  3. Clever Algorithms -- Nature-Inspired Programming Recipes from AI, examples in Ruby. I hadn't realized there were Artificial Immune Systems. Cool! (via Avi Bryant)
  4. Rethinking Evaluation Metrics in Light of Flickr Commons -- conference paper from Museums and the Web. As you move from "we are publishing, you are reading" to a read-write web, you must change your metrics. Rather than import comments and tags directly into the Library's catalog, we verify the information then use it to expand the records of photos that had little description when acquired by the Library. [...] The symbolic 2,500th record, a photo from the Bain collection of Captain Charles Polack, is illustrative of the updates taking place based on community input. The new information in the record of this photo now includes his full name, death date, employer, and the occasion for taking the photo, the 100th Atlantic crossing as ocean liner captain. An additional note added to the record points the Library visitor to the Flickr conversation and more of the story with references to gold shipments during WWI. Qualitative measurements, like level of engagement, are a challenge to gauge and convey. While resources expended are sometimes viewed as a cost, in this case they indicate benefit. If you don't measure the right thing, you'll view success as a failure. (via Seb Chan)

November 17 2010

The CIO's golden rule of management

Evidence-based management is an approach where hunches are discarded and decisions are instead based on hard facts. It's pounded into aircraft pilot training: when there is no visibility, regardless of what their senses are suggesting, pilots must trust their flight instruments. In medicine (where the practice of evidence-based decision making originates), new diagnostic technology can trump the wisdom of physicians and can make decades of experience only an input into a course of treatment, not the final call. With few exceptions, good data is the best way to make great decisions. Without it you're essentially flying blind.

We use evidence-based management all the time in business. Historically, we've just called it something else. In business we're consumed with data and metrics. We make decisions with data and we measure performance with metrics. It's letting the data and metrics -- the evidence -- tell the story and then taking some form of action on it.

As a business function, we can all agree that internal IT is replete with data and opportunities for metrics. Even so, it is surprising how poorly IT organizations measure what they do and how they make decisions. In my firsthand experience the problem can be attributed to at least three characteristics:

  1. Lack of recognition for the value of data and metrics.
  2. Insufficient skills in determining how and what to measure.
  3. System limitations/issues.

The effective CIO must address these and knock all three out of the ballpark. The success of an IT strategy is predicated on good metrics. Clearly that's easier said than done. What every IT leader quickly learns is that without even a basic set of metrics, management of processes is close to impossible (at least in any quality manner) and making an argument for a course of action, particularly to the boss, is exponentially more difficult. To paraphrase Drucker, who I believe got it exactly right: if you want to manage something, you had better measure it. That's an important rule.

Choose wisely if your metrics are currently few and far between. Go after the most valuable items, but keep the list relatively short. Produce a long list and you risk pushing your team into a tizzy. If you inherit an environment that appears to monitor and measure in excess, find a way to reduce it to a list where each item has merit. Simply asking the purpose of each metric will truncate the list quickly!

I'll be the first to admit that metrics aren't the most glamorous part of IT leadership, but they must be a priority. If leveraged in the right way, quality metrics can be the difference between dysfunctional operations and high-performance. That's a golden rule that any CIO should want to follow.







Four short links: 17 November 2010

  1. Understanding Your Customers -- I enjoyed Keith's take on meaningful metrics. We talk a lot about being data-driven, but we interpret data with a model. The different takes on meaningful metrics reflect the different underlying models that are lit up by data. It's an important idea for the Strata conference, that gathering and processing data happens in the context of a world view, a data cosmology. (via Eric Ries)
  2. Bushwick AR Intervention 2010 -- an augmented reality takeover of Bushwick, Brooklyn NY. Artists will rework physical space with computer generated 3d graphics. A wide variety of works ranging from a virtual drug which has broken free of its internet constraints and is now effecting people in the real world, to a unicorn park, to serious commentary on the state of United States veterans will be free for the public to view [with correct mobile device]. (via Laurel Ruma)
  3. How to Mass Export all of your Facebook Friends' Private Email Addresses (TechCrunch) -- Arrington gives a big finger to Facebook's "no, you can't export your friends' email addresses" policy by using the tools they provide to do just that. Not only is this useful, it also points out the hypocrisy of the company.
  4. TaintDroid -- an Android ROM that tracks what apps do with your sensitive information. (via Brady Forrest on Twitter)

August 13 2010

Open source givers and takers

Dana Blankenhorn's recent ZDNet blog post points to Accenture's "hockey stick for open source" and notes that while 69 percent of the companies Accenture surveyed plan to increase their open source investment in the next year, only 29 percent plan to contribute back to the open source community. That sounds very plausible. But is it a problem? I'm not so sure.

First, I don't think "all take and no give" is a failure. Or even a problem. If you're giving, you shouldn't be surprised if people take. If you're taking something that's been freely given, you shouldn't feel obliged to give back. If you do, that's great. And if you're a giver, you should be glad that people are taking, whether or not you're getting something back in return.

Second, "how many companies plan to contribute" isn't the right metric. One of the things I've learned from my involvement in industry is that the most successful and effective groups are small. The right metric is "are there enough contributors to move the project forward?" For the key projects (like Apache), clearly the answer is "yes." "Enough" is much more important than "how many." The last thing we need are projects that slow to a crawl because of the bloated development-by-committee that characterizes many corporate environments. In the late '80s, I worked for a startup that developed (among other things) a FORTRAN compiler. We sent our 10-person compiler group up to DEC for a meeting, where they found out that DEC's FORTRAN compiler group had 2,000 people. DEC couldn't understand how we got anything done. More to the point, our guys couldn't understand how DEC got anything done.

Further, I suspect that the extent of corporate contribution is significantly higher than this study reports. Twenty-nine percent is probably correct if you define "corporate contributions" as "contributions made by employees while sitting at a company desk and on company time." A better measure would be "contributions made by someone who develops or uses the software as part of his job." The software is used on the job, but the contribution is often made by the individual, at home. But that's hard to measure; it's not what the projects are set up to track. And yes, it would most likely be better if these guys were compensated for their work. In many ways, they're the heroes of the open source movement. But that brings us back to my first point: what was freely given may be freely taken.

The level of corporate contribution would almost certainly be higher if you limit the question to corporations that have something to contribute. I don't mean that facetiously. Apache is probably the most widely used open source project in the corporate world. How many corporations have actually modified the Apache source code to add a feature or fix a bug? Very few, I'd bet. They use it "off the shelf." And if you haven't touched the code, contribution is a moot point. And that's just fine.

There's one more issue worth considering. If you look at any open source project hosting site, such as SourceForge, you'll notice a few projects that are successful, and a much larger number that have clearly failed: "abandonware." Maybe they once served a purpose (like getting a dissertation finished), but they're buggy, incomplete, and stagnant. Could these projects have been contenders, and if so, what went wrong? It seems to me that projects fail for three primary reasons: bad project culture, poor source code base, and "not really needed." The project scratched someone's particular itch, but not enough people had that same itch.

Would corporate backing have allowed some of these failed projects to survive? Maybe. Corporate projects frequently have culture problems, bad source code, and poorly-defined markets. So it's not clear that corporate participation would be a huge win. I'd really like to see a good study of failed open source projects: what, why, and how. I think we'd learn a lot. And one question worth asking would be whether more corporate involvement would have helped the project to survive (and whether survival would have been a good thing).

To put this in a specific context: much has been written about Red Hat's significant contribution to GNOME, and Canonical's much smaller contribution. But that's not the right issue. Would GNOME be better (for some definition of "better") if it had 20 percent or 50 percent or 150 percent more contributors? That's not clear to me. Blankenhorn's comment is right on:

Everyone is free to take just as everyone is free to give, and that is what makes the idea powerful. GNOME is far more valuable to Red Hat than a Red Hat UI could ever be, because it's open source, because it's a commons.

That's precisely the point.

I certainly don't want to discourage any corporation from contributing to the open source community. There are many projects that do need help, many projects that could profitably be started, and many sites willing to host them. And I agree with Blankenhorn's larger point: that by engaging with open source projects, companies engage with their customers, their competitors, and people who can help.

Giving back to the community is only an option, but it's a good option, and something from which corporations can profit. But before we gauge corporate participation in open source, I think we need better metrics than the ones Accenture is using: how many projects need more committers, and where are the committers coming from? How many companies have employees who work on open source projects on their own time? And how many employees work on open source on company time, unbeknownst to the managers who fill out Accenture surveys?

December 28 2009

Four short links: 28 December 2009

  1. GTFS Data Exchange -- site for sharing the files that Google Transit collects from public transit agencies. This lets third party developers write apps that don't involve Google.
  2. Tenureometer -- if you are what you measure, let's build good measures. This is one for higher education, designed to measure scholars' impact on their fields by counting how much they have contributed to the literature and how frequently those articles have been cited.
  3. The Known Universe -- rendered according to the best data science has. Beautiful.
  4. 100 Incredible Lectures from the World's Top Scientists -- it's an astounding collection for everyone to have access to. I'm cheekily delighted by the thought that TED talks will become the next generation's equivalent of the cheesy 16mm educational film: "oh no, not another famous person giving a 20 minute presentation on a life-changing approach to something! It's as naff as Spongebob and that silly multicolour Google logo!"
