
January 11 2013

The future of programming

Programming is changing. The PC era is coming to an end, and software developers now work with an explosion of devices, job functions, and problems that need different approaches from the single machine era. In our age of exploding data, the ability to do some kind of programming is increasingly important to every job, and programming is no longer the sole preserve of an engineering priesthood.


Is your next program for one of these?
Photo credit: Steve Lodefink/Flickr.

Over the course of the next few months, I’m looking to chart the ways in which programming is evolving, and the factors that are affecting it. This article captures a few of those forces, and I welcome comment and collaboration on how you think things are changing.

Where am I headed with this line of inquiry? The goal is to describe the essential skills programmers will need in the coming decade, where they should focus their learning, and to differentiate between short-term trends and long-term shifts.

Distributed computing

The “normal” environment in which coding happens today is quite different from that of a decade ago. Given targets such as web applications, mobile and big data, the notion that a program only involves a single computer has disappeared. For the programmer, that means we must grapple with problems such as concurrency, locking, asynchronicity, network communication and protocols. Even the most basic of web programming will lead you to familiarity with concepts such as caching.

Because of these pressures we see phenomena at different levels in the computing stack. At a high level, cloud computing seeks to mitigate the hassle of maintaining multiple machines and their network; at the application development level, frameworks try to embody familiar patterns and abstract away tiresome detail; and at the language level, concurrency and network programming are made simpler by features offered by languages such as Go or Scala.
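Go and Scala aren't prerequisites for feeling these pressures; even a few lines of Python surface the concurrency and locking concerns that the single-machine era rarely forced on us. A minimal, illustrative sketch (all names here are hypothetical):

```python
import threading

# A shared counter: trivial state in a single-threaded program, but it
# needs explicit locking once work is spread across threads.
class Counter:
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        # Without the lock, interleaved read-modify-write cycles lose updates.
        with self._lock:
            self.value += 1

def run(counter, n):
    for _ in range(n):
        counter.increment()

counter = Counter()
threads = [threading.Thread(target=run, args=(counter, 10_000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 40000 — deterministic only because of the lock
```

Remove the `with self._lock:` line and the total becomes unpredictable; that gap between "works on my laptop" and "works under concurrency" is exactly the shift described above.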

Device computing

Look around your home. There are processors and programming in almost every electronic device you own, which certainly puts your computer in a small minority. Not everybody will be engaged in programming for embedded devices, but many developers will certainly have to learn what it is to develop for a mobile phone. And in the not-so-distant future: cars, drones, glasses and smart dust.

Even within more traditional computing, the rise of the GPU array as an advanced data crunching coprocessor needs non-traditional programming. Different form factors require different programming approaches. Hobbyists and prototypers alike are bringing hardware to life with Arduino and Processing.

Languages and programmers must respond to issues previously the domain of specialists: constrained memory, limited CPU speed, power consumption, radio communication, and hard and soft real-time requirements.

Data computing

The prevailing form of programming today, object orientation, is generally hostile to data. Its focus on behavior wraps up data in access methods, and wraps up collections of data even more tightly. In the mathematical world, data just is; it has no behavior. Yet the rigors of C++ or Java require developers to worry about how it is accessed.

As data and its analysis grow in importance, there’s a corresponding rise in use and popularity of languages that treat data as a first class citizen. Obviously, statistical languages such as R are rising on this tide, but within general purpose programming there’s a bias to languages such as Python or Clojure, which make data easier to manipulate.
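The contrast can be made concrete in a short sketch. The accessor-heavy style below mimics the C++/Java habits described above; the data-first style treats records as plain structures that generic tools already understand (the names and data are purely illustrative):

```python
# Behavior-first style: even a simple record hides its data behind accessors.
class Measurement:
    def __init__(self, celsius):
        self._celsius = celsius

    def get_celsius(self):
        return self._celsius

# Data-first style: plain dictionaries, directly filterable and transformable.
readings = [
    {"city": "Oslo", "celsius": -3},
    {"city": "Lagos", "celsius": 31},
]

# No bespoke accessors needed; the language's own tools operate on the data.
warm = [r["city"] for r in readings if r["celsius"] > 0]
print(warm)  # ['Lagos']
```

The second style is what makes languages like Python and Clojure congenial to data work: the data stays visible, so filtering, mapping, and aggregating need no ceremony.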

Democratized computing

More people than ever are programming. These smart, uncounted, accidental developers wrangle magic in Excel macros, craft JavaScript and glue stuff together with web services such as IFTTT or Zapier. Quite reasonably, they know little about software development, and aren’t interested in it either.

However, many of these casual programmers will find it easy to generate a mess and get into trouble, all while only really wanting to get things done. At best, this is annoying, at worst, a liability for employers. What’s more, it’s not the programmer’s fault.

How can providers of programmable environments serve the “accidental developer” better? Do we need new languages, better frameworks in existing languages? Is it an educational concern? Is it even a problem at all, or just life?

There are hints towards a different future from Bret Victor’s work, and projects such as Scratch and Light Table.

Dangerous computing

Finally, it’s worth examining the house of cards we’re building with our current approach to software development. The problem is simple: the brain can only fit so much inside it. To be a programmer today, you need to be able to execute the program you’re writing inside your head.

When the problem space gets too big, our reaction is to write a framework that makes the problem space smaller again. And so we have operating systems that run on top of CPUs. Libraries and user interfaces that run on top of operating systems. Application frameworks that run on top of those libraries. Web browsers that run on top of those. JavaScript that runs on top of browsers. JavaScript libraries that run on top of JavaScript. And we know it won’t stop there.

We’re like ambitious waiters stacking one teacup on top of another. Right now, it looks pretty wobbly. We’re making faster and more powerful CPUs, but getting the same kind of subjective application performance that we did a decade ago. Security holes emerge in frameworks that put large numbers of systems at risk.

Why should we use computers like this, simultaneously building a house of cards and confining computing power to that which the programmer can fit in their head? Is there a way to hit reset on this view of software?

Conclusion

I’ll be considering these trends and more as I look into the future of programming. If you have experience or viewpoints, or are working on research to do things radically differently, I’d love to hear from you. Please leave a comment on this article, or get in touch.


October 09 2012

Tracking Salesforce’s push toward developers

Have you ever seen Salesforce’s “no software” graphic? It’s the word “software” surrounded by a circle with a red line through it. Here’s a picture of the related (and dancing) “no software” mascot.

Now, if you consider yourself a developer, this is a bit threatening, no? Imagine sitting at a Salesforce event in 2008 in Chicago while Salesforce.com’s CEO, Marc Benioff, swiftly works an entire room of business users into an anti-software frenzy. I was there to learn about Force.com, and I’ll summarize the message I understood four years ago as “Not only can companies benefit from Salesforce.com, they also don’t have to hire developers.”

The message resonated with the audience. Salesforce had been using this approach for a decade: Don’t buy software you have to support, maintain, and hire developers to customize. Use our software-as-a-service (SaaS) instead.  The reality behind Salesforce’s trajectory at the time was that it too needed to provide a platform for custom development.

Salesforce’s dilemma: They needed developers

This “no software” message was enough for the vast majority of the small-to-medium-sized business (SMB) market, but to engage with companies at the largest scale, you need APIs and you need to be able to work with developers. At the time, in 2008, Salesforce was making moves toward the developer community. First there was Apex, then there was Force.com.

In 2008, I evaluated Force.com, and while capable, it didn’t strike me as something that would appeal to most developers outside of existing Salesforce customers.  Salesforce was aiming at the corporate developers building software atop competing stacks like Oracle.  While there were several attempts to sell it as such, it wasn’t a stand-alone product or framework.  In my opinion, no developer would assess Force.com and opt to use it as the next development platform.

This 2008 TechCrunch article announcing the arrival of Salesforce’s Developer-as-a-Service (DaaS) platform serves as a reminder of what Salesforce had in mind. They were still moving forward with an anti-software message for the business while continuing to make moves into the developer space. Salesforce built a capable platform. Looking back at Force.com, it felt more like an even more constrained version of Google App Engine. In other words, capable and scalable, but at the time a bit constraining for the general developer population. Don’t get me wrong: Force.com wasn’t a business failure by any measure; they have an impressive client list even today, but what they didn’t achieve was traction and awareness among the developer community.

2010: Who bought Heroku? Really?

When Salesforce.com purchased Heroku, I was initially surprised. I didn’t see it coming, but it made perfect sense after the fact. Heroku is very much an analog to Salesforce for developers, and Heroku brought something to Salesforce that Force.com couldn’t achieve: developer authenticity and community.

Heroku, like Salesforce, is the opposite of shrink-wrapped software. There’s no investment in on-premise infrastructure, and once you move to Heroku — or any other capable platform-as-a-service (PaaS), like Joyent — you question why you would ever bother doing without such a service.  As a developer, once you’ve made the transition to never again having to worry about running “yum update” on a CentOS machine or worry about kernel patches in a production deployment, it is difficult to go back to a world where you have to worry about system administration.

Yes, there are arguments to be made for procuring your own servers and taking on the responsibility for system administration: backups, worrying about disks, and the 100 other things that come along with owning infrastructure. But those arguments really only start to make sense once you achieve the scale of a large multi-national corporation (or after a freak derecho in Reston, Va).

Jokes about derecho-prone data centers aside, this market is growing up, and with it we’re seeing an unbeatable trend of obsoleting yesterday’s system administrator.  With capable options like Joyent and Heroku, there’s very little reason (other than accounting) for any SMB to own hardware infrastructure. It’s a large capital expense when compared to the relatively small operational expense you’ll throw down to run a scalable architecture on a Heroku or a Joyent.

Replace several full-time operations employees with software-as-a-service; shift the cost to developers who create real value; insist on a proper service-level agreement (SLA); and if you’re really serious about risk mitigation, use several platforms at once.  Heroku is exactly the same play Salesforce made for customer relationship management (CRM) over the last decade, except this time they are selling PostgreSQL and a platform to run applications.

Heroku is closer to what Salesforce was aiming for with Force.com.  Here are two things to note about this Heroku acquisition almost two years after the fact:

  1. Force.com was focused on developers — a certain kind of developer in the “enterprise.” While it was clear that Salesforce had designs on using Force.com to expand the market, the existing marketing and product management function at Salesforce ended up creating something that was way too connected to the existing Salesforce brand, along with its anti-developer message.
  2. Salesforce.com’s developer-focused acquisitions are isolated from the Salesforce.com brand (on purpose?) — When Benioff discussed Heroku at Dreamforce, he made it clear that Heroku would remain “separate.”  While it is tough to know how “separate” Heroku remains, the brand has changed very little, and I think this is the important thing to note two years after the fact. Salesforce understands that they need to attract independent developers and they understand that the Salesforce brand is something of a “scarecrow” for this audience. They invested an entire decade in telling businesses that software isn’t necessary, and senior management is too smart to confuse developers with that message.

Investments point toward greater developer outreach: Sauce Labs

This brings me to Sauce Labs — a company that recently raised $3 million from “Triage Ventures as well as [a] new investor Salesforce.com,” according to Jolie O’Dell at VentureBeat.

Sauce Labs provides a hosted web testing platform.  I’ve used it for some one-off testing jobs, and it is impressive.  You can spin up a testing machine in about two minutes from an array of operating systems, mobile devices, and browsers, and then run a test script either in Selenium or WebDriver. The platform can be used for acceptance testing, and Jason Huggins and John Dunham’s emphasis of late has been mobile testing.  Huggins supported the testing grid at Google, and he started Selenium while at ThoughtWorks in 2004. By every measure, Sauce Labs is a developer’s company as much as Heroku.

Sauce Labs, like Heroku before it, also satisfies the Salesforce.com analogy perfectly. Say I have a company that develops a web site. Well, if I’m deploying this application to a platform like Joyent or Heroku continuously, I also need to be able to support some sort of continuous automated testing system. If I need to test on an array of browsers and devices, would I procure the necessary hardware infrastructure to set up my own testing infrastructure, or … you can see where I’m going.

I also think I can see where Salesforce is going. They didn’t acquire Sauce Labs, but this investment is another data point and another view into what Salesforce.com is paying attention to. I think it has taken them 4-5 years, but they are continuing a push toward developers. Heroku stood out from the list of Salesforce acquisitions: It wasn’t CRM, sales, or marketing focused; it was a pure technology play. Salesforce’s recent investments, from Sauce Labs to Appirio to Urban Airship, suggest that Salesforce is becoming more relevant to the individual developer who is uninterested in Salesforce’s other product offerings.

Some random concluding conjecture

Although I think it would be too expensive, I wonder what would happen if Salesforce acquired GitHub. GitHub just received an unreal investment ($100M), so I just don’t see it happening. But if you were to combine GitHub, Heroku, and Sauce Labs into a single company, you’d have a one-stop shop for the majority of development and production infrastructure that people are paying attention to.  Add an Atlassian to the mix, and it would be tough to avoid that company.

This is nothing more than conjecture, but I do get the sense that there has been an interesting shift happening at Salesforce ever since 2008. I think the next few years are going to see even more activity.

September 27 2012

Four short links: 27 September 2012

  1. Paying for Developers is a Bad Idea (Charlie Kindel) — The companies that make the most profit are those who build virtuous platform cycles. There are no proof points in history of virtuous platform cycles being created when the platform provider incents developers to target the platform by paying them. Paying developers to target your platform is a sign of desperation. Doing so means developers have no skin in the game. A platform where developers do not have skin in the game is artificially propped up and will not succeed in the long run. A thesis illustrated with his experience at Microsoft.
  2. Learnable Programming (Bret Victor) — deconstructs Khan Academy’s coding learning environment, and explains Victor’s take on learning to program. A good system is designed to encourage particular ways of thinking, with all features carefully and cohesively designed around that purpose. This essay will present many features! The trick is to see through them — to see the underlying design principles that they represent, and understand how these principles enable the programmer to think. (via Layton Duncan)
  3. Tablet as External Display for Android Smartphones — new app, in beta, letting you remote-control via a tablet. (via Tab Times)
  4. Clay Shirky: How The Internet Will (One Day) Transform Government (TED Talk) — There’s no democracy worth the name that doesn’t have a transparency move, but transparency is openness in only one direction, and being given a dashboard without a steering wheel has never been the core promise a democracy makes to its citizens.

July 30 2012

Open source won

I heard the comments a few times at the 14th OSCON: The conference has lost its edge. The comments resonated with my own experience — a shift in demeanor, a more purposeful, optimistic attitude, less itching for a fight. Yes, the conference has lost its edge; it doesn’t need one anymore.

Open source won. It’s not that an enemy has been vanquished or that proprietary software is dead; there’s simply not much left to argue about when it comes to adopting open source. After more than a decade of the low-cost, lean startup culture successfully building on open source tools, open source is clearly a legitimate, mainstream option for technology tools and innovation.

And open source is not just for hackers and startups. A new class of innovative, widely adopted technologies has emerged from the open source culture of collaboration and sharing — turning the old model of replicating proprietary software as open source projects on its head. Think GitHub, D3, Storm, Node.js, Rails, Mongo, Mesos or Spark.

We see more enterprise and government folks intermingling with the stalwart open source crowd who have been attending OSCON for years. And, these large organizations are actively adopting many of the open source technologies we track, e.g., web development frameworks, programming languages, content management, data management and analysis tools.

We hear fewer concerns about support or needing geek-level technical competency to get started with open source. In the Small and Medium Business (SMB) market we see mass adoption of open source for content management and ecommerce applications — even for self-identified technology newbies.

MySQL appears as popular as ever and remains open source after three years of Oracle control, while Microsoft is pushing open source JavaScript as a key part of its web development environment and offering more explicit support for other open source languages. Oracle and Microsoft are not likely to radically change their business models, but their recent efforts show that open source can work in many business contexts.

Even more telling:

  • With so much of the consumer web undergirded with open source infrastructure, open source permeates most interactions on the web.
  • The massive, $100 million, GitHub investment validates the open collaboration model and culture — forking becomes normal.

What does winning look like? Open source is mainstream and a new norm — for startups, small business, the enterprise and government. Innovative open source technologies are creating new business sectors and ecosystems (e.g., the distribution options, tools and services companies building around Hadoop). And what’s most exciting is the notion that the collaborative, sharing culture that permeates the open source community is spreading to the enterprise and government with the same impact on innovation and productivity.

So, thanks to all of you who made the open source community a sustainable movement, the ones who were there when … and all the new folks embracing the culture. I can’t wait to see the new technologies, business sectors and opportunities you create.


July 25 2012

Inside GitHub’s role in community-building and other open source advances

In this video interview, Matthew McCullough of GitHub discusses what they’ve learned over time as they grow and watch projects develop there. Highlights from the full video interview include:

  • How GitHub builds on Git’s strengths to allow more people to collaborate on a project [Discussed at the 00:30 mark]
  • The value of stability and simple URLs [Discussed at the 02:05 mark]
  • Forking as the next level of democracy for software [Discussed at the 04:02 mark]
  • The ability to build a community around a GitHub repo [Discussed at the 05:05 mark]
  • GitHub for education, and the use of open source projects for university work [Discussed at the 06:26 mark]
  • The value of line-level comments in source code [Discussed at the 09:36 mark]
  • How to be a productive contributor [Discussed at the 10:53 mark]
  • Tools for Windows users [Discussed at the 11:56 mark]

You can view the entire conversation in the following video:


June 05 2012

The software professional vs the software artist

I hope that James Turner's post on "The overhead of insecure infrastructure" was ironic or satiric. The attitude he expresses is all too common, and frankly, is the reason that system administrators and other operations people can't keep their systems secure.

Why do we have to deal with vulnerabilities in operating systems and applications? It's precisely because of prima donna software developers who think they're "artists" and can't be bothered to take the time to do things right. That, and a long history of management that was more interested in meeting ship dates than shipping secure software; and the never ending and always escalating battle between the good guys and the bad guys, as black hats find new vulnerabilities that no one thought of a week ago, let alone a few years ago.

Yes, that's frustrating, but that's life. If a developer in my organization said that he was too good and creative to care about writing secure code, he would be out on his ear. Software developers are not artistes. They are professionals, and the attitude James describes is completely unprofessional and entirely too common.

One of the long-time puzzles in English literature is Jonathan Swift's "A Modest Proposal for Preventing the Children of Poor People From Being a Burden on Their Parents or Country, and for Making Them Beneficial to the Publick." It suggests solving the problem of famine in Ireland by cannibalism. Although Swift is one of English literature's greatest satirists, the problem here is that he goes too far: the piece is just too coldly rational, and never gives you the sly look that shows something else is going on. Is Turner a latter-day Swift? I hope so.


May 23 2012

Clojure's advantage: Immediate feedback with REPL

Chas Emerick (@cemerick) is the co-author of "Clojure Programming" along with Brian Carper and Christophe Grand. He maintains a busy blog at cemerick.com and he also produces "Mostly Lazy … a Clojure podcast".

I asked Chas to enumerate some of the topics that would make a difference to developers: something that would attract attention to Clojure as a language, so we wouldn't spend time talking about yet another syntax. One of the first things he immediately mentioned was REPL. Writing code in Clojure is often about making changes and immediately seeing your results. Clojure has emphasized this shell-like approach to development and created an environment that allows for immediate evaluation of code and incorporating changes into a running process.

Clojure differs from other languages in that this interactive shell isn’t an afterthought. Ruby’s IRB and Java’s BeanShell are similar attempts at interactivity, but they are not primary features of those languages. With Clojure, the REPL is built in, and you can connect to any running Clojure process to modify and execute code. In this interview we discuss some of the possibilities this opens up for Clojure developers.
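Clojure's connect-to-a-live-process workflow is hard to reproduce fully in other languages, but a rough Python analogy (purely illustrative; the function name is made up) conveys the core idea: replacing a definition inside a running process, without a restart, the way a Clojure REPL makes routine:

```python
# A long-running process holds a reference to a function...
def handler(x):
    return x * 2

print(handler(10))  # 20

# ...and new source is loaded into the live namespace — as a connected
# REPL would do — replacing the definition while the process keeps running.
source = "def handler(x):\n    return x * 3\n"
exec(source, globals())

print(handler(10))  # 30
```

In Clojure this isn't a party trick built on `exec`; it's the default development loop, with tooling (the REPL, nREPL connections) designed around it.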

The full interview is embedded below and available here. For the entire interview transcript, click here.

Highlights from interview:

On what is unique about REPL

"...what's really unique about Clojure is that most people's workflow when developing and using Clojure is tightly tied to the REPL, using the dynamic interactive development capabilities that the REPL provides to really boost your productivity and give you a very immediate sense of control over both the Clojure runtime, the application, programming or service that you're building." [Discussed 00:52]

How does REPL affect your development?

"Generally what you do in Clojure is you start up a JVM instance that is running the Clojure runtime and starts a REPL that you connect to ... then you stay connected to that running Clojure runtime for hours, days. I've had Clojure environments running for weeks in an interactive development setting, where you gradually massage the code that's running within that runtime that corresponds to files you have on disc.

"You can choose what code to load into that environment at any time, massage the data that's being processed by your application. It's a great feedback mechanism and gives you an immediate, fine‑grain sense of control over what you're doing." [Discussed 01:18]

On using REPL to deploy production patches

"It's true, you can start up a REPL running on Heroku right now, connect to it and modify your application. Everything will work as you would expect if you happen to be running the application using Foreman locally.

"You don't want to be, in general, modifying your production environments from an interactive standpoint. You want to have a repeatable process ... Depending on the circumstances, it can be reasonable in a 'fire drill' situation to push a critical, time‑sensitive patch out to production ... 99% of the time you probably shouldn't reach for it, but it's very good to know that it's there." [Discussed 05:44]

On using REPL for direct access to production statistics

"JMX is great in terms of providing a structured interface for doing monitoring. But you need to plan ahead of time for the things you're going to monitor for and make sure you have the right extensions to monitor the things that you care about.

"Having a REPL available to connect to in every environment — whether it's development, user acceptance, functional testing or production — means that when you need to, you can get in there and write a one‑off function. Pull some data from this database, see what's going on with this function, capture some data that you wouldn't normally be capturing, stuff that may be far too large to log or to practically get access through JMX. It's a great tool." [Discussed 05:44]

OSCON 2012 — Join the world's open source pioneers, builders, and innovators July 16-20 in Portland, Oregon. Learn about open development, challenge your assumptions, and fire up your brain.

Save 20% on registration with the code RADAR


May 01 2012

Want to get ahead in DevOps? Expand your expertise and emotional intelligence

Kate Matsudaira (@katemats), VP of engineering at Decide, will be hosting a session, "Leveling up — Taking your operations and engineering role to the next level," at Velocity 2012 in June. In the following interview, Matsudaira addresses a few of the issues she will explore in her session, including the changing roles and required skill sets for today's developers and engineers.

How are operations and engineering jobs changing?

Kate Matsudaira: Technology has been advancing rapidly — it wasn't more than a decade ago when most software involved shrink-wrapped boxes. Things have evolved, though, and with open source, web, and mobile, the landscape has changed. All of the advances in languages, tools, and computing have created a plethora of options to build and create solutions and products.

In the past, engineering roles were specific, and people tended to specialize in one platform or technology. However, as systems have become more complex, and more organizations adopt virtualization, cloud computing, and software as a service, the line between software engineer, operator, and system administrator has become more blurry. It is no longer sufficient to be an expert in one area. People are expected to continuously grow by keeping up with new technology and offering ideas and leadership outside their own purview. As engineering organizations build systems using more and more third-party frameworks, libraries, and services, it's increasingly necessary for engineers to evaluate technologies not solely on their own merits, but also as they fit into the existing enterprise ecosystem.

What are the most important soft skills developers and engineers need to cultivate?

Kate Matsudaira: When it comes to our careers as engineers, learning new technologies and solving problems has never been much of a challenge. And I would argue that most of us actually really enjoy it — and that is part of why we are good engineers. So, certainly understanding the tools and possessing the knowledge to do your work is important.

Putting knowledge and learning aside, there are many soft skill areas that can impact an individual's success and prevent them from realizing their potential. For example, things like communication, influence, attitude, time management, etc., are just the start of a long list of areas to improve.

The specifics may be different for each individual. However, almost everyone — myself included — could stand to improve their communication skills. Whether it is managing up to your boss, getting buy-in from stakeholders, or helping resolve a technical debate with your team members, learning to communicate clearly and effectively can be a challenge for some people.

Communication is the cornerstone of leadership. Learning how to bridge business and technology can help technologists take their careers to the next level. Organizations value people who can clearly explain the trade-offs or return on investment of a feature, or the person who can help their coworkers understand the internal workings of systems. Becoming a better communicator is good career advice for pretty much anyone.

Velocity 2012: Web Operations & Performance — The smartest minds in web operations and performance are coming together for the Velocity Conference, being held June 25-27 in Santa Clara, Calif.

Save 20% on registration with the code RADAR20

Your Velocity 2012 session description mentions "emotional intelligence." Why is that important?

Kate Matsudaira: Improving my "emotional intelligence" is something I have had to work on a lot personally to grow in my career. Emotional intelligence is the idea that there is a set of qualities that helps one excel to a higher level than another person with similar cognitive skills (for more on the topic, there is a great article here on what makes a leader).

The definition of emotional intelligence includes a cocktail of self-awareness, self-regulation, empathy, motivation, and social skill — and who wouldn't want to be better in those areas? But for me, as an introvert who tends to like the company of my computer better than most people, things like social skills and self-awareness weren't exactly my strengths.

Since emotional intelligence can be learned, one can find ways to improve and be better in these areas. The first step to making progress is to really understand where you need to improve, which can be a challenge if you are lacking in self-awareness. However, making improvements in these areas can help you become more successful without becoming something that you are not (i.e. introverts can realize their potential without becoming extroverts).

What are specific strategies geeks can use to improve communications with non-geeks?

Kate Matsudaira: There are a lot of strategies I have come up with that work well. I tend to think about social situations like pattern recognition, and come up with frameworks and templates to help me muddle through — and sometimes flourish — in situations that are out of my comfort zone.

When it comes to communicating with non-geeky types, here is a good outline to get you started:

  1. Look at the situation from their point of view. What knowledge do they have? What are the goals or outcomes that matter to them? What are they trying to achieve, and how does your involvement help/hinder their path? Empathy will help you learn to appreciate their work and help them relate to you as a teammate and colleague.
  2. Get on the same page. Listen to their point of view and make sure you understand it. A great way to do this is to repeat back what they tell you (and not in a patronizing way, but in a constructive way to ensure you heard and understood correctly). What are their objectives? What are their concerns? How do they want you to help?
  3. Make them feel smart. Non-technical or non-geeky people often complain that they can't understand developers or engineers, and some technologists like it that way. And while this may make the technologist feel smart, it also creates a barrier. The greatest technologists know how to communicate so that everyone understands. Take the time to explain technical issues at a level they can understand in the context that matters to them (such as business metrics, customer impact, or revenue potential).
  4. Reassure them. Since they may not know the technology, what they want to know is that you (and your team) have things under control. They want to know the end result and timeline, and most of the time they don't want to know about the "how." Keep this in mind and help them achieve peace of mind knowing that you are on top of handling the task at hand.

Of course, each situation is different, but really looking at the situation from the other person's point of view and helping them is a great way to build relationships and improve communication.

This interview was edited and condensed.


April 13 2012

Developer Week in Review: Everyone can program?

I'm devoting this week's edition of the WIR to a single news item. Sometimes something gets stuck in my craw, and I have to cough it out or choke on it (hopefully none of you are reading this over lunch ...).

Yet another attempt to create programming for dummies ...

Just today, I came across a news article discussing a recent Apple patent application for a technology to allow "non-programmers" to create iOS applications.

This seems to be the holy grail of software design, to get those pesky overpaid software developers out of the loop and let end-users create their own software. I'll return to the particulars of the Apple application in a moment, but first I want to discuss the more general myth, because it is a myth, that there's some magic bullet that could let lay people create applications.

The underlying misunderstanding is that it is something technical that is standing between "Joe Sixpack" and the software of his dreams. The line of reasoning goes that because languages are hard to understand and require specialized knowledge, there's a heavy learning curve before a new person could be productive. In reality, the particulars of a specific platform are largely irrelevant to whether a skilled software engineer can be productive in it, though there are certainly languages and operating systems that are easier to code for than others. But the real difference between a productive engineer and a slow one lies in how good the engineer is at thinking about software, not C or Java or VB.

Almost without exception, every software engineer I talk to thinks it's insane when an employer would rather hire someone with two years of total experience, all of it in a specific language, than someone with 10 years of experience across a variety of languages, all other factors being equal. When I think about a problem, I don't think in Java or Objective-C; I think in algorithms and data structures. Then, once I understand the problem, I implement it in whatever language is appropriate.

I believe that a lot of the attitude one sees toward software engineering — that it's an "easy" profession that "anyone" could do if it weren't for the "obfuscated" technology — comes from the fact that it's a relatively well-paid profession that doesn't require a post-graduate degree. I match or out-earn most people slaving away with doctorates in the sciences, yet I only have a lowly bachelor's, and not even a BS. "Clearly," we must be making things artificially hard so we can preserve our fat incomes.

In a sense, they are right, in that it doesn't take huge amounts of book learning to be a great programmer. What it takes is an innate sense of how to break apart problems and see the issues and pitfalls that might reach out to bite you. It also takes a certain logical bent of mind that allows you to get to the root of the invariable problems that are going to occur.

Really good software engineers are like great musicians. They have practiced their craft, because nothing comes for free, but they also have a spark of something great inside them to begin with that makes them special. And the analogy is especially apt because while there are always tools being created to make it easier for "anyone" to create music, it still takes a special talent to make great music.

Which brings us back to Apple's patent. Like most DWIM (do what I mean) technologies for programming, it handles a very specific and fairly trivial set of applications, mainly designed for things like promoting a restaurant. No one is going to be writing "Angry Birds" using it. Calling it a tool to let anyone program is like saying that Dreamweaver lets anyone create a complex ecommerce site integrated into a back-end inventory management system.

The world is always going to need skilled software engineers, at least until we create artificial intelligences capable of developing their own software based on vague end-user requirements. So, we're good to go for at least the next 10 years.

Fluent Conference: JavaScript & Beyond — Explore the changing worlds of JavaScript & HTML5 at the O'Reilly Fluent Conference (May 29 - 31 in San Francisco, Calif.).

Save 20% on registration with the code RADAR20

Got news?

Please send tips and leads here.


January 20 2012

Developer Week in Review: Early thoughts on iBooks Author

One down, two to go, Patriots-wise. Thankfully, this week's game is on Sunday, so it doesn't conflict with my son's 17th birthday on Saturday. They grow up so quickly; I can remember him playing with his Comfy Keyboard, now he's writing C code for robots.

A few thoughts on iBooks Author and Apple's textbook move

Thursday's announcement of Apple's new iBooks Author package isn't developer news per se, but I thought I'd drop in a few initial thoughts before jumping into the meat of the WIR because it will have an impact on the community in several ways.

Most directly, it is another insidious lock-in that Apple is wrapping inside a candy-covered package. Since iBooks produced with the tool can only be viewed in full on iOS devices, textbooks and other material produced with iBooks Author will not be available (at least in the snazzy new interactive form) on Kindles or other ereaders. If Apple wanted to play fair, it would make the new iBooks format an open standard. Of course, that would cost Apple its cut of the royalties, as well as the all-important control of the user experience that Steve Jobs instilled as a core value in the company.

On a different level, this could radically change the textbook and publishing industry. It will make it easier to keep textbooks up to date and start to loosen the least-common-denominator stranglehold that huge school districts have on the textbook creation process. On the other hand, I can see a day when pressure from interest groups results in nine different textbooks being used in the same class, one of which ignores evolution, one of which emphasizes the role of Antarctic-Americans in U.S. history, etc.

It's also another step in the disintermediation of publishing since the cost of getting your book out to the world just dropped to zero (not counting proofreading, indexing, editing, marketing, and all the other nice things a traditional publisher does for a writer). I wonder if Apple is going to enforce the same puritanical standards on iBooks as they do on apps. What are they going to do when someone submits a My Little Pony / Silent Hill crossover fanfic as an iBook?

Another item off my bucket list

I've been to Australia. I've had an animal cover book published. And now I've been called a moron (collectively) by Richard Stallman.

The occasion was the previously mentioned panel on the legacy of Steve Jobs, in which I participated this past weekend. As could have been expected, Stallman started in by describing Jobs as someone the world would have been better off without. He spent the rest of the hour defending the position that it doesn't matter how unusable the free alternative to a proprietary platform is, only that it's free. When we disagreed, he shouted us down as "morons."

As I've mentioned before, that position makes a few invalid assumptions. One is that people's lives will be better if they use a crappy free software package over well-polished commercial products. In reality, the perils of commercial software that Stallman demonizes so consistently are largely hypothetical, whereas the usability issues of most consumer-facing free software are very real. For the 99.999% of people who aren't software professionals, the important factor is whether the darn thing works, not if they can swap out an internal module.

The other false premise at play here is that companies are Snidely Whiplash wanna-bes that go out of their way to oppress the masses. Stallman, to his credit as a savvy propagandist, has co-opted the slogans of the Occupy Wall Street movement, referring to the 1% frequently. The reality is that when companies try to pull shady stunts, especially in the software industry, they usually get caught and have to face the music. Remember the furor over Apple's allegedly accidental recording of location data on the iPhone? Stallman's dystopian future, where corporations use proprietary platforms as a tool of subjugation, has pretty much failed every time it's actually been tried on the ground. I'm not saying corporations are angels, or even that they have the consumer's best interests in mind; it's just that they aren't run by demonic beings that eat babies and plot the enslavement of humanity.

Achievement unlocked: Erased user's hard drive

Sometimes life as a software engineer may seem like a game, but Microsoft evidently wants to turn it into a real one. The company has announced a new plug-in for Visual Studio that lets you earn achievements for coding practices and other developer-related activities.

Most of them are tongue in cheek, but I'm terrified that we may start seeing these achievements in live production code as developers compete to earn them all. Among the more fear-inspiring:

  • "Write 20 single letter class-level variables in one file. Kudos to you for being cryptic!"
  • "Write a single line of 300 characters long. Who needs carriage returns?"
  • "More than 10 overloads of a method. You could go with this or you could go with that."
Strata 2012 — The 2012 Strata Conference, being held Feb. 28-March 1 in Santa Clara, Calif., will offer three full days of hands-on data training and information-rich sessions. Strata brings together the people, tools, and technologies you need to make data work.

Save 20% on registration with the code RADAR20

Got news?

Please send tips and leads here.


January 06 2012

Top Stories: January 2-6, 2012

Here's a look at the top stories published across O'Reilly sites this week.

The feedback economy
We're moving beyond an information economy. The efficiencies and optimizations that come from constant and iterative feedback will soon become the norm for businesses and governments.

Epatients: The hackers of the healthcare world
The epatient community uses digital tools and the connective power of the Internet to empower patients. Here, Fred Trotter offers epatient resources and first steps.

The three topics that will define the developer world in 2012
It's a brand new year, time to look ahead to the stories that will have developers talking in 2012. Mobile will remain a hot topic, the cloud is absorbing everything, and jobs appear to be heading back to the U.S.

Understanding randomness is a double-edged sword
While Leonard Mlodinow's "The Drunkard's Walk" offers a good introduction to probabilistic thinking, it carries two problems: First, it doesn't uniformly account for skill. Second, when we're talking probability and statistics, we're talking about interchangeable events.

Traditional vs self-publishing: Neither is the perfect solution
In this video podcast, author Dan Gillmor talks about the pros and cons of traditional publishing versus self-publishing.


Tools of Change for Publishing, being held February 13-15 in New York, is where the publishing and tech industries converge. Register to attend TOC 2012.

October 10 2011

Moneyball for software engineering

While sports isn't always a popular topic in software engineering circles, if you find yourself sitting in a theater watching "Moneyball," you might wonder if the story's focus on statistics and data can be applied to the technical domain. I think it can, and gradually, a growing number of software companies are starting to use Moneyball-type techniques to build better teams. Here, I'd like to provide you a brief introduction to some of these ideas and how they might be applied.

The basic idea of the Moneyball approach is that organizations can use "objective" statistical analysis to be smarter and build more competitive teams, which means:

  • Studying player skills and contributions much more carefully and gathering statistics on many more elements of what players do to categorize their skills.
  • Analyzing team results and player statistics to discover patterns and better determine what skills and strategies translate to winning.
  • Identifying undervalued assets, namely those players, skills, and strategies that are overlooked but which contribute heavily to winning.
  • Creating strategies to obtain undervalued assets and build consistent, winning teams.

While smaller-market baseball teams such as the Oakland A's were the original innovators of Moneyball, these techniques have now spread to all teams. The "old-school" way of identifying talent and strategies based on experience and subjective opinion has given way to a "new school" that relies on statistical analysis of situations, contributions, and results. Many sports organizations now employ statisticians with wide-ranging backgrounds to help evaluate players and prospects, and even to analyze in-game coaching strategies. Paul DePodesta, who was featured in both the book and the film, majored in economics at Harvard.

In software engineering, most of us work in teams. But few of us utilize metrics to identify strengths and weaknesses, set and track goals, or evaluate strategies. Like baseball teams of the past, our strategies for hiring, team building, project management, performance evaluation, and coaching are mostly based on experience. For those willing to take the time to make a more thorough analysis through metrics, there is an opportunity to make software teams better — sometimes significantly better.

Measure skills and contributions

It starts with figuring out ways to measure the variety of skills and contributions that individuals make as part of software teams. Consider how many types of metrics could be gathered and prove useful for engineering teams. For example, you could measure:

  • Productivity by looking at the number of tasks completed or the total complexity rating for all completed tasks.
  • Precision by tracking the number of production bugs and related customer support issues.
  • Utility by keeping track of how many areas someone works on or covers.
  • Teamwork by tallying how many times someone helps or mentors others, or demonstrates behavior that motivates teammates.
  • Innovation by noting the times when someone invents, innovates, or demonstrates strong initiative to solve an important problem.
  • Intensity by tracking relative levels and trends (increases or decreases) in productivity, precision, or other metrics.
  • Effort by looking at the number of times someone goes above and beyond what is expected to fix an issue, finish a project, or respond to a request.

These might not match the kinds of metrics you have used or discussed before for software teams. But gathering and analyzing metrics like these for individuals and teams will allow you to begin to characterize their dynamics, strengths and weaknesses.


Web 2.0 Summit, being held October 17-19 in San Francisco, will examine "The Data Frame" — focusing on the impact of data in today's networked economy.

Save $300 on registration with the code RADAR

Measure success

In addition to measuring skills and contributions, applying Moneyball strategies requires a way to measure success. In sports you have wins and losses, so that's much simpler. In software, a team's success or failure can be gauged in many ways, depending on the kind of software it builds, such as:

  • Looking at the number of users acquired or lost.
  • Calculating the impact of software enhancements that deliver benefit to existing users.
  • Tracking the percentage of trial users who convert to customers.
  • Factoring in the number of support cases resulting from user problems.

From a Moneyball perspective, measuring the relative success of software projects and teams is a key step to an objective analysis of team dynamics and strategies. In the end, you'll want to replicate the patterns of teams with excellent results and avoid the patterns of teams with poor results.

Focus on relative comparisons

One important Moneyball concept is that exact values don't matter, relative comparisons do. For example, if your system of measurement shows that one engineer is 1% more productive than another, what that really tells you is that they are essentially the same in productivity. But if one engineer is 50% or 100% more productive than another, then that shows an important difference between their skills.

You can think of metrics as a system for categorization. Ultimately, you are trying to rate individuals or teams as high, medium, or low in a set of chosen categories based on their measured values (or pick any other relative scale). You want to know whether an engineer has high productivity compared to others or shows a high level of teamwork or effort. This is the information you need at any point in time or over the long term to identify patterns of success and areas for improvement on teams. For example, if you know that your successful teams have more engineers with high levels of innovation, intensity, or effort, or that 80% of the people on a poorly performing team exhibit a low level of teamwork, then those are useful insights.

Knowing that your metrics are for categorization frees you up from needing exact, precise measurements. This means a variety of mechanisms to gather metrics can be used, including personal observation, without as much concern that the metrics are completely accurate. You can assume a certain margin of error (plus or minus 5-10% for example) and still gather meaningful data that can be useful in determining the relative level of people's skills and a team's success.
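That categorization step can be sketched directly. The 25% band below is an arbitrary threshold of my choosing, wide enough to absorb the 5-10% margin of error allowed for above, and it is relative to the team mean rather than any absolute scale.

```python
def categorize(values):
    """Bucket raw metric values into relative 'low'/'medium'/'high' bands.

    Exact precision doesn't matter: anything within 25% of the team mean
    counts as 'medium' (an illustrative band, not a prescribed one).
    """
    mean = sum(values.values()) / len(values)
    bands = {}
    for name, value in values.items():
        if value > mean * 1.25:
            bands[name] = "high"
        elif value < mean * 0.75:
            bands[name] = "low"
        else:
            bands[name] = "medium"
    return bands

print(categorize({"alice": 10, "bob": 30, "carol": 20}))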

Identify roles on the team

With metrics like those discussed above, you can begin to understand the dynamics of teams and the contributions of individuals that lead to success. A popular concept in sports that is applicable to software engineering is that of "roles." On sports teams, you have players that fill different roles; for example, in baseball you have roles like designated hitters, pitchers, relievers, and defensive specialists. Each role has specific attributes and can be analyzed by specific statistics.

Roles and the people in those roles are not necessarily static. Some people are truly specialists, happily filling a specific role for a long period of time. Some people fill one role on a team for a while, then move into a different role. Some people fill multiple roles. The value of identifying and understanding the roles is not to pigeon-hole people, but to analyze the makeup and dynamics of various teams, particularly with an eye toward understanding the mix of roles on successful teams.

By gathering and examining metrics for software engineering teams, you can begin to see the different types of roles that comprise the teams. Using sports parlance, you can imagine that a successful software team might have people filling a variety of roles, such as:

  • Scorers (engineers who have a high level of productivity or innovation).
  • Defenders (engineers who have a high level of precision or utility).
  • Playmakers (engineers who exhibit a high level of teamwork).
  • Motivators (engineers who exhibit a high level of intensity or effort).

One role or skill is not necessarily more valuable than another. Objective analysis of results may, in fact, lead you to conclude that certain roles are underappreciated. Having engineers who are strong in teamwork or effort, for example, may be as crucial to team success as having those who are highly productive. Also, you might come to realize that the perceived value of other roles or skills doesn't match the actual results. For example, you might find that teams filled with predominantly highly productive engineers might not necessarily be more successful (assuming that they lack in other areas, such as precision or utility).
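Assuming each engineer's metrics have already been bucketed into relative high/medium/low bands, mapping engineers to these sports-style roles is a small lookup. The metric-to-role table below is one reading of the list above, not an established taxonomy.

```python
def assign_roles(bands):
    """Map each engineer's 'high' metric bands to sports-style roles.

    bands: {engineer: {metric: 'low' | 'medium' | 'high'}}
    The metric-to-role table is an illustrative reading of the roles above.
    """
    role_of = {
        "productivity": "scorer", "innovation": "scorer",
        "precision": "defender", "utility": "defender",
        "teamwork": "playmaker",
        "intensity": "motivator", "effort": "motivator",
    }
    return {
        engineer: sorted({role_of[metric]
                          for metric, band in metrics.items()
                          if band == "high" and metric in role_of})
        for engineer, metrics in bands.items()
    }

print(assign_roles({"alice": {"productivity": "high", "teamwork": "medium"},
                    "bob": {"teamwork": "high", "effort": "high"}}))
```

Note that an engineer can land in several roles at once, which matches the article's point that roles describe contributions, not fixed job titles.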

Develop strategies to improve teams

With metrics to help you identify and validate the roles that people play on software teams, and thereby identify the relative strengths and weaknesses of your teams, you can implement strategies to improve those teams. If you identify that successful teams have certain strengths that other teams lack, you can work on strengthening those aspects. If you identify "undervalued assets," meaning skills or roles that weren't fully appreciated, you can bring greater attention to developing or adding those to a team.

There are a variety of techniques you can use to add more of those important skills and undervalued assets to your teams. For example, using Moneyball-like sports parlance, you can:

  • Recruit based on comparable profiles, or "comps" — profile your team, identify the roles you need to fill, and then recruit engineers that exhibit the strengths needed in those roles.
  • Improve your farm system — whether you use interns, contract-to-perm, or you promote from within, gather metrics on the participants and then target them for appropriate roles.
  • Make trades — re-organize teams internally to fill roles and balance strengths.
  • Coach the skills you need — identify those engineers with aptitude to fill specific roles and use metrics to set goals for development.

Blend metrics and experience

One mistaken conclusion that some people might reach in watching the movie "Moneyball" is that statistical analysis has completely supplanted experience and subjective analysis. That is far from the truth. The most effective sports organizations have added advanced statistical analysis as a way to uncover hidden opportunities and challenge assumptions, but they mix that with the personal observations of experienced individuals. Both sides factor into decisions. For example, a sports team is more likely to draft a new player if both the statistical analysis of that player and the in-person observations of experienced scouts identify the player as a good prospect.

In the same way, you can mix metrics-based analysis with experience and best practices in software engineering. Like sports teams, you can use metrics to identify underappreciated skills, identify opportunities for improvement, or challenge assumptions about team strategies and processes by measuring results. But such analysis is also limited and therefore should be balanced by experience and personal observation. By employing metrics in a balanced way, you can also alleviate concerns that people may have about metrics turning into some type of divisive grading system to which everyone is blindly subjected.

In conclusion, we shouldn't ignore the ideas in "Moneyball" because it's "just sports" (and now a Hollywood movie) and we are engineers. There are many smart ideas and a lot of smart people who are pioneering new ways to analyze individuals and teams. The field of software engineering has team-based dynamics similar to sports teams. Using similar techniques, with metrics-based analysis augmenting our experience and best practices, we have the same opportunity to develop new management techniques that can result in stronger teams.

Jonathan Alexander discussed the connection between "Moneyball" and software teams in a recent webcast.

Photo: TechEd 617 by betsyweber, on Flickr


August 23 2011

Is your Android app getting enough sleep?

Android developers have unprecedented access to smartphone resources, but this access carries responsibility. The potential to unwisely hog available resources, like power consumption, is a real concern.

I recently spoke with Frank Maker, a Ph.D. candidate at the University of California, Davis, who is doing systems research with Android on low-power adaptation techniques. Our interview follows.

What are the main power consumption issues Android developers should be aware of?

Frank Maker: The number one thing developers should do with their app is allow it to sleep. On a desktop computer, you just launch the app. It runs. You close it. You open it. But on an Android device, you actually go through states where it has focus, it doesn't have focus, things like that. A developer needs to handle those states appropriately instead of trying to make an app work like it would on a desktop. So, if the activity loses focus and the user is going on to another application, then your app should respond to that if it has anything that's going to use serious power. You can bundle up the state and put the app to sleep rather than letting your app's tasks run in the background.

What are the risks associated with poor power optimization?

Frank Maker: The biggest risk is that you'll get eaten alive in the Android Market. There are many applications in the Market that are marked one star because of bad battery performance. Even worse, if your app has a reputation for poor power use, you're going to completely lose your market share to competitors with better reputations.

On the user side, are there tools for monitoring and managing power consumption?

Frank Maker: There's a battery usage screen on Android. If you go to "Settings," "Application Settings," and then "Battery Use," there's a nice little display that shows you the ranking of the different applications. I've looked into it as part of our research, and while it's simple and not super accurate, it will definitely tell you if one app is using way too much power.

Battery life basically isn't a problem until you make it a problem. By that I mean it's not something users think about initially, but if their battery starts going down really fast and they just installed your app, they're generally going to put two and two together. They'll then go to the Market and see if other people have posted about their experiences with that app.

Tell us a little bit about the research you're doing at UC Davis. What exactly are you working on?

Frank Maker: I sit on the border between hardware and software. We're trying to solve the problem of taking one piece of software and putting it on lots of different platforms without changing the software each time. Right now there's a lot of refactoring that goes on when you deploy an application to a new mobile platform. We are trying to create automatic methods instead. One of the things we've looked at is building a model for the power usage in real-time on the device. So the idea is that when you launch the software on the device, you figure out how much energy it takes to do tasks on the device, rather than trying to know it beforehand.

Related to porting across platforms, do you think Android's fragmentation issues will improve over time?

Frank Maker: I'm pretty confident that the Android platform will solve the fragmentation problem. Google is addressing it. For example, on the Android developer blog, Dianne Hackborn recently wrote about new tools that help you adapt to different screen sizes. There's also the battery usage meter, which I mentioned earlier, and Android has always provided a directory structure for your resources to adapt to different configurations. You can automatically select the right resource for the right device and pinpoint exactly what you want to do for each device.

Broadly, I feel like Android is in a transition period. It's evolving from a young, scrappy platform into something more established, and now it has to really solve these issues as it moves into adolescence.

This interview was edited and condensed.

Android Open, being held October 9-11 in San Francisco, is a big-tent meeting ground for app and game developers, carriers, chip manufacturers, content creators, OEMs, researchers, entrepreneurs, VCs, and business leaders.

Save 20% on registration with the code AN11RAD





August 08 2011

Mobile metrics: Like the web, but a lot harder

Flurry is a mobile analytics company that currently tracks more than 10,000 Android applications and 500 million application sessions per day. Since Flurry supports all the major mobile platforms, the company is in a good position to comment on the relative merits and challenges of Android development.

I recently spoke with Sean Byrnes (@FlurryMobile), the CTO and co-founder of Flurry, about the state of mobile analytics, the strengths and weaknesses of Android, and how big a problem fragmentation is for the Android ecosystem. Byrnes will lead a session on "Android App Engagement by the Numbers" at the upcoming Android Open Conference.

Our interview follows.


What challenges do mobile platforms present for analytics?

Sean Byrnes: Mobile application analytics are interesting because the concepts are the same as traditional web analytics but the implementation is very different. For example, with web analytics you can always assume that a network connection is available since the user could not have loaded the web page otherwise. A large amount of mobile application usage happens when no network is available because either the cellular network is unreliable or there is no Wi-Fi nearby. As a result, analytics data has to be stored locally on the device and reported whenever a network is available, which can be weeks after the actual application session. This is complicated by the fact that about 10% of phones have bad clocks and report dates that in some cases are decades in the future.

Another challenge is that applications are downloaded onto the phone as opposed to a website where the content is all dynamic. This means that you have to think through all of the analytics you want to track for your application before you release it, because you can't change the tracking points once it's downloaded.
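Byrnes' two constraints suggest a simple client-side design: queue events locally, flush only when a network check passes, and flag timestamps from obviously wrong clocks so the server can fall back to receipt time. A minimal sketch follows (hypothetical API, not Flurry's SDK):

```javascript
// Sketch of client-side event buffering (hypothetical API, not any
// real SDK): queue events while offline, flush when a network check
// passes, and flag timestamps from clocks set far in the future.
const MAX_CLOCK_SKEW_MS = 365 * 24 * 60 * 60 * 1000; // >1 year ahead = bad clock

function createTracker(send, isOnline) {
  const queue = [];
  return {
    track(name, timestamp) {
      // A device clock "decades in the future" gets flagged so the
      // server can substitute receipt time for the reported one.
      const suspectClock = timestamp - Date.now() > MAX_CLOCK_SKEW_MS;
      queue.push({ name, timestamp, suspectClock });
      this.flush();
    },
    flush() {
      if (!isOnline()) return;           // no network: keep buffering locally
      while (queue.length) send(queue.shift());
    },
    pending() { return queue.length; }
  };
}

// Simulate a session recorded offline and reported later.
let online = false;
const sent = [];
const tracker = createTracker(e => sent.push(e), () => online);
tracker.track('session_start', Date.now());
console.log(tracker.pending()); // 1: buffered, no network yet
online = true;
tracker.flush();
console.log(sent.length); // 1: reported once a network was available
```

Note how the second constraint (tracking points are fixed once the app ships) shows up implicitly: everything the server will ever know about an event must be captured at `track()` time.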

What metrics are developers most interested in?

Sean Byrnes: Developers are typically focused on metrics that either make or save them the most money. Typically these revolve around retention, engagement and commerce.

One of the interesting things about the mobile application market is how the questions developers ask are changing because the market is maturing so quickly. Until recently, the primary metrics used to measure applications were the number of daily active users and the total time spent in the app. Those metrics help you understand the size of your audience, and they're important when you're focusing on user acquisition. Now the biggest questions center on user retention and the lifetime value of a user. This is a natural shift in focus as applications begin to focus on profitability and developers need to understand how much money they make from users compared to their acquisition cost. For example, if my application has 100,000 daily active users but the average user opens the application only once, I have a large audience with very low retention, and that audience is very expensive to maintain.
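The back-of-the-envelope comparison behind this shift is simple: a user is worth acquiring only if lifetime value exceeds acquisition cost. An illustrative sketch, with entirely made-up numbers:

```javascript
// Illustrative arithmetic only; all numbers are hypothetical.
// A user is worth acquiring only if lifetime value (revenue per
// session times expected sessions) exceeds acquisition cost.
function lifetimeValue(revenuePerSession, expectedSessions) {
  return revenuePerSession * expectedSessions;
}

const acquisitionCost = 1.5; // hypothetical cost per install, in dollars

// A user who opens the app once barely earns anything:
console.log(lifetimeValue(0.02, 1) < acquisitionCost);   // true: a loss
// A retained user who returns 200 times more than pays back:
console.log(lifetimeValue(0.02, 200) > acquisitionCost); // true: profitable
```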

Social game developers are among the most advanced and scientific when it comes to measurement. Zynga, for example, measures every consumer action in impressive detail, all for the purpose of maximizing engagement and monetization. They consider themselves a data company as much as a game maker.


What do you see as Android's biggest strengths and weaknesses?

Sean Byrnes: Android's biggest strength is its adoption and support by so many phone manufacturers, and the fact that it's relatively open, which means development iterations can happen faster and you have the freedom to experiment. The Android installed base is now growing faster than iOS's, although the iOS installed base is still larger.

Android's biggest weakness is that it offers less business opportunity to developers. Fewer Android consumers have credit cards associated with their accounts, and they expect more apps to be free. The most common business model on Android is still built on ad revenue. Also, because the Android Market is not curated, average app quality is lower, which further reduces consumer confidence. At the end of the day, developers want a business, not an OS, and they need a marketplace that brings in consumers who are willing and able to pay. This is Android's biggest challenge at the moment.

Is Android fragmentation a problem for developers?

Sean Byrnes: Fragmentation is definitely a problem for Android developers. It's not just the sheer number of new Android devices entering the market, but also the speed at which new versions of the Android OS are being released. As a developer you have to worry about handling both a variety of hardware capabilities and a variety of Android OS capabilities for every application version you release. It's difficult to be truly innovative and take advantage of advanced features since the path of least resistance is to build to the lowest common denominator.

Given the current roadmap for Android OS updates and the number of devices coming to market, it will likely get worse before it gets better. However, fragmentation will become less of a concern as developers make more money on Android and the cost of supporting a device becomes insignificant compared to the revenue it can generate.





July 22 2011

Top stories: July 18-22, 2011

Here's a look at the top stories published across O'Reilly sites this week.


Google+ is the social backbone
Google+ is the rapidly growing seed of a web-wide social backbone, and the catalyst for the ultimate uniting of the social graph.
Intellectual property gone mad
Patent trolling could undermine app ecosystems, but who can mount a legitimate challenge? Here are four potential solutions.
Software engineering is a team sport: How programmers can deal with colleagues and non-programmers
Ben Collins-Sussman, tech lead and manager at Google, and Brian Fitzpatrick, engineering manager at Google, explain the "art of mass organizational manipulation."
FOSS isn't always the answer
James Turner says the notion that proprietary software is somehow dirty or a corruption of principles ignores the realities of competition, economics, and context.


Emerging languages show off programming's experimental side
Alex Payne, organizer of OSCON's Emerging Languages track, discusses language experimentation and whether these efforts are evolutionary or revolutionary.

Rugby photo: Scrum by MontyPython, on Flickr; Open sign photo: open by tinou bao, on Flickr




OSCON Java 2011, being held July 25-27 in Portland, Ore., is focused on open source technologies that make up the Java ecosystem. Save 20% on registration with the code OS11RAD



July 21 2011

FOSS isn't always the answer

There's been some back and forth between various members of the technical press about whether the open source movement has lost its idealism, and the relative virtues of shunning or accepting proprietary software into your life. The last thing to cross my screen was an essay by Bruce Byfield that includes this gem:

In my mind, to buy and use proprietary products, except in the utmost necessity is something to be ashamed about. And that's how I've felt the few times I've bought proprietary myself. It seems an undermining of FOSS' efforts to provide an alternative.

Over the years, I've become less patient with the stridency of the FOSS movement, or at least with some of its more pedantic wings. It is certainly not my place to tell anyone that they should buy or not buy any kind of software. However, the repeated assertions by members of the FOSS movement that proprietary software is somehow dirty or a corruption of principles have begun to stick in my craw.

There are plenty of places where FOSS makes all the sense in the world, and those are the places where FOSS has succeeded. No one uses a closed source compiler anymore, Eclipse is one of the leading IDEs for many languages, and Linux is a dominant player in embedded operating systems. All these cases succeeded because, largely, the software is secondary to the main business of the companies using it (the major exception being Linux vendors who contribute to the kernel, but they have a fairly unusual business model).

Where FOSS breaks down pretty quickly is when the software is not a widely desired tool used by the developer community. Much as you can quickly get to Wikipedia's philosophy page by repeatedly clicking the first link of articles, you can quickly arrive at the need for a utopian society once you follow the chain of assumptions required to make consumer-level FOSS work.

The typical line of thought runs like this: Let's say we're talking about some truly boring, intricate, detail-laden piece of software, such as something to transmit dental billing records to insurers (this type of software is going to be required under the new health care laws). Clearly, people don't write these things for kicks, and your typical open source developer is unlikely to be either a dentist or a dental billing specialist.

OSCON 2011 — Join today's open source innovators, builders, and pioneers July 25-29 as they gather at the Oregon Convention Center in Portland, Ore.

Save 20% on registration with the code OS11RAD

So, if all software should be free and open source, who is going to write this code? One argument is that the dentist, or a group of dentists, should underwrite the production of the code. But dentistry, like most things in western society, tends to be a for-profit competitive enterprise. If everyone gets the benefit of the software (since it's FOSS), but a smaller group pays for it, the rest of the dentists get a competitive advantage. So there is no incentive for a subset of the group to fund the effort.

Another variant proposes that the software will be developed and given away, and the developers will make their living by charging for support. Setting aside the cynical observation that this would be a powerful incentive to write hard-to-use software, it also suffers from a couple of major problems. To begin with, software this complex might take a team of 10 people a year or more to produce. Unless they are independently wealthy, or already have a pipeline of supported projects, there's no way they can pay for food (and college!) while they create the initial product.

And once they do, the source is free and available to everyone, including people who live in areas of the world with much lower costs (and standards) of living. What is going to stop someone in the developing world from stepping in and undercutting support prices? It strikes me as an almost automatic race to the bottom.

And this assumes that software development is the only cost. Let's think about a game such as "Portal 2" or one of the "Call of Duty" titles. They have huge up-front costs for actors, motion capture, and so on. And they have very little in the way of potential revenue from support, as well. So do the FOSS proponents believe that modern computer games are all evil, and we should all go back to "NetHack" and "Zork"?

Let me be clear here: I am in no way trying to undermine anyone who wants to develop or use FOSS. But I have spent most of my adult life writing proprietary software — most of it so specialized and complicated that no open source project would ever want to take it on — and I find the implication that the work I do is in some way cheap or degrading to be a direct insult. When it has made sense, I have contributed work to open source projects, sometimes to a significant degree. But when it made sense, and when not, should remain my choice.

In many ways, the web is a perfect example of the marketplace of ideas. No one knows (or in most cases, cares) whether the technology under the covers is FOSS or proprietary. Individuals make the same measured decisions when selecting software for personal or business use. If a FOSS package meets all the requirements, it tends to be selected. If it suffers in comparison to proprietary counterparts, it may still be selected if the need to modify or extend the package is important, or if the price differential is just too hard to justify. But in many cases, proprietary software fills niches that FOSS software does not. If individual activists want to "wear a hair shirt" and go without functionality in the name of FOSS, that's their decision. But I like linen, thank you.

If there are people out there who are willing to engage in a reasoned, non-strident discussion of this issue, I'd love to talk it out. But they need to accept the ground rules that most of us live in a capitalist society, we have the right to raise and provide for a family, and that until we all wake up in a FOSS developer's paradise, we have to live and work inside of that context. Drop me a note or post a comment. I'd love to hear how a proprietary-free software world could work.





June 23 2011

Scale your JavaScript, scale your team

Building JavaScript applications that will scale smoothly can be challenging — in terms of design, in terms of technology, and in terms of handling large teams of developers with lots of hands in the code. In the following interview, "High Performance JavaScript" author and OSCON speaker Nicholas Zakas discusses the issues that pop up when you build big JavaScript apps with big teams.

What are some of the challenges attached to building large-scale JavaScript apps with a large team?

Nicholas Zakas: The biggest challenge is making sure everyone is on the same page. In a lot of ways, you want the entire team to work as a single engineer, meaning that every engineer needs to be committed to doing things the same way. Code conventions are vitally important, as are code reviews, to make sure everyone understands not only their role but the roles of everyone else on the team. At any point in time, you want any engineer to be able to step in and pick up another engineer's work without needing to figure out his or her personal style first.

How do you design an app that balances responsiveness and structure?

Nicholas Zakas: As they say, the first step in solving a problem is admitting you have one. The key is to acknowledge from the start that you have no idea how this will grow. When you accept that you don't know everything, you begin to design the system defensively. You identify the key areas that may change, which often is very easy when you put a little bit of time into it. For instance, you should expect that any part of the app that communicates with another system will likely change, so you need to abstract that away. You should spend most of your time thinking about interfaces rather than implementations.
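One way to read Zakas' advice in code: hide anything that talks to another system behind a small interface, so the implementation can change without rippling through the app. A minimal sketch with hypothetical names:

```javascript
// Hypothetical sketch: callers depend on a small interface
// (fetchUser), not on how the data travels, so swapping transports
// (XHR, WebSocket, a test stub) touches one module, not the app.
function createUserService(transport) {
  return {
    fetchUser(id) {
      return transport.get('/users/' + id);
    }
  };
}

// A stub transport, as you might use in tests or before the real
// backend exists.
const stubTransport = {
  get(path) {
    return { path: path, name: 'stub user' };
  }
};

const users = createUserService(stubTransport);
console.log(users.fetchUser(42).name); // "stub user"
```

Swapping `stubTransport` for a real network layer changes nothing in the callers, which is exactly the "interfaces rather than implementations" discipline described above.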

OSCON JavaScript and HTML5 Track — Discover the new power offered by HTML5, and understand JavaScript's imminent colonization of server-side technology.

Save 20% on registration with the code OS11RAD

What are the challenges specific to JavaScript?

Nicholas Zakas: The "shared everything" nature of the language is the biggest JavaScript challenge. You can set up a system that is well designed, and an inexperienced engineer might come in and — accidentally or not — change some key piece of functionality. ECMAScript 5 helps with some of that by allowing you to lock down modification of objects, but the nature of the language remains largely the same and problematic.
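The ECMAScript 5 feature Zakas alludes to is Object.freeze, which makes accidental modification fail loudly in strict mode instead of silently corrupting shared state:

```javascript
'use strict';
// ES5's Object.freeze locks an object against modification. In
// strict mode the bad write throws a TypeError, so the mistake
// surfaces immediately instead of silently changing shared state.
const config = Object.freeze({ endpoint: '/api', retries: 3 });

let threw = false;
try {
  config.retries = 99; // TypeError in strict mode
} catch (e) {
  threw = true;
}
console.log(threw, config.retries); // true 3
```

Outside strict mode the same write fails silently, which is arguably worse; freezing helps either way, but only strict mode turns the mistake into an error.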

Do lessons applicable to large teams also apply to smaller teams?

Nicholas Zakas: Small teams still need to build scalable solutions because hopefully, one day, the team will grow larger. Your system should work for any number of engineers. The proof of a good design is being able to move seamlessly from five engineers to 10 and then more without needing to fundamentally change the design.

This interview was edited and condensed.

Associated photo on home and category pages: Scaffolding: Not just for construction workers anymore by kevindooley, on Flickr


June 09 2011

JavaScript spread to the edges and became permanent in the process

At some point, when nobody was looking, JavaScript evolved from humble beginnings to become an important and full-fledged development language.

James Duncan (@jamesaduncan), the chief architect at Joyent, is one of the people who's now putting JavaScript in unexpected places. In his case, it's using JavaScript as a web server development language through the Node.js platform.

In the following interview, Duncan shares his thoughts on JavaScript's growth and how we came to depend so heavily on the language.

For a lot of us who have been in the industry for a while, JavaScript has always seemed like a toy language. When did that change?

James Duncan: The big change started when Google launched Maps. All of a sudden, there was this highly interactive, highly usable, fun-to-play-with web interface. It just blew away almost every other map provider's offering, and it was all constructed in JavaScript. Everyone sat up and took notice.

What's happening now is that JavaScript is installed on millions and millions of edges of our network. If you think of every web-accessing device as an edge on the network, then JavaScript is in all of them. Every web browser has it, every smartphone has it.

In the same way that C has become permanent at the systems level, JavaScript is on so many edges of the network that it really can never be eliminated. For better or worse, JavaScript will be there in 15 years.

Save 50% - JavaScript Ebooks and Videos

JavaScript is everywhere: servers, rich web client libraries, HTML5, databases, even JavaScript-based languages. If you've avoided JavaScript, this is the year to learn it. And if you don't, you risk being left behind. Whatever your level, we have you covered:

Introductory  /  Intermediate  /  Advanced

One week only—offer expires 14 June. Use discount code HALFD in the shopping cart. Buy now and SAVE.



There is still a lingering perception that JavaScript isn't a first-class programming language, though. Some vendors that use it for scripting in their products pitch it as an easier language to use.


James Duncan: I've long called the JavaScript programming community the "dark development team in the sky" because JavaScript requires a skill set that no one admits to having, but everyone actually has. I can use this same positioning to explain why it's important: You walk into an enterprise and say, "Why are you doing all of this high-ceremony .NET/Objective C-style development? It's just not necessary." Because you can just do it in JavaScript, you already have the skills. That makes it an easier sell.

I don't think JavaScript is any less capable than any other language, per se. It's just that what goes into that JavaScript stack is very much defined by what it's embedded into. Because it originates in the browser, we think of it as being easy and unsophisticated, but if you dive down, it absolutely is sophisticated, capable and powerful. It's just that our mindset — and certainly the corporate mindset — has been that JavaScript is only used for making pop-ups in web pages.

In terms of the browser, some of us remember the nightmares of cross-browser JavaScript compatibility issues. Are they still around? And is HTML5 going to be the next iteration of the same problems?

James Duncan: You're right, the browser environment is a nightmare. That's becoming less true, largely because of libraries like jQuery and Prototype. Those libraries have improved circumstances so that, for the most part, if someone's writing code in jQuery and really sticking to jQuery, they're probably not going to have many problems anymore. Or if they do run into issues, they're either impossible to fix because something is fundamentally broken or the fix is trivial. There doesn't seem to be much in between.

HTML5 is fascinating. If you look at things like WebSocket and the Web Worker specs, they seem to be implemented pretty uniformly from browser to browser. So I think it's getting better. The browser vendors have learned their lesson: Trying to fight the standard isn't sensible. Even Microsoft with IE has gotten much better with this — they've got a long way to go, but they're much better than they were.

We're currently seeing somewhat of an arms race regarding JavaScript execution speed. Is there an end in sight?

James Duncan: The cool thing is that we're just at the tip of the iceberg. There's no doubt we're going to see huge performance improvements, which we've already seen in part over the last year. It's going to plateau eventually, but I think there is still room for it to get faster and faster. We've had decades of compiler research going into making the fastest or most optimizing C compiler possible. That sort of research spend has not happened for dynamic languages in non-academic environments before, with the possible exception of Smalltalk, but it's happening now around JavaScript. So you're still going to see performance improvements.

A hot topic in computing is parallel programming in languages such as Erlang. Is JavaScript going to join the party?

James Duncan: Node, in some ways, has a similar view: it's all asynchronous programming, all asynchronous I/O. Where it steps away from what Erlang is doing is that it explicitly says, "You shall run as one process." You get one CPU to play with. What we're going to see inside the Node space is some capability to have messaging between CPU cores and, therefore, between Node processes running on different CPU cores. I don't know if anyone's going to go to the same extreme as Erlang, but some of the things Node is doing are similar.

What would you like to see changed in JavaScript?

James Duncan: For me, the two points are: One, don't break the web, and two, it would be great to see some sort of concrete class system. That's not to say I dislike the prototype-based object orientation within JavaScript, but I think a lot of people get confused by its capabilities. Rather than understanding it as sort of a liberating thing, they see it as a limiting thing. Putting some syntactic sugar around classes would go a long way toward easing people into the language from other sources.
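The prototype-based idiom Duncan refers to looks like the sketch below: it behaves like a class, but the shape is unfamiliar to developers arriving from class-based languages, which is why syntactic sugar over it eases people in:

```javascript
// Pre-ES6 prototype idiom: behavior lives on the constructor's
// prototype and is shared by every instance. It acts like a class
// but reads unfamiliar to developers from class-based languages.
function Point(x, y) {
  this.x = x;
  this.y = y;
}
Point.prototype.norm = function () {
  return Math.sqrt(this.x * this.x + this.y * this.y);
};

const p = new Point(3, 4);
console.log(p.norm());                                     // 5
console.log(Object.getPrototypeOf(p) === Point.prototype); // true
```

ECMAScript later added exactly this sugar: an ES6 `class Point { norm() { ... } }` desugars to the same constructor-plus-prototype structure shown here.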

This interview was edited and condensed.

Associated photo on home and category pages: Node Globe Pics by Dan Zen, on Flickr


Learn about Node.js this July at the O'Reilly Open Source Convention in Portland, Oregon. For a comprehensive orientation to Node, attend Monday's Node tutorials and then join us all day Tuesday at Node Day for total immersion in the people and technologies making up the Node ecosystem. Save 20% on registration with the code OS11RAD






June 03 2011

The ascendance of App Inventor

Google's App Inventor began as an educational tool to teach people fundamental programming concepts by helping them easily create simple Android apps. But the response — many were excited by the idea of making an app without needing much programming knowledge — turned App Inventor into something more. The App Inventor development team realized they had gained a new audience of hobbyists and creators, and so subsequent App Inventor updates have transformed it from a "toy" educational system into a more robust tool that lets you build more complex Android apps.

David Wolber (@wolberd), a professor at the University of San Francisco, has been teaching students how to use App Inventor. Based on his course materials, he co-authored the book "App Inventor." He also provides feedback and suggestions on the continuing development of the App Inventor kit to the project's lead, Mark Friedman, and creator, Hal Abelson (who is also a co-author of the "App Inventor" book).

In the following interview, Wolber discusses App Inventor's evolution and its applications.


What are App Inventor's major strengths and weaknesses, technically speaking?

David Wolber: I've taught beginning programming for years and there are a lot of smart, creative students who get bogged down with the syntax and cryptic error messages you get when you type in code. They come to computer science excited about technology and immediately face these weird, frustrating mechanics.

With App Inventor, beginners can build really cool apps within hours. Instead of typing code, you plug puzzle pieces together. They only fit together if they're supposed to. Instead of remembering and typing commands, you choose from the high-level functions (puzzle pieces) provided. And your first programs aren't printing "hello world" on the screen — they're reading the GPS sensor or sending a text to a group of friends.

App Inventor includes high-level function blocks, but there are also all the programming constructs you find in a textual language: if statements, while loops, etc. There aren't blocks for every conceivable Android function, so there are some limitations, but the limitations aren't inherent and the App Inventor team adds new blocks all the time.

Not long ago, only highly technical people could set up blogs and wikis. Then tools like WordPress, Blogger and Google Sites changed things so that anyone could create a blog or wiki. I see App Inventor doing the same thing: it's going to allow all types of people, not just the techies, to create apps for this incredible new mobile world.

If I weren't so busy teaching, I'd start a prototyping and mobile app company based on App Inventor right now. The company would take in ideas and use App Inventor to turn them into interactive, working prototypes within hours. App Inventor is that good, and turn-around time is that fast. I wouldn't be restricted to hiring only experienced, expert programmers.

What's the most common mistake people make when using App Inventor for the first time?

David Wolber: Probably the most common mistake that beginning programmers make, with any language, is to have long coding cycles without testing. I'll see a student with 30 or 40 App Inventor blocks in their app and I'll ask them, "How's it going? Does it work?" and they'll answer, "I think, but I haven't tried it." Even with App Inventor, this "big bang" approach is a strategy that leads to big bugs.

The book addresses software engineering principles: testing your app early and often being a key one. The great thing is that App Inventor provides fantastic tools for doing this — you can plug your phone into your computer and literally test your app as you build it.

App Inventor — Learn the basics of App Inventor with step-by-step instructions for more than a dozen fun projects, such as creating location-aware apps, data storage, and apps that include decision-making logic.

How much of a gap is there between the skills acquired through App Inventor and the skills needed to develop apps professionally?

David Wolber: With App Inventor you'll learn the key programming concepts — conditionals, iteration, functions, parameters, etc. — just without some of the headaches of a text-based language.

I've had a number of students transition from my App Inventor course to a course based on Python and they've thrived. It helps to have a firm basis on the concepts, and because they've had success and built some cool apps with App Inventor, they're motivated and ready to tackle programming with a text-based language.

App Inventor also provides an introduction to the Android operating system. You're not going to learn the same exact commands that are available with the Android SDK, but you'll get a feel for how things work, which will help if you do progress to that level.

If you're already an experienced programmer, you should know that App Inventor won't really help you learn the Android SDK. App Inventor doesn't yet generate code, so you can't go to the SDK level if you reach a limitation.

Having said that, I encourage established programmers to learn App Inventor. It's a great tool to have in your arsenal, especially if you believe in user-centered design and getting feedback early in the development process. There is simply no faster way of creating a mobile app than with App Inventor.

What types of apps work best with App Inventor?

David Wolber: The most exciting apps that App Inventor facilitates are those that take advantage of the technology inside the Android device. In the book we show you how to create social apps that process SMS texts: you can build a "No Text While Driving" app that automatically responds to texts that come in and speaks them aloud using Android's speech synthesizer. You can also build a "Broadcast Hub," an SMS texting app that a group in Helsinki customized for event communication with a group of more than a thousand people. And you can build "Android Where's My Car?," an app that remembers a location for you and gives you a map later so you can get back there. These apps are easy to create in App Inventor because there are high-level blocks for all that technology.

App Inventor is also great for creating games and educational software. One of the first apps students in my class build is "MoleMash," but they personalize it, so they're mashing a picture of a relative or teacher. Later, they build quiz apps and study guides, which come out great because it's easy to add video and sound.

Which App Inventor-created apps have surprised you?

David Wolber: The app that surprised me most was one a man made to propose to his girlfriend. She's a Harry Potter fan, so he made it seem like a Harry Potter quiz with videos and images he clipped in. She had no idea it was a "personal" app, until the last question was "will you marry me?" with their favorite song playing. I liked it because it illustrates a key to App Inventor: you can build apps with very personal utility.

Will App Inventor apps eventually become widely adopted applications?

David Wolber: The App Inventor team hasn't yet made it easy to deploy App Inventor apps to the Android Market — you have to jump through some hoops. But this limitation will go away, and you'll be able to deploy with a single click.

The potential for building widely adopted apps is definitely there, and the App Inventor team is busy adding more functionality so you can program anything with it that you could do with the Android SDK. They're also working on a "Component Designer" so programmers, using the SDK, can add new components and blocks to App Inventor. I have no doubt App Inventor will be used to build apps for the Market, and will be a common part of the app building arsenal.

The other interesting development is that an App Inventor gallery is being built, which is sort of an open-source alternative to the Android Market. The gallery will be a place where people upload the apps they've built, including the source code (source blocks). So you'll be able to download apps, and if there's something you don't like or want to personalize, you can download the blocks and change them with App Inventor. It will be the world's first "customizable app store." App Inventor and the gallery will definitely lead to apps that are widely used.

Android Open, being held October 9-11 in San Francisco, is a big-tent meeting ground for app and game developers, carriers, chip manufacturers, content creators, OEMs, researchers, entrepreneurs, VCs, and business leaders.

Save 20% on registration with the code AN11RAD


