
December 09 2013

Who will upgrade the telecom foundation of the Internet?

Readers of this blog know quite well the role that the Internet can play in our lives, but we may forget that its most promising contributions — telemedicine, the smart electrical grid, distance education, etc. — depend on a rock-solid and speedy telecommunications network, and that relatively few people can therefore take advantage of the shining future the Internet offers.

Worries over sputtering advances in bandwidth in the US, as well as an actual drop in reliability, spurred the FCC to create the Technology Transitions Policy Task Force, and to drive discussion of what they like to call the “IP transition”.

Last week, I attended a conference on the IP transition in Boston, one of a series being held around the country. While we tussled with the problems of reliability and competition, one urgent question loomed over the conference: who will actually make advances happen?

What’s at stake and why bids are coming in so low

It’s not hard to tally up the promise of fast, reliable Internet connections. Popular futures include:

  • Delivering TV and movie content on demand
  • Checking on your lights, refrigerator, thermostat, etc., and adjusting them remotely
  • Hooking up rural patients with health care experts in major health centers for diagnosis and consultation
  • Urgent information updates during a disaster, to aid both victims and responders

I could go on and on, but already one can see the outline of the problem: how do we get there? Who is going to actually create a telecom structure that enables everyone (not just a few privileged affluent residents of big cities) to do these things?

Costs are high, but the payoff is worthwhile. Ultimately, the applications I listed will lower the costs of the services they replace or improve life enough to justify an investment many times over. Rural areas — where investment is currently hardest to get — could probably benefit the most from the services because the Internet would give them access to resources that more centrally located people can walk or drive to.

The problem is that none of the likely players can seize the initiative. Let’s look at each one:

Telecom and cable companies
The upgrading of facilities is mostly in their hands right now, but they can’t see beyond the first item in the previous list. Distributing TV and movies is a familiar business, but they don’t know how to extract value from any of the other applications. In fact, most of the benefits of the other services go to people at the endpoints, not to the owners of the network. This has been a sore point with the telecom companies ever since the Internet took off, and it spurs their constant attempts to hold Internet users hostage and shake them down for more cash.

Given the limitations of the telecom and cable business models, it’s no surprise they’ve rolled out fiber in the areas they want and are actually disinvesting in many other geographic areas. Hurricane Sandy brought this to public consciousness, but the problem has been mounting in rural areas for some time.

Angela Kronenberg of COMPTEL, an industry association of competitive communications companies, pointed out that it’s hard to make a business case for broadband in many parts of the United States. We have a funny demographic: we’re not as densely populated as the Netherlands or South Korea (both famous for blazingly fast Internet service), nor as concentrated as Canada and Australia, where it’s feasible to spend a lot of money getting service to the few remote users outside major population centers. There’s no easy way to reach everybody in the US.

Governments
Although governments subsidize network construction in many ways — half a dozen subsidies were reeled off by keynote speaker Cameron Kerry, former Acting Secretary of the Department of Commerce — such stimuli can only nudge the upgrade process along, not control it completely. Government funding has certainly enabled plenty of big projects (Internet access is often compared to the highway system, for instance), but it tends to go toward familiar technologies that the government finds safe, and therefore misses opportunities for radical disruption. It’s no coincidence that these safe, familiar technologies are provided by established companies with lobbyists all over DC.

As an example of how help can come from unusual sources, Sharon Gillett mentioned on her panel the use of unlicensed spectrum by small, rural ISPs to deliver Internet to areas that otherwise had only dial-up access. The FCC ruling that opened up “white space” spectrum in the TV band to such use has greatly empowered these mavericks.

Individual consumers
Although we are the ultimate beneficiaries of new technology (and will ultimately pay for it somehow, through fees or taxes), hardly anyone can plunk down the cash for it in advance: the vision is too murky and the reward too far down the road. John Burke, Commissioner of the Vermont Public Service Board, flatly said that consumers choose their phone service almost entirely on the basis of price and don’t really find out about its reliability and features until later.

Basically, consumers can’t bet that all the pieces of the IP transition will fall in place during their lifetimes, and rolling out services one consumer at a time is incredibly inefficient.

Internet companies
Google Fiber came up once or twice at the conference, but its initiatives are just a proof of concept. Even if Google became the lynchpin it wants to be in our lives, it would not have enough funds to wire the world.

What’s the way forward, then? I find it in community efforts, which I’ll explore at the end of this article.

Practiced dance steps

Few of the insights in this article came up directly in the Boston conference. The panelists were old hands who had crossed each other’s paths repeatedly, gliding between companies, regulatory agencies, and academia for decades. At the conference, they pulled their punches and hid their agendas under platitudes. The few controversies I saw on stage seemed to be launched for entertainment purposes, distracting from the real issues.

From what I could see, the audience of about 75 people came almost entirely from the telecom industry. I saw just one representative of what you might call the new Internet industries (Microsoft strategist Sharon Gillett, who went to that company after an august regulatory career) and two people who represent the public interest outside of regulatory agencies (speaker Harold Feld of Public Knowledge and Fred Goldstein of Interisle Consulting Group).

Can I get through to you?

Everyone knows that Internet technologies, such as voice over IP, are less reliable than plain old telephone service, but few realize how soon reliability of any sort will be a thing of the past. When a telecom company signs you up for a fancy new fiber connection, you are no longer connected to a power source at the telephone company’s central office. Instead, you get a battery that can last eight hours in case of a power failure. A local power failure may let you stay in contact with outsiders if the nearby mobile phone towers stay up, but a larger failure will take out everything.

These issues have a big impact on public safety, a concern raised at the beginning of the conference by Gregory Bialecki in his role as a Massachusetts official, and repeated by many others during the day.

There are ways around the new unreliability through redundant networks, as Feld pointed out during his panel. But the public and regulators must take a stand for reliability, as the post-Sandy victims have done. The issue in that case was whether a community could be served by wireless connections. At this point, they just don’t deliver either the reliability or the bandwidth that modern consumers need.

Mark Reilly of Comcast claimed at the conference that 94% of American consumers now have access to at least one broadband provider. I’m suspicious of this statistic because the telecom and cable companies have a very weak definition of “broadband” and may be including mobile phones in the count. Meanwhile, we face the possibility of a whole new digital divide consisting of people relegated to wireless service, on top of the old digital divide involving dial-up access.

We’ll take that market if you’re not interested

In a healthy market, at least three companies would be racing to roll out new services at affordable prices, but every new product or service must provide a migration path from the old ones it hopes to replace. Nowhere is this more true than in networks because their whole purpose is to let you reach other people. Competition in telecom has been a battle cry since the first work on the law that became the 1996 Telecom Act (and which many speakers at the conference say needs an upgrade).

Most of the 20th century accustomed people to thinking of telecom as a boring, predictable utility business, the kind that “little old ladies” bought stock in. The Telecom Act was supposed to knock the Bell companies out of that model and turn them into fierce innovators with a bunch of other competitors. Some people actually want to reverse the process and essentially nationalize the telecom infrastructure, but that would put innovation at risk.

The Telecom Act, especially as interpreted later by the FCC, fumbled the chance to enforce competition. According to Goldstein, the FCC decided that a duopoly (the Baby Bells and the cable companies) was enough competition.

The nail in the coffin may have been the FCC ruling that any new fiber providing IP service was exempt from the requirements for interconnection. The sleight of hand that the FCC used to make this switch was a redefinition of the Internet: they conflated the use of IP on the carrier layer with the bits traveling around above, which most people think of as “the Internet.” But the industry and the FCC had a bevy of arguments (including the looser regulation of cable companies, now full-fledged competitors of the incumbent telecom companies), so the ruling stands. The issue then got mixed in with a number of other controversies involving competition and control on the Internet, often muddled together under the term “network neutrality.”

Ironically, one of the selling points that helps maintain a competitive company, such as Granite Telecom, is reselling existing copper. Many small businesses find that the advantages of fiber are outweighed by the costs, which may include expensive quality-of-service upgrades (such as MPLS), new handsets to handle VoIP, and rewiring the whole office. Thus, Senior Vice President Sam Kline announced at the conference that Granite Telecom is adding a thousand new copper POTS lines every day.

This reinforces the point I made earlier about depending on consumers to drive change. The calculus that leads small businesses to stick with copper may be dangerous in the long run. Besides lost opportunities, it means sticking with a technology that is aging and decaying by the year. Most of the staff (known familiarly as Bellheads) who designed, built, and maintain the old POTS network are retiring, and the phone companies don’t want to bear the increasing costs of maintenance, so reliability is likely to decline. Kline said he would like to find a way to make fiber more attractive, but the benefits are still vaporware.

At this point, the major companies and the smaller competing ones are both cherry picking in different ways. The big guys are upgrading very selectively and even giving up on some areas, whereas the small companies look for niches, as Granite Telecom has. If universal service is to become a reality, a whole different actor must step up to the podium.

A beautiful day in the neighborhood

One hope for change is through municipal and regional government bodies, linked to local citizen groups who know where the need for service is. Freenets, which go back to 1984, drew on local volunteers to provide free Internet access to everyone with a dial-up line, and mesh networks have powered similar efforts in Catalonia and elsewhere. In the 1990s, a number of towns in the US started creating their own networks, usually because they had been left off the list of areas that telecom companies wanted to upgrade.

Despite legal initiatives by the telecom companies to squelch municipal networks, they are gradually catching on. The logistics involve quite a bit of compromise (often, a commercial vendor builds and runs the network, contracting with the city to do so), but many town managers swear that advantages in public safety and staff communications make the investment worthwhile.

The limited regulatory power that cities hold over cable companies (a control that is sometimes taken away) is a crude instrument, like a potter trying to manipulate clay with tongs. To craft a beautiful work, you need to get your hands right on the material. Ideally, citizens would design their own future. The creation of networks should involve companies and local governments, but also the direct input of citizens.

National governments and international bodies still have roles to play. Burke pointed out that public safety issues, such as 911 service, can’t be fixed by the market, and developing nations have very little fiber infrastructure. So, we need large-scale projects to achieve universal access.

Several speakers also lauded state regulators as the most effective centers to handle customer complaints, but I think the IP transition will be increasingly a group effort at the local level.

Back to school

Education emerged at the conference as one of the key responsibilities that companies and governments share. The transition to digital TV was accompanied by a massive education budget, but in my home town, there are still people confused by it. And it’s a minuscule issue compared to the task of going to fiber, wireless, and IP services.

I had my own chance to join the educational effort on the evening following the conference. Friends from Western Massachusetts phoned me because they were holding a service for an elderly man who had died. They lacked the traditional 10 Jews (the minyan) required by Jewish law to say the prayer for the deceased, and asked me to Skype in. I told them that remote participation would not satisfy the law, but they seemed to feel better if I did it. So I said, “If Skype will satisfy you, why can’t I just participate by phone? It’s the same network.” See, FCC? I’m doing my part.

October 09 2013

TERRA 821: Glass Half Full

Glass Half Full is the story of three Montana organizations that recycle glass. A city official, a passionate self-starter, and a dedicated craftsman take us through the possibilities that glass recycling has for a community. Produced by Abby Kent.

March 28 2013

How crowdfunding and the JOBS Act will shape open source companies

Currently, anyone can crowdfund products, projects, causes, and sometimes debt. Current U.S. Securities and Exchange Commission (SEC) regulations make crowdfunding companies (i.e., selling stock rather than products on crowdfunding platforms) illegal. The only way to sell stock to the public at large under current law is through the heavily regulated Initial Public Offering (IPO) process.

The JOBS Act will soon change these rules. This will mean that platforms like Kickstarter will be able to sell shares in companies, assuming those companies follow certain strict rules. This change in finance law will enable open source companies to access capital and dominate the technology industry. This is the dawn of crowdfunded finance, and with it comes the dawn of open source technology everywhere.

The JOBS Act is already law, and it required the SEC to create specific rules by specific deadlines. The SEC is working on the rulemaking, but it has made clear that, given the complexity of this new finance structure, meeting the deadlines is not achievable. No one is happy with the delay, but the rules should be done in late 2013 or early 2014.

Once those rules are in place, thousands of open source companies will use this financial instrument to create new types of enterprise open source software, hardware, and bioware. These companies will be comfortably funded by their open source communities. Unlike traditional venture-capital-backed companies, these new companies will focus narrowly on getting the technology right and putting their communities first. Eventually, I think these companies will make most proprietary software companies obsolete.

How are companies like Oracle, Apple, Microsoft, SAS and Cisco able to make so much money in markets that have capable commercial open source competitors? In a word: capital. These companies have access to guaranteed cash flows from locked-in users of their products.

Therefore, venture capital investors are willing to provide startup capital to new business only when they demonstrate the capacity for new lock-in. Investors that start technology companies avoid investments that do not trap their user bases. That means entrenched proprietary players frequently face no serious threats from open source alternatives. The result? Lots of lock-in and lots of customers trapped in long-term relationships with proprietary companies that have little motivation to treat them fairly.

The only real argument against business models that respect software freedom has always been about access to capital. Startups are afraid to release under a FOSS license because that decision limits their access to investment. Early-stage investors love to hear the words “proprietary,” “patent-pending” and “trade secret,” which they mentally translate into “exit strategy.” For these investors, trapping users is a hedge against their inability to evaluate early-stage technology startups. (I am sympathetic; predicting the future is hard.) As a result, most successfully funded technology startups are either proprietary, patented platforms or software-as-a-service (SaaS) platforms.

This is all going to change.

Crowdfunded finance is going to shift the funding of software forever, and it is going to create a new class of tech organization: freedom-first technology companies.

Now, open source projects will be able to seek and find crowds of investors from within their own communities. These companies will have both the traditional advantages of proprietary companies (well-capitalized companies recruit armies of competent programmers and sales forces that can survive long sales cycles) and the advantages of the open source development model (open code review and the ability to integrate the insights of outsiders).

Yesterday, it was a big deal if you could get Intel to invest in your company. Tomorrow, you will seek funding from 500 Intel employees, who are all better qualified to vet your technology startup than 90% of the people in Intel’s investment arm. These crowdfunders are also willing to make a decision to invest in six hours rather than six months.

For this reason, I believe a treasure trove of companies will soon be born out of open source/libre software/hardware/bioware projects that ask their communities to crowdfund their initial rounds of financing. Large community projects will give birth to one or several companies that are designed from the ground up to support those projects. GitHub and Thingiverse will become the new hubs for investors. Developers who have demonstrated competence in projects will be rewarded with access to financing that is both cheaper and faster than seed or angel funding.

With this fundamental change in incentive structures, open source projects will have the capital they need to try truly radical approaches to the design of their projects. Currently, open source projects have to choose between begging for capital or living without that capital. Many open source projects choose slow and gradual development not because they prefer it, but because this is what the developers involved can afford to do in their spare time. The Debian and Ubuntu projects are illustrative of the differences in style and result when the same community is “shoestringing it” versus having access to capital. The people running many open source projects know that no angel investor would touch them, so they make slow and steady progress to “good” software releases rather than rapid progress to “amazing” software releases.

These new freedom-first companies will be able to prioritize what is best for their projects and their communities without bearing the wrath of their investors. That’s because their communities are their investors.

This is not going to merely create a class of software that can rival current proprietary software vendors. In a sense, current commercial open source companies are already doing a fine job of that. But those open source companies typically have investors who are similarly desperate for hockey-stick returns. Even they must choose software development strategies that will pay off for investors. This new class of company will prefer technical strategies that will pay off for end users.

That might seem like a small distinction, but this incentive tweak will change everything about how software is made.

The new companies that leverage this funding option will look a lot like Canonical. Canonical is the kind of company you get when the geeks are fully in charge, and you have investors who are very tolerant of long-term risk because they grok the underlying technical problems that sometimes take decades to entirely resolve. Also, the investors probably know what the word “grok” means. But, unfortunately, there are only so many Mark Shuttleworth-types around (one as far as I know, but a guy who can get himself into space can probably be first in line for human cloning, too).

Shuttleworth is famous for reading printouts of the Debian mailing list on vacation as he figured out which Debian developers to hire for Canonical, the new Linux startup he was funding. That kind of behavior is not what most financial analysts do before making an investment, but this, and other similar efforts, allowed Shuttleworth to predict and control the future of a very technical financial opportunity. It is this kind of focus that allowed Shuttleworth to make one great investment and know that it would work, rather than making hundreds of investments hoping that one of them would work. Using the JOBS Act, community members that already sustain that level of research about an open source project can make the same kinds of bets, but with much less money. (It is ironic that so many of the critics of the JOBS Act presume that the crowd is ignorant rather than recognizing the potential for hyper-informed investor communities.)

In addition, companies like Canonical, Rackspace, Google, Amazon, and Red Hat might acquire these new companies. All of these established organizations can afford billion-dollar acquisitions, and they are either entirely open source or they are very open-source friendly. This kind of acquisition potential will ensure that once open source technology companies prove themselves, they will have access to series A and B financing. I also expect there will be several new open source mega-companies that emerge that are even more devoted to community and end-user experience than these current open source leaders.

This new class of company will have lots and lots of hockey sticks and plenty of billion-dollar exits. Companies will achieve these exits precisely because they do not focus on them (it’s very Zen). They will choose and execute visionary technical strategies that no outside investor could understand. These strategies will seem obvious to their communities of users/investors. These companies will be able to move into capitalization as soon as their communities are convinced the technical strategies and execution capabilities are sound. All of this will lead to better, faster, bigger open source stuff.


If you’d like to talk with other people who are interested in getting and giving funding for open source companies, I set up a related mailing list and Twitter account (@WeInvestInUs).

Related:

July 25 2012

Inside GitHub’s role in community-building and other open source advances

In this video interview, Matthew McCullough of GitHub discusses what they’ve learned over time as they grow and watch projects develop there. Highlights from the full video interview include:

  • How GitHub builds on Git’s strengths to allow more people to collaborate on a project [Discussed at the 00:30 mark]
  • The value of stability and simple URLs [Discussed at the 02:05 mark]
  • Forking as the next level of democracy for software [Discussed at the 04:02 mark]
  • The ability to build a community around a GitHub repo [Discussed at the 05:05 mark]
  • GitHub for education, and the use of open source projects for university work [Discussed at the 06:26 mark]
  • The value of line-level comments in source code [Discussed at the 09:36 mark]
  • How to be a productive contributor [Discussed at the 10:53 mark]
  • Tools for Windows users [Discussed at the 11:56 mark]
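The fork-based collaboration McCullough describes (“forking as the next level of democracy”) can be sketched with plain git commands. This is a minimal local simulation: a bare repository stands in for a repo hosted on github.com, and all names and identities are placeholders. On the real site you would push the topic branch to your own fork and open a pull request, but the underlying git mechanics are the same:

```shell
# A local sketch of the fork-based workflow: an "upstream" repo,
# a contributor's clone, and a topic branch pushed back for review.
set -e
work=$(mktemp -d)
cd "$work"

# Stand-in for the project's canonical repository.
git init --bare upstream.git
git --git-dir=upstream.git symbolic-ref HEAD refs/heads/main

# Seed it with an initial commit.
git clone upstream.git seed
cd seed
git symbolic-ref HEAD refs/heads/main      # make sure the first branch is "main"
git config user.name "Maintainer"          # placeholder identity
git config user.email "maint@example.com"
echo "hello" > README
git add README
git commit -m "initial commit"
git push origin main
cd ..

# A "fork" is, mechanically, just a clone the contributor controls.
git clone upstream.git fork
cd fork
git config user.name "Contributor"         # placeholder identity
git config user.email "contrib@example.com"

# Work happens on a topic branch, not on main directly.
git checkout -b fix-typo
echo "hello, world" > README
git add README
git commit -m "fix typo in README"

# Publishing the branch is the raw material of a pull request.
git push origin fix-typo
```

The point of the model is that contributors never need write access to the canonical repository; review happens on the published branch before a maintainer merges it.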

You can view the entire conversation in the following video:

Related:

April 06 2012

Promoting and documenting a small software project: VoIP Drupal update

Isn't the integration of mobile phones and the Web one of the hot topics in modern technology? If so, VoIP Drupal should become a fixture of web development and administration. I have been meeting with leaders of the project to help with their documentation and publicity. I reported on my first meeting with them here on Radar, and this posting is one of a series of follow-ups I plan to write.

Immediate pressures

When Leo Burd, the lead developer of VoIP Drupal, first pulled together a small cohort of supporters to help with documentation and promotion, only three weeks were left before DrupalCon, the major Drupal annual conference. Leo was doing some presentations there, and the pressing question was what users would want in order to get interested in VoIP Drupal, learn more about it, and be able to get started.

Leo was indefatigable. He planned to get some new modules finished before the conference, but in addition to coding and preparing his own presentations, he wanted to address the lack of introductory materials because it might get in the way of enticing new users to try VoIP Drupal.

In the end, Leo led a webinar to drum up interest. The little group of half a dozen self-selected fans reviewed his slides, sat in on a preview, and helped whip it into a really well-focused, hard-hitting survey. This webinar had 94 participants from 19 countries and more than 40 companies.

Michele Metts, a non-profit activist and Drupal consultant, also stepped up, creating slides and leading webinars that explain VoIP Drupal basics.

Slide from a VoIP Drupal webinar

Although we agreed that many parts of the system needed more documentation, it was a more effective use of time at this point to do webinars. VoIP Drupal has many interactive aspects that are best shown off through demonstrations. As I'll explain, documentation would require more resources.

Perceptions and challenges

Responses to the webinar and to Leo's DrupalCon presentation helped us understand better what it would take to win more adherents to this useful tool. Leo reported that few people knew of the existence of VoIP Drupal, but that when they heard of it, they assumed it would be expensive. His impression was that they knew traditional PBX systems and didn't realize how much more low-cost and light-weight VoIP is.

In my opinion, Web administrators (and many content providers) still see the Web and the telephone as separate worlds, never to meet. This despite the increasing popularity of running queries and Internet apps on mobile devices. We have to address server-side apathy. There should come a day when the things VoIP Drupal does are taken for granted: people leaving phone numbers on Web sites to receive SMS or voice updates, pressing Web buttons on their cell phones to get an interactive voice menu, etc.

The versatility of VoIP Drupal gives it fantastic potential but makes it harder to explain in an elevator speech. The ways in which voice can be integrated with the Web are nearly infinite. In addition, two distinctly different settings can benefit from VoIP. One consists of highly structured corporate sites, which can use VoIP Drupal for such typical bureaucratic tasks as offering interactive voice menus and routing customers to extensions. The other consists of sites serving underdeveloped areas, where people are more adept at using their phones than dealing with text-based Web sites. We need different pitches for each potential user.

Finally (and here my training as an editor kicks in), there are a number of different audiences who need to understand VoIP Drupal on different levels. Content providers need to understand what it can do and see models for incorporating it into their Drupal sites. Administrators need to understand how to find scripts and offer them to content providers. And programmers need to learn how to write fresh scripts. We also want to attract open source developers who could help enhance and maintain VoIP Drupal. There is even a tool called Visual VoIP Drupal that reduces the programming skill required to create a new script nearly to a minimum, which creates yet another audience.

Documentation needs

My goal in joining the informal VoIP Drupal promotion team was to use this project as a test case for exploring what software projects do for documentation, and how they could do better.

A number of sites have successfully implemented VoIP Drupal, some of them in quite sophisticated ways, but you could call these the alpha developers or early adopters. They managed, for instance, to find the reference documentation, which requires several clicks to access and which I did not notice for weeks. And it's just as well they needed no other documentation, because the tutorials are extremely rudimentary.

I think that VoIP Drupal documentation is typical of early software projects, and in fact better than many. Projects tend to toss the reader a brief tutorial, provide some examples with inadequate explanatory text, and finally dunk the reader in the middle of an API reference. The tutorial tends to be fragile, in the sense that any problem encountered by the reader (a missing step, a change in environment) leaves her in an unrecoverable state, because there is no documentation about the way the system works and its assumptions. And the context for understanding reference documentation is missing, so it can be used only by experienced developers who have seen similar programs in other environments.

Useful VoIP Drupal documentation includes (these are examples):

Our team knew that more descriptive text was needed to pull together these pieces, but whoever wrote a document would need some intensive time with a core developer. This was unfeasible in the rush to get the software in shape for DrupalCon. I found out about the difficulties of producing documents first-hand when I decided to tackle Visual VoIP Drupal, which looked simple and intuitive. Unfortunately, there were a lot of oddities in its behavior, and the simplicity of the interface didn't save me from having to know some of the subtler and less documented aspects of VoIP Drupal programming, such as handling input variables.

In a recent teleconference, I asked a bunch of preliminary questions and got a better idea of what documentation already covers, as well as a starting point for doing more documentation.

Current documentation tasks, in my opinion, include:

  • Expand the tutorials to show more of the capabilities of VoIP Drupal.

  • Provide explanations of key topics, such as different ways to handle voice input, keyboard input, and the metainformation provided by VoIP Drupal about each call. The developers' provision of a simple scripting system on top of PHP, and even more the creation of Visual VoIP Drupal, demonstrates their commitment to reaching non-programmers, but we have to follow through by filling in the background they lack.

  • Create a few videos or webinars on Visual VoIP Drupal.

  • Make links to the reference documentation more prominent, and link to it liberally in the tutorials and background documents. The use of doc strings from the code is reasonable for reference documentation, because nobody is asking it to look pretty and we want to maximize the probability that it will stay up to date.

  • Ask the community for examples and case studies, and describe what makes each one an interesting use of VoIP Drupal.

There's plenty that could be done, such as describing how to integrate VoIP Drupal into existing PHP code (and therefore more fully into existing Drupal pages), but that can be postponed. Leo said he's "particularly interested in working with Micky and the rest of this group in the creation of a Visual VoIP Drupal webinar, and one about things we can do with VoIP Drupal right out of the box, with no or minimum programming."

What motivates people like Michele and me to put so much time into this project? Certainly, Michele wants to promote a project that she uses in her own work so that it thrives and continues to evolve. I can use what I learn from this work to provide services to other open source communities. But beyond all these individual rewards is a gut feeling that VoIP Drupal is cool. Using it is fun, and talking about it is also fun. Projects have achieved success for more light-weight reasons.

March 01 2012

Permission to be horrible and other ways to generate creativity

I met Denise R. Jacobs (@denisejacobs) the old-fashioned way, not through Twitter or LinkedIn: a mutual acquaintance introduced us. We corresponded via email and actually got together in person a few months later at Web 2.0 Expo, where Denise was speaking. I was impressed both by her passion for giving people the knowledge, tools, and resources to feel more empowered in their work and by the breadth of her experience. Denise wrote "The CSS Detective Guide" and co-authored "InterAct with Web Standards." She also develops curricula for the Web Standards Project Education Task Force and was nominated for .Net Magazine's 2010 Best of the Web "Standards Champion" award.

I spoke with Denise recently about her experiences writing her book, how that led her to new ways of thinking, how she got started in web design, and other projects.

You're known for your web design work. What motivated you to explore the more non-technical topics of creative inspiration?

Denise R. Jacobs: While writing "The CSS Detective Guide," I had a huge epiphany about myself and my ideas of creativity. I had to do battle on a daily basis with my inner critic and figure out ways to silence it, so that I could just get the work done.

In an industry where people are constantly producing wonderful things, it's really hard not to compare yourself to others. In terms of creativity and inspiration, it's easy to have panicky moments when you feel as though you can't come up with another idea, a new design, more content. I wanted to formulate ways to access creativity and channel that amazing feeling that you can take on the world, both for myself and to help other people. So I wrote an article as a way to solidify my own techniques and to help anybody else who may need to silence a mean voice in their head as well.

Creativity isn't always associated with the technical community. Why is that?

Denise R. Jacobs: It's because there's such a limited definition of creativity in our culture. People treat artists as if they're off in their own world or put them on a pedestal. But it's a misconception that technical people aren't creative. Developers and coders and database architects are extremely creative, just as scientists are. They have to come up with solutions and code that have never been written before. If that's not creativity, I don't know what is.

I'm reading "A Whole New Mind" by Daniel H. Pink, which explores how right-brain is the new wave. We're entering a new conceptual, high-touch era whereas before we were in a very analytical era. Our industry, the technical industry, is actually a perfect in-between point of left brain and right brain. You have to have both, a whole-brain approach, to be successful in our industry.

What steps can people take to bring creativity into their professional and personal lives?

Denise R. Jacobs: One of my favorite techniques for being creative, and productive in general, is to give yourself permission to be as horrible at something as you possibly can, to even mess it up. That permission actually lowers inhibition filters and allows you to take chances that you would normally not take. Often that ends up making it good because you're not as invested in it and therefore not as self-conscious about the process.

Another important technique is to set aside time where your brain is resting, where you're not actually trying to produce something. Give it space to be able to make connections that it wouldn't necessarily have made before. Insights come when you're taking a walk, sitting on the beach or the park bench, playing with your dog. Because your brain is relaxing, it can go places that it doesn't usually go when you're concentrating or you're thinking hard.

In this industry, there's a subculture that is always on — on the computer, on social networks, connecting with people. There is never a time to not be on. When you're at dinner with a friend, you're checking in on Foursquare. You're tweeting. You're taking a picture to upload to your Facebook profile. Texting friends. To just be off is huge and can make all of the difference in the world.

With social media and other tools for people to come together, both in real life and virtually, what do you think about the state of communities today?

Denise R. Jacobs: I could be biased, but one thing I do see is that despite all of our virtual connections, real life is kind of awkward. People are so used to communicating with each other digitally, texting for instance, that they're starting to lose the capacity to have genuine in-person connections to some degree. People aren't really engaging with each other, yet they depict it as engagement to keep themselves entertained.

A trend I'd like to see is for communities and people who make connections virtually to solidify that with an in-person connection. And if you make an in-person connection, then further solidify that with a virtual connection. Let there be a constant ebb and flow, a circuit going back and forth between both real life and virtual connections so that you can't really rely completely on either one. That's why we have these tools — we crave connection. We don't really have enough of it, but we can't depend solely on tools to create all of the connection that we need and vice versa.

What trends and people are you following?

Denise R. Jacobs: Location and self-publishing are trends I watch popping up all over the place. There are so many things going on that it's kind of overwhelming. I rely on serendipity and I focus more on concepts, ideas, and people because they are what underlie the trends. I am inspired by unapologetic creativity and unapologetic cleverness. I admire the younger people coming into the industry who are developing and innovating like crazy.

I admire the work Jane McGonigal is doing, her "Reality Is Broken" book and her whole gaming productivity movement. She takes ownership of being a woman in an industry where that's not typical and doesn't tone herself down at all. She's very feminine and a badass, has a PhD and awesome ideas, and that's just the way it is with her.

I also admire Kathy Sierra because she's been around for a while and she's also an incredibly intelligent and clever person, a great speaker, and also someone with a lot of really wonderful ideas.

Tell us about your Rawk the Web project.

Denise R. Jacobs: There are a lot of diverse experts in the tech industry, women and people of color, but they're not very visible in terms of speaking at conferences or writing articles or books or whatever. It's not that conferences or publishers don't want a more diverse lineup, but often they just don't know who to get or how to go about it.

I was at a conference last year and the organizer asked me to fill in for a speaker who had to cancel. Afterwards, I ended up talking to a woman who really wanted to become a speaker but didn't know where to start. This was a perfect example of what people are probably saying to themselves. "I don't know enough. How do I get started? It seems really imposing. There's no room for anybody new."

I started Rawk the Web to give people actual information and have experts share their story about how they got started so that other people can see that they can do it, too. I also want to provide resources to people who may be inclined to give women and people of color more visibility, a network of people they can talk to and get inspiration from to take that first step. This is a really good time for it because people see me at conferences and notice I'm often the only brown person there — they're very conscious of it and glad to see me on stage. I'm hoping to launch it in June and that there will eventually be a Rawk the Web Conference. I know I'm not the only person working on this issue, but I'd like it to be more of a concentrated effort.

How did you get started with CSS and what do you see in its near future?

Denise R. Jacobs: Back in late 1996, nobody was updating the website at the place I was working so I volunteered to take care of it. During that process, I taught myself HTML — it was actually before CSS had really been widely embraced. Over the course of the next few years, I worked in localization for a Microsoft product, then I was a web group product manager at another software company, then later an instructor at Seattle Central Community College in their web design and development programs. Around 2002, web standards started becoming more popular. It was so much better and so much easier. One file to control the whole website — brilliant! It was an amazing, exciting time, to see the changing of the guard, what the web was moving from and what it was moving toward.

I couldn't call myself a web design instructor in good conscience without knowing CSS and I couldn't send students out into the world with outdated and inefficient skills. So I keep up with the trends, particularly by reading articles on A List Apart, and blogs by Dave Shea, Andy Budd and Doug Bowman.

As for the future of CSS, there's going to be a lot more reliance on and trust in browsers. Browser vendors know what an important role they play and that browser wars don't do much good. More browser companies are working together with the W3C to establish and embrace standards.

Because of that, changes are happening faster. There's a big push for people to get up to speed with current best practices and develop new ones. For things like page layouts and CSS3, there are some really neat properties that are going to change the way people think about their approach to web layouts and the craft of building websites. It's going to be interesting to see how long those properties take to be adopted and what people come up with for them.

This interview was edited and condensed.

Related:

February 17 2012

Documentation strategy for a small software project: launching VoIP Drupal introductions

VoIP Drupal is a window onto the promises and challenges faced by a new open source project, including its documentation. At O'Reilly, we've been conscious for some time that we lack a business model for documenting new collaborative projects--near the beginning, at the stage where they could use the most help with good materials to promote their work, but don't have a community large enough to support a book--and I joined VoIP Drupal to explore how a professional editor can help such a team.

Small projects can reach a certain maturity with poor and sparse documentation. But the critical move from early adopters to the mainstream requires a lot more hand-holding for prospective users. And these projects can spare hardly any developer time for documentation. Users and fans can be helpful here, but their documentation needs to be checked and updated over time; furthermore, reliance on spontaneous contributions from users leads to spotty and unpredictable coverage.

Large projects can hire technical writers, but what they do is very different from traditional documentation; they must be community managers as well as writers and editors (see Anne Gentle's book Conversation and Community: The Social Web for Documentation). So these projects can benefit from research into communities also.

I met at the MIT Media Lab this week with Leo Burd, the inventor of VoIP Drupal, and a couple other supporters, notably Micky Metts of DrupalConnection.com. We worked out some long-term plans for firming up VoIP Drupal's documentation and other training materials. But we also had to deal with an urgent need for materials to offer at DrupalCon, which begins in just over one month.

Challenges

One of the difficulties of explaining VoIP Drupal is that it's just so versatile. The foundations are simple:

  • A thin wrapper around PHP permits developers to write simple scripts that dial phone numbers, send SMS messages, etc. These scripts run on services that initiate connections and do translation between voice and text (Tropo, Twilio, and the free Plivo are currently supported).

  • Administrators on Drupal sites can use the Drupal interface to configure VoIP Drupal modules and add phone/SMS scripts to their sites.

  • Content providers can use the VoIP Drupal capabilities provided by their administrators to do such things as send text messages to site users, or to enable site users to record messages using their phone or computer.
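The scripting layer above is easiest to grasp with a small example. The following PHP-style pseudocode is illustrative only: the class and method names (VoipScript, addSay, addGetInput, addHangup) follow the VoipScript API as I understand it, but treat the exact names and signatures as assumptions and check the reference documentation before copying anything.

```
// Illustrative sketch of a VoIP Drupal script (names are assumptions).
// A script is a queue of commands that the underlying service
// (Tropo, Twilio, or Plivo) executes once a call is connected.
$script = new VoipScript('welcome_caller');
$script->addSay('Welcome to our community site.');
// Prompt the caller and store their keypad input for later branching.
$script->addGetInput('Press 1 for announcements, or 2 to leave a message.');
$script->addSay('Thank you. Goodbye.');
$script->addHangup();
```

The same queue-of-commands pattern underlies everything the administrators and content providers configure through the Drupal interface.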

Already you can see one challenge: VoIP Drupal has three different audiences that need very different documentation. In fact, we've thought of two more audiences: decision-makers who might build a business or service on top of VoIP Drupal, and potential team members who will maintain and build new features.

Some juicy modules built on top of VoIP Drupal's core extend its versatility to the point where it's hard to explain on an elevator ride what VoIP Drupal could do. Leo tosses out a few ideas such as:

  • Emergency awareness systems that use multiple channels to reach out to a population living in a certain area. That would require a combination of user profiling, mapping, and communication capabilities that tend to be extremely hard to put together in a single package.

  • Community polling/voting systems that are accessible via web, SMS, email, phone, etc.

  • CRM systems that keep track (and even record) phone interactions, organize group conference calls with the click of a button, etc.

  • Voice-based bulletin boards.

  • Adding multiple authentication mechanisms to a site.

  • Sending SMS event notifications based on Google Calendars.

In theory you could create a complete voice and SMS based system out of VoIP Drupal and ignore the web site altogether, but that would be a rather cumbersome exercise. VoIP Drupal is well-suited to integrating voice and the Web--and it leaves lots of room for creativity.

Long-term development

A community project, we agreed, needs to be incremental and will result in widely distributed documents. Some people like big manuals, but most want a quickie getting-started guide and then lots of chances to explore different options at their own pace. Communities are good for developing small documents of different types. The challenge is finding someone to cover any particular feature, as well as to do the sometimes tedious work of updating the document over time.

We decided that videos would be valuable for the administrators and content providers, because they work through graphical interfaces. However, the material should also be documented in plain text. This expands access to the material in two ways. First, VoIP Drupal may be popular in parts of the world where bandwidth limitations make it hard to view videos. Second, the text pages are easier to translate into other languages.

Just as a video can be worth a thousand words, working scripts can replace a dozen explanations. Leo will set up a code contribution site on Github. This is more work than it may seem, because malicious or buggy scripts can wreak havoc for users (imagine someone getting a thousand identical SMS messages over the course of a single hour, for instance), so contributions have to be vetted.

Some projects assign a knowledgeable person or two to create an outline, then ask community members to fill it in. I find this approach too restrictive. Having a huge unfilled structure is just depressing. And one has to grab the excitement of volunteers wherever it happens to land. Just asking them to document what they love about a project will get you more material than presenting them with a mandate to cover certain topics.

But then how do you get crucial features documented? Wait and watch forums for people discussing those features. When someone seems particularly knowledgeable and eager to help, ask him or her for a longer document that covers the feature. You then have to reward this person for doing the work, and a couple ways that make sense in this situation include:

  • Get an editor to tighten up the document and work with the author to make a really professional article out of it.

  • Highlight it on your web site and make sure people can find it easily. For many volunteers, seeing their material widely used is the best reward.

We also agreed that we should divide documentation into practical, how-to documents and conceptual documents. Users like to grab a hello-world document and throw together their first program. As they start to shape their own projects, they realize they don't really understand how the system fits together and that they need some background concepts. Here is where most software projects fail. They assume that the reader understands the reasoning behind the design and knows how best to use it.

Good conceptual documentation is hard to produce, partly because the lead developers have the concepts so deeply ingrained that they don't realize what it is that other people don't know. Breaking the problems down into small chunks, though, can make it easier to produce useful guides.

Like many software projects, VoIP Drupal documentation currently starts the reader off with a list of modules. The team members liked an idea of mine to replace these with brief tutorials or use cases. Each would start with a goal or question (what the reader wants to accomplish) and then introduce the relevant module. In general, given the flexibility of VoIP Drupal, we agreed we need a lot more "why and when" documentation.

Immediate preparations

Before we take on a major restructuring and expansion of documentation, though, we have a tight deadline for producing some key videos and documents. Leo is going to lead a development workshop at DrupalCon, and he has to determine the minimum documentation needed to make it a productive experience. He also wants to do a webinar on February 28 or 29, and a series of videos on basic topics such as installing VoIP Drupal, a survey of successful sites using it, and a nifty graphical interface called Visual VoIP Drupal. Visual VoIP Drupal, which will be released in a few weeks, is one of the new features Leo would like to promote in order to excite users. It lets a programmer select blocks and blend them into a script through a GUI, instead of typing all the code.

The next few weeks will bring a flurry of work to realize our vision.

February 03 2012

January 31 2012

Four short links: 31 January 2012

  1. The Sky is Rising -- TechDirt's Mike Masnick has written (and made available for free download) an excellent report on the entertainment industry's numbers and business models. Must read if you have an opinion on SOPA et al.
  2. Tennis Australia Exposes Match Analytics -- Served from IBM's US-based private cloud, the updated SlamTracker web application pulls together 39 million points of data collated from all four Grand Slam tournaments over the past seven years to provide insights into a player's style of play and progress. The analytics application also provides a player's likelihood of beating their opponent through each round of the two-week tournament and the 'key to the match' required for them to win. "We gave our data to IBM, said, 'Here we go, that's 10 years of scores and stats, matches and players'," said Samir Mahir, CIO at Tennis Australia. Data as way to engage fans. (via Steve O'Grady)
  3. Data Monday: Logins and Passwords (Luke Wroblewski) -- Password recovery is the number one request to help desks for intranets that don’t have single sign-on portal capabilities.
  4. QR Codes: Bad Idea or Terrible Idea? (Kevin Marks) -- People have a problem finding your URL. You post a QR Code. Now they have 2 problems. I prefer to think of QR codes as a prototype of what Matt Jones calls "the robot-readable world"--not so much the technology we really imagine we will be deploying when we build our science fictiony future.

December 15 2011

Creative Commons: public discussion of version 4.0 begins

Next year there will be a new version of the Creative Commons Public Licenses (CCPL), that is, the six core Creative Commons licenses (see the blog post on creativecommons.org). It is meant to respond to how the net and its protagonists have changed since 2007, the year the current version 3.0 was born. The interval between versions has roughly doubled each time. With the coming version 4.0, Creative Commons is trying to create a standard that should last for a decade or longer, an eternity in Internet time.


Even more wisdom of the crowd, even earlier

Unlike all previous version updates, the process does not begin with a draft written by a small circle, but with an open, public discussion, which already got underway at the CC Global Summit in Warsaw in September. Only from that discussion will the first draft emerge at the end of February 2012, and it will then be posted online once more for public review. The first phase is mainly about collecting wishes and proposals for what version 4.0 should accomplish or approach differently than 3.0.

The discussion will take place on three channels: above all on the mailing list cc-licenses@lists.ibiblio.org, but also in the CC wiki and, separately there, in a sandbox. The sandbox already has the rough structure of a CC license and is meant to collect proposals, organized along that structure, for how version 4.0 should best be structured and worded.

A new approach to open data? Fine-tuning of commercial versus non-commercial?

As for content, everything really is open for discussion. Even a complete linguistic overhaul of the legally binding license text, which is written for lawyers, to make it easier to understand is being considered. The Australian CC project showed how much is possible here when it ported version 3.0.

A main motivation for undertaking a new version at all, however, is the rapidly growing importance of open data initiatives. The European custodians of large data collections, in particular, often have trouble with the way CCPL 3.0 handles the sui generis database right. This has repeatedly prevented data collections from being released, or has led to isolated solutions with licenses that are merely modeled on the CCPL but not legally compatible with it.

Compatibility between different license systems is another top topic for 4.0. Whether content on the net can be combined often stands or falls with the question of whether a use is commercial or non-commercial. Here the previous versions of the CC licenses contain a definition that many consider too imprecise. Whether version 4.0 will sharpen it is an open question; there are good arguments both for and against.

Taking part is worth the effort

Never before has the drafting work at Creative Commons been as participatory as it is now. So anyone who has ideas, or even just vague notions, about what could be improved in the CCPL, how, and for what purpose, should bring them into the discussion now. CCPL 4.0 will be that much better for it and will foster even more creative activity on the net, from which everyone ultimately benefits. Until the end of February 2012, anyone can help work on the foundations of the future CCPL.

November 08 2011

Radar is now on Google+ (officially this time)

Screenshot of Radar's Google+ page.

If you're a Google+ user and a Radar reader — I'm guessing there's a lot of overlap in that Venn diagram — you might be interested in following Radar's new Google+ page.

We're working out a gameplan for our Google+ coverage that extends and enhances what we do here on the Radar website. In addition to +-specific content, we'll also be surfacing relevant material from our YouTube archive and passing along intriguing content we encounter in our web travels.

The thing I find most interesting about Google+ is its friction-free engagement. The sharing and commenting functions are dead simple, so I anticipate mining those tools to make Radar's Google+ presence a strong two-way channel.

As you can gather, we'll be experimenting with our Google+ offerings quite a bit. If you have things you'd like to see, please chime in through the comments on this post or visit Radar's Google+ page and chat with us there.

(FYI — we've also set up Google+ pages for O'Reilly Media, O'Reilly Tools of Change for Publishing, and Ignite. More to come ...)

Related:

October 21 2011

Wrap-up from FLOSS Manuals book sprint at Google

At several points during this week's documentation sprint at Google, I talked with the founder of FLOSS Manuals, Adam Hyde, who developed the doc sprint as it is practiced today. Our conversation often returned to the differences between the group writing experience we had this week and traditional publishing. The willingness of my boss at O'Reilly Media to send me to this conference shows how interested the company is in learning what we might be able to take from sprints.

Some of the differences between sprints and traditional publishing are quite subtle. The collaborative process is obviously different, but many people outside publishing might not realize just how deeply the egoless collaboration of sprints flies in the face of the traditional publishing model. The reason is that publishing has long depended on the star author. In whatever way a person becomes this kind of star, whether by working his way up the journalism hierarchy like Thomas Friedman or bursting on the scene with a dazzling personal story like Greg Mortenson (author of Three Cups of Tea), stardom is almost the only way to sell books in profitable numbers. Authors who use the books themselves to build stardom still need to keep themselves in the public limelight somehow. Without colorful personalities, the publishing industry needs a new way to make money (along with Hollywood, television, and pop music).

But that's not the end of differences. Publishers also need to promise a certain amount of content, whereas sprinters and other free documentation projects can just put out what they feel like writing and say, "If you want more, add it." Traditional publishing will alienate readers if books come out with certain topics missing. Furthermore, if a book lacks a popular topic that a competitor has, the competitor will trounce the less well-endowed book in the market. So publishers are not simply inventing needs to maintain control over the development effort. They're not exerting control just to tamp down on unauthorized distribution or something like that. When they sell content, users have expectations that publishers strive to meet, so they need strong control over the content and the schedule for each book.

But O'Reilly, along with other publishers across the industry, is trying to change expectations. The goal of comprehensiveness conflicts with another goal, timeliness, that is becoming more and more important. We're responding in three ways that bring us closer to what FLOSS Manuals is doing: we put out "early releases" containing parts of books that are underway, we sign contracts for projects on limited topics that are very short by design, and we're experimenting with systems that are even closer to the FLOSS Manuals system, allowing authors to change a book at whim and publish a new version immediately.

Although FLOSS Manuals produces free books and gets almost none of its funding from sales (the funding comes from grants and from the sponsors of sprints), the idea of sprinting is still compatible with traditional publishing, in which sales are the whole point. Traditional publishers tend to give several thousand dollars to authors in the form of advances, and if the author takes several months to produce a book, we don't see the royalties that pay us back for that investment for a long time. Why not spend a few thousand dollars to bring a team of authors to a pleasant but distraction-free location (I have to admit that Google headquarters is not at all distraction-free) and pay for a week of intense writing?

Authors would probably find it much more appealing to take a one-week vacation and say good-bye to their families for this time than to spend months stealing time on evenings and weekends and apologizing for not being fully present.

The problem, as I explained in my first posting this week, is that you never quite know what you're going to get from a sprint. In addition, the material is still rough at the end of a week and has to absorb a lot of work to rise to the standards of professional publishing. Still, many technical publishers would be happy to get over a hundred pages of relevant material in a single week.

Publishers who fail to make documents free and open might be more disadvantaged when seeking remote contributions. Sprints don't get many contributions from people outside the room where they are conducted, but sometimes advice and support weigh in on some critical, highly technical point. The sprints I have participated in (sometimes remotely) benefited from answers that came out of the cloud to resolve difficult questions. For instance, one remote commenter on this week's KDE book warned us we were using product names all wrong and had us go back through the book to make sure our branding was correct.

Will people offer their time to help authors and publishers develop closed books? O'Reilly has put books online during development, and random visitors do offer fixes and comments. There is some good will toward anyone who wants to offer guidance that a community considers important. But free, open documents are likely to draw even more help from crowdsourcing.

At the summit today, with the books all wrapped up and published, we held a feedback session. The organizers asked us our opinions on the sprint process, the writing tools, and how to make the sprint more effective. Our facilitator raised three issues that, once again, reminded me of the requirements of traditional publishing:

  • Taking long-term responsibility for a document. How does one motivate people to contribute to it? In the case of free software communities, they need to make updates a communal responsibility and integrate the document into their project life cycle just like the software.

  • Promoting the document. Without lots of hype, people will not notice the existence of the book and pick it up. Promotion is pretty much the same no matter how content is produced (social networking, blogging, and video play big roles nowadays), but free books are distinguished by the goal of sharing widely without concern for authorial control or payment. Furthermore, while FLOSS Manuals is conscious of branding, it does not use copyright or trademarks to restrict use of graphics or other trade dress.

  • Integrating a document into a community. This is related to both maintenance and promotion. But every great book has a community around it, and there are lots of ways people can use them in training and other member-building activities. Forums and feedback pages are also important.

Over the past decade, a system of information generation has grown up in parallel with the traditional expert-driven system. In the old system, everyone defers to an expert; in the new system, the public pools its resources. In the old system, documents are fixed after publication; in the new system, they are fluid. The old system is driven by the author's ego and increasingly by the demand to generate money, whereas the new system has revenue possibilities but also a strong sense of responsibility for the welfare of communities.

Mixtures of grassroots content generation and unique expertise have existed before (Homer, for instance), and more models will be found. Understanding the points of commonality between the two systems will help us develop such models.

(All my postings from this sprint are listed in a bit.ly bundle.)

FLOSS Manuals books published after three-day sprint

The final day of the FLOSS Manuals documentation sprint at Google began with a bit of a reprieve from Sprintmeister Adam Hyde's dictum that we should do no new writing. He allowed us to continue work till noon, time that the KDE team spent partly in heated arguments over whether we had provided enough coverage of key topics (the KDE project architecture, instructions for filing bug reports, etc.), partly in scrutinizing dubious material the book had inherited from the official documentation, and (at least a couple of us) actually writing material for chapters that readers may or may not find useful, such as a glossary.

I worried yesterday that the excitement of writing a complete book would be succeeded by the boring work of checking flow and consistency. Some drudgery was involved, but the final reading allowed groups to revisit their ways of presenting concepts and bringing in the reader.

Having done everything I thought I could do for the KDE team, I switched to the OpenStreetMap team, which produced a short, nicely paced, well-illustrated user guide. I think it's really cool that Google, which invests heavily in its own mapping service, helps OpenStreetMap as well. (They are often represented in Google Summer of Code.)

After dinner we started publishing our books. The new publication process at FLOSS Manuals loads the books not only to the FLOSS Manuals main page but to Lulu for purchase.

Publishing books at doc sprint

Joining the pilgrimage that all institutions are making toward wider data use, FLOSS Manuals is exposing more and more of the writing process. As founder Adam Hyde describes in a blog posting today, Visualising your book, FLOSS Manuals recently added tools that help participants and friends follow the progress of the book and get a sense of what was done. An RSS feed lists the chapters edited, for instance, and a timeline with circles representing chapter edits shows which chapters had the most edits and when activity took place. (Pierre Commenge created the visualization for FLOSS Manuals.)
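As a rough illustration of what such a feed makes possible, a few lines of Python can tally edits per chapter. The feed fragment below is made up for the example; the real FLOSS Manuals feed's item structure may well differ.

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Invented fragment in the shape of an RSS feed of chapter edits.
feed = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <item><title>Edited: Getting Started</title><pubDate>Thu, 20 Oct 2011 09:12:00 GMT</pubDate></item>
  <item><title>Edited: Filing Bug Reports</title><pubDate>Thu, 20 Oct 2011 10:03:00 GMT</pubDate></item>
  <item><title>Edited: Getting Started</title><pubDate>Thu, 20 Oct 2011 11:40:00 GMT</pubDate></item>
</channel></rss>"""

# Count how many times each chapter shows up in the edit feed.
root = ET.fromstring(feed)
edits = Counter(item.findtext("title") for item in root.iter("item"))
for chapter, count in edits.most_common():
    print(f"{chapter}: {count}")
```

A tally like this is the raw material behind the timeline visualization: edits bucketed per chapter (and, with the `pubDate` fields, per hour) show where the activity was.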

Participants at doc sprint

(All my postings from this sprint are listed in a bit.ly bundle.)

October 20 2011

Day two of FLOSS Manuals book sprint at Google Summer of Code summit

We started the second day of the FLOSS Manuals sprint with a circle encounter where each person shared some impressions of the first day. Several reported that they had worked on wikis and other online documentation before, but discovered that doing a book was quite different (I could have told them that, of course). They knew that a book had to be more organized, and offer more background than typical online documentation. More fundamentally, they felt more responsibility toward a wider range of readers, knowing that the book would be held up as an authority on the software they worked on and cared so much about.

We noted how gratifying it was to get questions answered instantly and be able to go through several rounds of editing in just a couple minutes. I admitted that I had been infected with the enthusiasm of the KDE developers I was working with, but had to maintain a bit of critical distance, an ability to say, "Hey, you're telling me this piece of software is wonderful, but I find its use gnarly and convoluted."

As I explained in Monday's posting, all the writing had to fit pretty much into two days. Each of the four teams started yesterday by creating an outline, and I'm sure my team was not the only one to revise it constantly throughout the day.

Circle at beginning of the day

Today, the KDE team took a final look at the outline and discussed everything we'd like to add to it. We pretty much finalized it early in the day and just filled in the blanks for the next eleven hours. I continued to raise flags about what I felt were insufficiently detailed explanations, and got impatient enough to write a few passages of my own in the evening.

Celebrating our approach to the end of the KDE writing effort

The KDE book is fairly standard developer documentation, albeit a beginner's guide with lots of practical advice about working in the KDE environment with the community. As a relatively conventional book, it was probably a little easier to write (but also probably less fun) than the more high-level approaches taken by some other teams that were trying to demonstrate to potential customers that their projects were worth adopting. Story-telling will be hard to detect in the KDE book.

And we finished! Now I'm afraid we'll find tomorrow boring, because we won't be allowed (and probably won't need) to add substantial new material. Instead, we'll be doing things like checking everything for consistency, removing references to missing passages, adding terms to the glossary, and other unrewarding slogs through a document that is far too familiar to us already. The only difference between the other team members and me is that I may be assigned to do this work on some other project.

(All my postings from this sprint are listed in a bit.ly bundle.)

October 19 2011

Day one of FLOSS Manuals book sprint at Google Summer of Code summit

Four teams at Google launched into endeavors that will lead, less than 72 hours from now, to complete books on four open source projects (KDE, OpenStreetMap, OpenMRS, and Sahana Eden). Most participants were recruited on the basis of a dream and a promise, so going through the first third of our sprint was eye-opening for nearly everybody. Although I had participated in one sprint before on-site and two sprints remotely, I found that the modus operandi has changed so much during the past year of experimentation that I too had a lot to learn.

Our doc sprint coordinator, Adam Hyde, told each team to spend an hour making an outline. The team to which I was assigned, KDE, took nearly two, and partway through Adam came in to tell us to stop because we had enough topics for three days of work. We then dug into filling in the outline through a mix of fresh writing and cutting and pasting material from the official KDE docs. The latter often proved to be more than a year out of date and required a complete overhaul.

KDE team at doc sprint

The KDE team's focus on developer documentation spared them the open-ended discussions over scope that the other teams had to undergo. But at key points during the writing, we were still forced to examine passages that appeared too hurried and unsubstantiated, evidence of gaps in information. At each point we had to determine what the hidden topics were, and then whether to remove all references to them (as we did, for instance, on the topic of getting permission to commit code fixes) or to expand them into new chapters of their own (as we did for internationalization). The latter choice created a dilemma of its own, because none of the team members present had experience with internationalization, so we reached out and tried to contact remote KDE experts who could write the chapter.

The biggest kudos today go to Sahana Eden, I think. I reported yesterday that the team expressed deep differences of opinion about the audience they should address and how they should organize their sprint. Today they made some choices and got a huge amount of documentation down on the screen. Much of it was clearly provisional (they were booed for including so many bulleted lists), but it was evidence of their thinking and a framework for further development.

Sahana team at doc sprint

My own team had a lot of people with serious jet lag, and we had some trouble going from 9:00 in the morning to 9:30 at night. But we created (or untangled, as the case may be) some 60 pages of text. We reorganized the book at least once per hour, a process that the FLOSS Manuals interface makes as easy as drag and drop. A good heuristic was to choose a section title for each group of chapters. If we couldn't find a good title, we had to break up the group.

The end of the day brought us to the half-way mark for writing. We are told we need to complete everything by the end of the evening tomorrow and spend the final day rearranging and cleaning up text. More than a race against time, this is proving to be a race against complexity.

Topics for discussion at doc sprint

October 18 2011

FLOSS Manuals sprint starts at Google Summer of Code summit

Five days of intense book production kicked off today at the FLOSS Manuals sprint, hosted by Google. Four free software projects have each sent three to five volunteers to write books about the projects this week. Along the way we'll all learn about the group writing process and the particular use of book sprints to make documentation for free software.

I came here to provide whatever editorial help I can and to see the similarities and differences between conventional publishing and the intense community effort represented by book sprints. I plan to spend time with each of the four projects, participating in their discussions and trying to learn what works best by comparing what they bring in the way of expertise and ideas to their projects. All the work will be done out in the open on the FLOSS Manuals site for the summit, so you are welcome also to log in and watch the progress of the books or even contribute.

A book in a week sounds like a pretty cool achievement, whether for a free software project or a publisher. In fact, the first day (today) and last day of the sprint are unconferences, so there are only three days for actual writing. The first hour tomorrow will be devoted to choosing a high-level outline for each project, and then they will be off and running.

And there are many cautions about trying to apply this model to conventional publishing. First, the books are never really finished at the end of the sprint, even though they go up for viewing and for sale immediately. I've seen that they have many rough spots, such as redundant sections written by different people on the same topic, and mistakes in cross-references or references to non-existent material. Naturally, they also need a copy-edit. This doesn't detract from the value of the material produced. It just means they need some straightening out to be considered professional quality.

Books that come from sprints are also quite short. I think a typical length is 125 pages, growing over time as follow-up sprints are held. The length also depends of course on the number of people working on the sprint. We have the minimum for a good sprint here at Google, because the three to five team members will be joined by one or two people like me who are unaffiliated.

Finally, the content of a sprint book is decided on an ad hoc basis. FLOSS Manuals founder Adam Hyde explained today that his view of outlining and planning has evolved considerably. He quite rationally assumed at first that every book should have a detailed outline before the sprint started. Then he found that one could not impose an outline on sprinters, but had to let them choose subjects they wanted to cover. Each sprinter brings certain passions, and in such an intense environment one can only go with the flow and let each person write what interests him or her. Somehow, the books pull together into a coherent product, but one cannot guarantee they'll have exactly what the market is asking for. I, in fact, was involved in the planning of a FLOSS Manuals sprint for the CiviCRM manual (the first edition of a book that is now in its third) and witnessed the sprinters toss out an outline that I had spent weeks producing with community leaders.

So a sprint is different in every way from a traditional published manual, and I imagine this will be true for community documentation in general.

The discussions today uncovered the desires and concerns of the sprinters, and offered some formal presentations to prepare us, we hope, for the unique experience of doing a book sprint. The concerns expressed by sprinters were pretty easy to anticipate. How does one motivate community members to write? How can a project maintain a book in a timely manner after it is produced? What is the role of graphics and multimedia? How does one produce multiple translations?

Janet Swisher, a documentation expert from Austin who is on the board of FLOSS Manuals, gave a presentation asking project leaders to think about basic questions such as why a user would use their software and what typical uses are. Her goal was to bring home the traditional lessons of good writing: empathy for a well-defined audience. "If I had a nickel for every web site I've visited put up by an open source project that doesn't state what the software is for..." she said. That's just a single egregious instance of the general lack of understanding of the audience that free software authors suffer from.

Later, Michael McAndrew of the CiviCRM project took us several steps further along the path, asking what the project leaders would enjoy documenting and "what would be insane to leave out." I sat with the group from Sahana to watch as they grappled with the pressures these questions created. This sure is one passionate group of volunteers, caring deeply about what they do. Splits appeared concerning how much time to devote to high-level concepts versus practical details, which audiences to serve, and what to emphasize in the outline. I have no doubt, however, listening to them listen to each other, that they'll have their plan after the first hour tomorrow and will be off and running.

September 23 2011

Top Stories: September 19-23, 2011

Here's a look at the top stories published across O'Reilly sites this week.

Cooking the data
Open data and transparency aren't enough: we need True Data, not Big Data, as well as regulators and lawmakers willing to act on it.

BuzzData: Come for the data, stay for the community
BuzzData looks to tap the gravitational pull of data, then keep people around through conversation and collaboration.

At its best, digital design is choreography
In this brief interview, Threepress Consulting owner Liza Daly tackles a question about formatting content for browser publishing. She says for design to succeed, authors, artists and developers must work together.

Five digital design ideas from Windows 8
Microsoft's Metro interface offers plenty for digital book designers to study. The best part? Whether or not Microsoft actually ships something that matches their demo, designers can benefit from the great thinking they've done.


The problem with deep discount ebook deals
Joe Wikert says publishers should move away from one-product deep discount campaigns and start thinking about how to build a much more extensive relationship with customers.




Android Open, being held October 9-11 in San Francisco, is a big-tent meeting ground for app and game developers, carriers, chip manufacturers, content creators, OEMs, researchers, entrepreneurs, VCs, and business leaders. Save 20% on registration with the code AN11RAD.

July 28 2011

The future of community



This morning Jono Bacon from Canonical kicked off OSCON by talking about "The Future of Community", which he admitted was a vague and dangerous title to choose. People who try to predict the future tend to fail, but that didn't stop him. After musing about being chosen to give a keynote at OSCON he dove into his main point about how he feels that community management is at the beginning of a renaissance.

Historically, the first communities were human tribes. A lot of challenges faced early tribes: How do you feed tribe members? How do you keep tribes healthy? People didn't have community managers, they simply tried things and learned from what worked and what didn't work. Tribe members didn't set out to be community leaders, much like open source leaders didn't set out to be leaders. Open source hackers that succeed in creating valuable open source projects become leaders because of their efforts and the same was true for early tribes.

Jono went on not to predict the future, but to share an informed trend: community management is at a renaissance. The Renaissance connected the Dark Ages with the Enlightenment, and people started to educate themselves. People created a repeatable science: if this happens and you then take that action, you can expect a certain repeatable outcome. For instance, the sequence of people getting sick, taking pills, and then getting well is a great example of repeatability.

We're seeing this happening with community management. We're seeing a profession of community management come about. More people than ever before are working professionally as community managers; these people are the connection points between communities and companies. When we have repeatable community experiences, we want others to be able to repeat these community developments!

Jono underscored the key lesson in his keynote: We're at the beginning of this community renaissance. We're going to see a repeatable body of knowledge that will allow us to push communities forward.




July 25 2011

OSCON subcultures

After a weekend at the Community Leadership Summit--check out the great session notes online--I have to leap into the largest and most diverse of O'Reilly's conferences. These are great at bringing together different types, and much of the excitement and pleasure comes from the banter among people with very divergent experiences in computing. Here's a typical bar conversation.

Hacker: We stopped testing our code long ago. Instead we have a front-end wrapper that automatically compensates for syntax errors, routing problems, and sluggish performance.

Entrepreneur: If your development process for the front-end wrapper is just as casual as it is for your applications, I can't see how you can control its operation.

Hacker: Doesn't matter. Our web visitors are accustomed to having something new and different every time they come.

Veteran: You kids abuse all the protections we've built into the stack over the years. I've been working on a protocol that would require visitors to popular web sites to sign up for a multicast group. That way, servers could reduce bandwidth by using MPLS.

Hacker: I'll do you one better. When visitors first hit one of our pages, we use JavaScript to download a peer-to-peer call-back that transmits our content directly onto all their friends' systems.

Entrepreneur: Won't that annoy mobile phone users by maxing out their data plans?

Hacker: Sure it will. Our ultimate goal is to break the cell phone providers' business model.

Veteran: You don't have to consider the infrastructure; when we were young, I did. I was in grad school before the DNS was invented. Students had to spend hours memorizing the IP addresses of federal agencies.

Entrepreneur: I want my system to get only the content from the web pages I visit and their affiliates. That may be old-fashioned, but anything else plays havoc with SEO.

Veteran: I still remember the address of the mail server at the Defense Intelligence Agency--

Hacker: Hey! Have you been taking notes on what I've been saying?

Community Leadership Summit was wonderful, as always. More than a hundred people, most connected to the computer field in some way but everybody interested in how people tick, spent an intense two days and evening together. About half the attendees were women. We went over all the issues that these summits conventionally cover--how to engage users, how to deal with disruptive people, how to handle forks in source code--and held a lot of fun, oddball sessions too. I got to lead a pretty popular session comparing Saul Alinsky-style community organizing with techniques used now to make online social networks effective.

Much of the content can be found in blogs and other places, notably (marketing plug coming up) The Art of Community, written by Jono Bacon, the chief organizer of CLS. So the process of sitting and engaging is at least as important as the ideas and facts we exchange. CLS unconferences are already being held outside OSCON, so there will be chances for more and more people to flex their muscles as organizers of communities.

July 22 2011

Visualization of the Week: Mobile data redraws the map

New technologies may blur boundaries, but in many ways geography still dominates how our relationships are formed. Researchers at AT&T Labs Research, IBM Research, and the MIT SENSEable City Laboratory have analyzed mobile phone and SMS data to demonstrate how place continues to play an important role in our relationships.

Here's how the "The Connected States of America" project put mobile data to use:

Using millions of anonymized records of cell phone data, researchers were able to map the communities that people form themselves through personal interactions. The cell phone data included both calls and texts and was collected over a single month from residential and business users.

The analysis and accompanying visualization highlight communities that are regional but stretch beyond official borders, across county and state lines. Some of the findings:

  • California, Illinois, and New Jersey are split on a north-south basis.
  • Pennsylvania is divided east-west.
  • Some communities merge several states: Louisiana-Mississippi, Alabama-Georgia, New England.
  • Some communication communities, such as Texas, actually match state borders quite closely.
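To make the idea concrete, here is a minimal sketch of one simple way such communities can be derived from interaction data: keep only the strong ties between regions and treat the connected clumps that remain as communities. The region names and counts below are invented, and the project's actual method, applied to millions of records, is certainly more sophisticated (modularity-based partitioning of a weighted graph is a common choice).

```python
from collections import defaultdict

# Hypothetical stand-in for anonymized call/text records:
# (region_a, region_b, interaction_count). Names are illustrative only.
records = [
    ("north_CA", "south_CA", 15),      # weak tie: the in-state split
    ("north_CA", "sacramento", 110),
    ("south_CA", "san_diego", 130),
    ("louisiana", "mississippi", 90),  # strong tie across a state border
]

def communities(records, min_weight=50):
    """Group regions by keeping only strong ties and taking
    connected components of the graph that remains."""
    graph = defaultdict(set)
    nodes = set()
    for a, b, w in records:
        nodes.update((a, b))
        if w >= min_weight:          # drop weak ties
            graph[a].add(b)
            graph[b].add(a)
    seen, result = set(), []
    for start in sorted(nodes):
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:                 # depth-first walk over strong ties
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n] - comp)
        seen |= comp
        result.append(comp)
    return result

for comp in communities(records):
    print(sorted(comp))
```

With this toy data, California splits into northern and southern communities while Louisiana and Mississippi merge into one, mirroring the kind of findings listed above.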

The Connected States of America
Click to see the original graphic from "The Connected States of America" project. You can also check out more visuals and an interactive map.

Found a great visualization? Tell us about it

This post is part of an ongoing series exploring visualizations. We're always looking for leads, so please drop a line if there's a visualization you think we should know about.


