May 25 2012

Developer Week in Review: Oracle's big bet fails to pay off

I've been taking the opportunity this week to do some spring office cleaning. Unfortunately, I clean my home office infrequently enough that at a certain point, cleaning it becomes more an exercise in archeology than organization. There's nothing like finding a six-month-old check you never deposited to encourage more frequent cleaning.

The same can be said for code, of course. It's far too easy to let crufty code build up in an application, and then be faced with the mother of all refactoring efforts six months down the road, when your code finally reaches a critical mass of flaky behavior. It's worth the effort to continually refactor and improve your code, assuming you can convince your product management that quality is as important as new features.

Android is almost out of the woods

It wouldn't be a Week in Review without the latest round of Godzilla vs. Gamera, er, Oracle vs. Google. Things aren't looking too sunny for Oracle at the moment, as the jury in the case just threw out all the patent-related claims in the lawsuit. This doesn't leave Oracle with much on its plate, as the case now boils down to the question of whether the Java APIs are copyrightable. That's a matter the jury is deadlocked on.

Like all things legal, this is going to drag on for years as there are appeals and retrials and the like. But for the moment, it appears that Android is out of the woods, at least as far as the use of Java is concerned. Of course, there are still all those pesky International Trade Commission issues keeping many Android handsets waiting at the border, but that's a battle for another day ...

Scripters of the world, rejoice!

For Perl developers, a point release of the language is a major event, as it only occurs roughly once a year. This year's edition has just been released, and Perl 5.16 packs a ton of improvements (nearly 600,000 lines' worth!).

Since Perl is such a mature language, most of the changes are incremental. Probably the most significant are further enhancements to Unicode support. Nonetheless, there should be something useful for the serious Perl developer.

FreeBSD bids GCC farewell

As the licensing on the GCC compiler has become increasingly restrictive, some of us have been wondering when the fallout would start. Wait no longer: The FreeBSD team has ditched GCC for the more BSD-friendly licensing of Clang.

GCC has spent decades as the compiler of choice for just about everything, but recent changes in the GPL have made it less attractive to use, especially in commercial development. With the Apple-sponsored Clang compiler now seen as a viable (and perhaps even superior) alternative, with a much less restrictive license, the Free Software Foundation may need to decide if it would rather stand on principle, or avoid becoming marginalized.

Got news?

Please send tips and leads here.

OSCON 2012 — Join the world's open source pioneers, builders, and innovators July 16-20 in Portland, Oregon. Learn about open development, challenge your assumptions, and fire up your brain.

Save 20% on registration with the code RADAR20


May 04 2012

Developer Week in Review: Are APIs intellectual property?

After a brief hiatus due to my annual spring head cold, welcome back to your weekly dose of all things programming. Last week, I was attending the Genomes, Environments and Traits conference (I'm a participant in the Personal Genome Project), when I got notified that WWDC registration had opened up. I ended up having to type in my credit card information on my iPhone while listening to the project organizers discuss what they were doing with the saliva I had sent them. The conference itself was very interesting (although I was coming down with the aforementioned cold, so I wasn't at the top of my game). The cost to sequence a genome is plummeting — it's approaching $1,000 a pop — and it has the potential to totally revolutionize how we think about health care.

It's also an interesting example of big data, but not how we normally think about it. An individual genome isn't all that big in the scheme of things (it's about 3GB uncompressed per genome), but there are huge computational challenges involved in relating individual variations in the genome to phenotype variations (in other words, figuring out what variations are responsible for traits or diseases).

While all the West Coast developers who slept through the WWDC registration period lick their wounds, here's the rest of the news.

APIs are copyrightable, unless they aren't?

These days, I feel like you need to consider a minor in law to go with your computer science degree. In the latest news from the front, we have conflicting opinions regarding the status of APIs. On the one hand, the judge in the Oracle versus Google lawsuit has instructed the jury they should assume that APIs are copyrightable. As the linked article discusses, this could have ominous implications for any third-party re-implementation of a programming language or other software that is not open source.

Over in Europe, however, a new ruling has stated that programming languages and computer functionality are not copyrightable. So, depending on which side of the ocean you live on, APIs are either open season, or off limits. No word yet as to the legal status of APIs on the Falkland Islands ...

Fluent Conference: JavaScript & Beyond — Explore the changing worlds of JavaScript & HTML5 at the O'Reilly Fluent Conference (May 29 - 31 in San Francisco, Calif.).

Save 20% on registration with the code RADAR20

Code to make your head hurt.

For those of you who like to celebrate the perversities of life, it's hard to beat the International Obfuscated C Code Contest, which just released its 2011 winners. For your viewing pleasure, we have programs that compute pi, chart histograms, and even judge other programs for obfuscation, all written in a manner that will have code reviewers running to the toilet with terminal bouts of nausea.

And speaking of C ...

We tend to focus a lot of attention on emerging languages, partially because many of them have novel features, and partially because the grass is always greener in a different language. It's instructive to step back sometimes and take a look at what people are actually using. The latest TIOBE Programming Community Index, which gauges the relative popularity of the various programming languages, has a new top dog, and it's our old friend C. In fact, when you factor in C#, C++ and Objective-C, C-related languages pretty much own the category. Java has now fallen to the second position, and you have to go all the way down to sixth place to find a scripting language, PHP.

Notably, all the hot new languages, like Erlang and Scala, don't even make the top 20, and you only need half a percentage point to get into that list. As much as we like the new darlings on the block, the old veterans are still where most of the action (and money) is.

Got news?

Please send tips and leads here.


April 24 2012

Fair use: A narrow, subjective, complicated safe haven for free speech

Questions of fair use continue to arise, with the most recent fair use judgment coming from a U.S. federal court in Nevada, which ruled that excerpting copyrighted materials — up to 10% of the original work in the case of a newspaper story — is fair use.

I reached out to Miles Feldman, co-chair of the litigation and intellectual property departments at the Los Angeles-based law firm of Raines Feldman LLP, for more on the topic of fair use. In the following interview, Feldman takes a look at factors courts consider, offers a few guidelines and best practices to follow, and highlights some fundamental problems with Creative Commons Licensing.

How is "fair use" defined and what is its legal purpose?

Miles Feldman: Basically, the fair use doctrine creates a narrow safe haven for authors to quote, comment on, or parody copyrighted material. It was built into our copyright laws to protect freedom of speech and our First Amendment rights.

Does the breadth of the fair use guidelines cause confusion?

Miles Feldman: There are four factors courts look at to determine if fair use applies:

  1. The nature of the work used
  2. The nature of the new work
  3. The amount of the original work used in the new work
  4. The effect on the market for the original work

These factors do involve some subjectivity. Uses that are more likely to be found to be fair are those that do not displace sales of the original work. For example, a parody or commentary of a book will not displace a sale of the original work. Where this gets much muddier is when authors create new works based on the original, claiming the new work is a parody of the original work. A notable example is the work "The Wind Done Gone," by Alice Randall, which was a reworking of the well-known "Gone with the Wind," by Margaret Mitchell, telling the story from the perspective of the slaves. "The Wind Done Gone" was held to be a fair use because it commented on the original and would not displace a purchase of the original. On the other hand, creating a sequel to a copyrighted book would not be fair use because one of the rights that an author has is to create derivative works based on their works of authorship.

What are some best practices people should follow to stay within the guidelines?

Miles Feldman: First, quote only as much as is needed to comment. Because the amount of earlier work that is used is a factor in the analysis, the less of that work that is incorporated into the subsequent work, the better. Further, authors should ask themselves if the use will have an impact on the market for the earlier work by displacing a sale. Of course, the safest course of action, though not always practical, is to obtain a license before using any material that is protected by copyright.

The future of publishing has a busy schedule.
Stay up to date with Tools of Change for Publishing events, publications, research and resources. Visit us at oreilly.com/toc.

What are the most common fair use abuses?

Miles Feldman: The biggest would be using more of the copyrighted work than was necessary. For example, if you are going to report on the death of Michael Jackson, some footage of the star may be included in the story and would likely be considered fair use. However, if you broadcast the entire "Thriller" video in memoriam, that would likely be deemed infringement and the fair use defense would not apply.

What kinds of content aren't protected by copyright or subject to fair use?

Miles Feldman: Content that is older than 200 years. Such works are now likely in the public domain. Moreover, ideas are not protected by copyright. Rather, the expression of an idea is protected. Therefore, anything that is merely idea and not expression is not subject to fair use, but is free to be used nonetheless. In the same vein, stock literary devices that are common in most books are not protected by copyright. In addition, titles to literary works are not copyrightable.

How would someone know if something is in the public domain or not?

Miles Feldman: The first step is to determine when the work was initially created and when it was first published. If the work was published after 1923, it is very difficult to determine whether it is in the public domain or whether it is still protected by copyright. Recently, the Supreme Court made it even more difficult to determine whether works are in the public domain when it held that the copyrights on foreign works that had fallen into the public domain could be restored.

However, there are some good research tools to determine the status of works. For motion pictures, a good place to start is with a search of the United States Copyright Office database. For music compositions and sound recordings, ASCAP and BMI maintain records of the music publisher for a given work. The music publisher will list the work's original author and date of publication. Once an author who wishes to use the work determines when the work was originally published, he or she can determine whether the work is still protected. Finally, the Library of Congress can be a good research tool for determining when works were originally published.

What's your take on Creative Commons licensing?

Miles Feldman: Creative Commons was founded in 2001 as a clearinghouse for copyright licensing. Although widely regarded as a valiant effort, there are many critiques of the Creative Commons platform. First, there are many variations in the types of licenses and permissions granted by the website. Second, if an author chooses to allow his or her work to be licensed through Creative Commons and then realizes that the work has gained popularity and could be more profitable through traditional licensing channels, the author may remove it from Creative Commons, leaving those who relied on its licenses uncertain as to their rights. No court has ruled on whether an author who uses Creative Commons can reclaim the full bundle of rights after others have licensed the work.

This interview was edited and condensed.



February 10 2012

Developer Week in Review: A pause to consider patents

This week, as I do occasionally, I want to focus on one specific topic.

For regular readers, the topic of technology innovation and patents is nothing new; it's a problem that is frequently covered in this space. But this week, there were two important occurrences in the world of intellectual property that highlight just what a mess we've gotten ourselves into.

The first is an unexpected turn of events down in scenic Tyler, Texas, home of the world's most plaintiff-friendly patent infringement juries. How friendly? Well, biased enough that Eolas relocated its corporate HQ to Tyler just to be close to the courts. Eolas, as you may recall, is the pesky gadfly that's been buzzing around giants such as Microsoft for years, claiming broad patents over, well, the entire Internet. Rather than continuing a costly court battle it might lose in the end, the House of Redmond settled, and a host of other high-tech cash cows followed suit.

As Eolas continued to threaten to sue the pants off everyone, a ragtag group of plucky companies like Adobe Systems, Google, Yahoo, Apple, eBay and Amazon.com said enough is enough. And this week in Tyler, following testimony by luminaries such as Sir Tim Berners-Lee, a jury agreed, invalidating the infamous '906 patent.

You'd think that this would make Google, one of the main defendants, a big hero and confirm its status as Not Evil. But in the very same week, Google refused to budge on its licensing requirements for patents it acquired from Motorola, patents that are required for any company that wants to play in the 3G cell phone space.

When a standard is adopted by governmental bodies (such as the FCC) or standards-setting bodies like IEEE, it should ideally be free of any intellectual property restraints. After all, that's the purpose of a standard: to provide a common framework that competing companies can use to produce interoperable products. Standards such as GSM and CDMA are why you can use your iPhone in Europe (if you're rich).

The problem is, most modern standards come with a bundle of patents attached to them. In the 3G space, Google (through the Motorola acquisition) and Samsung own a lot of them. As part of the standard-making process, these companies are supposed to agree to offer use of the patents under Fair, Reasonable and Non-Discriminatory (FRAND) license terms. The idea is that all companies using the standard pay the same license fees to the patent holders, so no one gets an advantage. The problem is, who decides what is Fair and Reasonable?

This is especially pernicious when the company licensing the patent is also a competitor in the space. Obviously, Samsung doesn't pay itself a license fee to use its patent, so it doesn't matter how expensive it makes the fee, as long as Samsung doesn't incur the wrath of the standard-setting body. In the case of Motorola/Google, the license fee is set at 2.25% of the total selling price of the phone (which would come to around $13.50 on a $600 iPhone). Apple, et al., are screaming to the moon that that kind of licensing is not in the spirit of FRAND, but it's up to groups such as the European standards body, ETSI, to determine if the patent holders are really playing fair.

Of course, Google has fallen victim to the same issues. Although Google doesn't pay the piper directly, phone manufacturers using Android reportedly end up paying $5 per phone to Microsoft to avoid patent issues. It's worth noting, however, that at least Microsoft is using software-related patents that it claims Android infringes, not patents directly related to the underlying standards used by the phone.

There's a simple solution to this problem, of course, which is not to allow patent-encumbered technologies to become standards. The software world has (mostly) been free of this kind of nonsense, and it's a good thing. Can you imagine having to pay a license fee to use SOAP, or AJAX? The worrisome thing is that this could become the model used for software patents, and it would basically kill smaller companies trying to be innovative.

Oh, and before you count Eolas out of the game, remember that this is just a single trial it lost. It can try again with another jury and another set of companies. Unless the USPTO invalidates the underlying patent, Eolas is still out there, waiting to strike.

Strata 2012 — The 2012 Strata Conference, being held Feb. 28-March 1 in Santa Clara, Calif., will offer three full days of hands-on data training and information-rich sessions. Strata brings together the people, tools, and technologies you need to make data work.

Save 20% on registration with the code RADAR20

Got news?

Please send tips and leads here.


Jury to Eolas: Nobody owns the interactive web

As Joe Mullin reported at Wired earlier tonight, a Texas jury has struck down a company's claim to own the interactive web. The decision in this case comes after more than a decade of legal wrangling that has drawn in some of the biggest technology companies and retailers in the world. As Timothy Lee observed at Ars Technica, Eolas, "a patent troll that has been shaking down technology companies for the better part of a decade, now faces the prospect of losing the patent."

It's a rare reversal for two software patents (7,599,985 and 5,838,906) that shouldn't have been granted in the first place. It's also an important victory for the open Internet.

As a result of the decision, the eight companies that were resisting the patent lawsuits won't have to pay anything to Eolas. If Google, YouTube, Yahoo, Amazon, Adobe, JC Penney, CDW Corp., and Staples had lost the patent infringement suit, they would have been subject to more than $600 million in damages.

The Eolas patent case represents one of the most infamous claims to ownership of the commons that grew up in universities, garages and labs in the early 1990s.

Here's a quick summary of the history: the '906 patent was applied for in 1994 and granted to Eolas in 1998. Eolas sued Microsoft in 1999. Microsoft lost that trial and settled with Eolas. The World Wide Web Consortium (W3C) and Microsoft both petitioned the U.S. Patent Office to reconsider the patent. The Patent Office upheld it, both times.

The Eolas patent covers an "embedded application" in a browser, a broad description of a function that was typical of client-server systems of the time. The patent was then used by Eolas founder Michael Doyle to make a broad claim about the invention of interactivity on the web, based upon a medical imaging application that enabled a user to manipulate images in a web browser with computation occurring in the background on a server.

The case appears to have turned on the demonstration of prior art by the defense. Pei-Yuan Wei, then a computer science student at the University of California at Berkeley, testified during the trial that he had conceived of interactive web features as early as 1991, work that led to the creation of the Viola web browser. Viola, first released in April 1992, was the first web browser with inline graphics, scripting, tables and a stylesheet. The browser was in development at O'Reilly in 1992-1994. Another UC Berkeley student, Scott Silvey, testified that he had demonstrated such features to engineers at Sun Microsystems in 1993.

That testimony, combined with that of web pioneers like Eric Bina, a co-founder of Netscape; Dave Raggett, who invented the HTML "embed" tag; and Tim Berners-Lee, the inventor of the World Wide Web, was enough to convince the jury.

"It was ahead of its time," testified Berners-Lee. "The things Pei was doing would later be done in Java."

One interesting detail that emerged in the case was that the U.S. Patent Office didn't have access to the Internet in 1994 and was apparently forbidden from going on the Internet in 1997, which would make research into prior art in cyberspace somewhat of a challenge.

Patent trolls continue to be a major issue for software companies and the technology industry as a whole in 2012, as the "This American Life" episode "When Patents Attack!" effectively communicated.

As Mike Masnick points out at TechDirt, while today was an important victory for the networked commons and civil society, Eolas still has a lot of settlement money in hand to pursue an appeal.

That said, the jury's decision to invalidate Eolas' claims of ownership regarding the basic technology that enables access to the interactive web means the company won't be suing anyone for a while.

Here's to the Open Web.


July 18 2011

Intellectual property gone mad

Friday night, I tweeted a link to a Guardian article stating that app developers were withdrawing apps from Apple's app store and Google's Android market (and presumably also Amazon's app store), because they feared becoming victims of a patent trolling lawsuit. That tweet elicited some interesting responses that I'd like to discuss.

The insurance solution?

One option might be to rely on the insurance industry to solve the problem. "Isn't this what insurance is supposed to be for? Couldn't all these developers set up a fund for their common defense?" wrote @qckbrnfx. An interesting idea, and one I've considered. But that's a cure that seems worse than the disease. First, it's not likely to be a cure. How many insurance companies actually defend their clients against an unreasonable lawsuit? They typically don't. They settle out of court and your insurance premium goes up.


If you look at medical malpractice insurance, where unfounded malpractice claims are the equivalent of trolling, I would bet that the willingness of insurance companies to settle out of court increases trolling. An insurance solution to the problem of trolling would be, effectively, a tax on software developers. And we would soon be in a situation where insurance companies were specifying who could develop software (after a couple of malpractice cases, a doctor becomes uninsurable, and he's effectively out of the business, regardless of the merits of those cases), what software they could develop, and so on. Percy Shelley once said that "poets are the unacknowledged legislators of the world." But my more cynical variation is that the insurance companies are the world's unacknowledged legislators. I don't want to see the software industry dancing to the insurance industry's tune. Some fear big government. I fear big insurance much more.

Fighting back?

There's a variant of the insurance solution that I like: @patentbuzz said: "Developers need to unite and crowdfund re-exam of obnoxious troll patents. Teach them a lesson." This isn't "insurance" in the classic risk-spreading sense: this is going on the offensive, and pooling funds to defend against trolling. I do not think it would take a lot of effort to make trolling (at least, the sort of low-level trolling that we're looking at here) unprofitable, and as soon as it becomes unprofitable, it will stop. Small-time app developers can't afford lawyers, which is precisely why trolling is so dangerous. But here's the secret: most patent trolls can't afford lawyers, either. They can afford enough lawyering to write a few cease and desist letters, and to settle out of court, but their funds would be exhausted fairly quickly if even a small percentage of their victims tried to fight back.


This is precisely where the big players need to get into the game. Apple has tried to give their app developers some legal cover, but as far as I know, they have not stepped in to pay for anyone's defense. Neither has Google. It's time for Apple and Google to step up to the plate. I am willing to bet that, if Apple or Google set up a defense fund, trolling would stop really quickly.

Blocking sale of patents?

A large part of the patent problem is that patents are transferable. @_philjohn asks "Do you think changing law to prevent transfer of patents could reduce the patent troll problem?" On one level, this is an attractive solution. But I'm wary: not about patent reform in itself (which is absolutely necessary), but because I've worked for a startup that went out of business. They had a small intellectual property portfolio, and the sale of that portfolio paid for my (substantial) unused vacation time. That's not how things are supposed to happen, but when startups go out of business, they don't always shut down nicely. It's worth asking what the cost would be if patents and other kinds of intellectual property were non-transferable. Would venture capitalists be less likely to invest, would startups fail sooner, if it were impossible to sell intellectual property assets? I suspect not, but it isn't a simple question.


A call to action

Patent and copyright law in the U.S. derives from the Constitution, and it's for a specific purpose: "To promote the progress of science and useful arts" (Article I, section 8). If app developers are being driven out of the U.S. market by patent trolling, patent law is failing in its constitutional goal; indeed, it's forcing "science and the useful arts" to take place elsewhere. That's a problem that needs to be addressed, particularly at a time when the software industry is one of the few thriving areas of the U.S. economy, and when startups (and in my book, that includes independent developers) drive most of the potential for job growth in the economy.

I don't see any relief coming from the patent system as it currently exists. The bigger question is whether software should be patentable at all. As Nat Torkington (@gnat) has reported, New Zealand's Parliament has a bill before it that will ban software patents, despite the lobbying of software giants in the U.S. and elsewhere. Still, at this point, significant changes to U.S. patent law belong in the realm of pleasant fantasy. Much as I would like to see it happen, I can't imagine Congress standing up to an onslaught of lobbyists paid by some of the largest corporations in the U.S.

One dimension of the problem is relatively simple: too many patent applications, too few patent office staff reviewing those applications, and not enough technical expertise on that staff to evaluate the applications properly. It doesn't help that patents are typically written to be as vague and broad as possible, without being completely meaningless. (As the staff tech writer at that startup, I had a hand in reviewing some of my former employer's patent applications.) So you frequently can't tell what was actually patented, and an alleged "infringement" can take place that had little to do with the original invention. Tim O'Reilly (@timoreilly) suggested a return to the days when a patent application had to include the actual invention (for software, that could mean source code) being patented. This would reduce much of the ambiguity in what was actually patented, and might prevent some kinds of abuse. Whatever form it takes, better scrutiny on the part of the patent office would be a big help. But is that conceivable in these days of government spending cuts and debt ceilings? Larger filing fees, to support the cost of more rigorous examination, are probably a non-starter, given the current allergy to anything that looks like a "tax." However, inadequate review of patent applications effectively imposes a much larger (and unproductive) tax on the small developers who can least afford it.

If we can't rely on the patent office to do a better job of reviewing patents, the task falls to the Apples and Googles of the world — the deep-pocketed players who rely on small developers — to get into the game and defend their ecosystems. But though that's a nice idea, there are many reasons to believe it will never happen, not the least of which is that the big players are too busy suing each other.

Apple and Google, are you listening? Your communities are at stake. Now's the time to show whether you really care about your developers.

Crowdfunding the defense of small developers may be the best solution for the immediate problem. Is this a viable Kickstarter project? It probably would be the largest project Kickstarter has ever attempted. Would a coalition of patent attorneys be willing to be underpaid while they contribute to the public good? I'd be excited to see such a project start. This could also be a project for the EFF. The EFF has the expertise, they list "innovation" and "fair use" among their causes, and they talk explicitly about trolling on their intellectual property page. But they've typically involved themselves in a smaller number of relatively high-profile cases. Are they willing to step in on a larger (or smaller, as the case may be) scale?

None of these solutions addresses the larger problems with patents and other forms of intellectual property, but perhaps we're better off with baby steps. Even the baby steps aren't simple, but it's time to start taking them.

Android Open, being held October 9-11 in San Francisco, is a big-tent meeting ground for app and game developers, carriers, chip manufacturers, content creators, OEMs, researchers, entrepreneurs, VCs, and business leaders.

Save 20% on registration with the code AN11RAD





    June 22 2011

    Developer Week in Review: Start your lawyers!

    Summer is here, so it's time to hit the beach and soak up some sun. You know, sun? That bright yellow ball that blinds you whenever you go out for Doritos and Mountain Dew in the middle of a 48-hour hackathon? I'm told it's actually quite pleasant to be around, once you get acclimated to it. Still, probably better to stay inside, avoid the evil day star, and see what's been happening in the World of Geek this week.

    Get your lawsuits

    In the latest chapter of "As the Smartphone Turns," Samsung has accused Apple of fathering an illegitimate child with it when Samsung had amnesia, gotten as a result of being hit on the head by an old Motorola bag-phone while trying to save RIM from ending up destitute on the street.

    Not really, but the realities of Samsung v. Apple are almost as bizarre. This week, a US district judge told Samsung that, no, you don't get to see previews of the iPad 3 and iPhone 5. This comes as Samsung continues to be Apple's largest supplier of semiconductor technologies. There must be some awesome screens set up to let Apple shovel money into Samsung's bank account while at the same time suing them.

    Also in "Intellectual Property Gone Wild" news this week, Oracle is evidently asking for (cue Carl Sagan voice) billiuns and billiuns of dollars as penalties in their Java suit against Google, which means that Google might actually need to clean out the petty cash drawer and make a trip to the bank. And Apple has paid off Nokia to settle a long-running patent suit between the two companies. And BitTorrent came under attack this week when they were sued for violating a "submarine" patent on file distribution granted in 2007. Litigation, the growth sector of the American economy!

    In related news, I got a notice this week that my own trademark application will be approved in three months if no one objects. Watch out world, I'm gonna have some IP soon, and I'm not afraid to use it!

    Please remember to stretch before logging into your PC

    Folks have been hacking the Kinect for a while now, hooking it up to all sorts of esoteric devices that aren't XBoxen (and just what is the group noun for an XBox? A Lanparty of XBoxes?). Now Microsoft has decided to make Kinect hacks officially supported, at least if you run Windows. With the release of the Kinect SDK for Windows, developers can finally make desktop users flail around awkwardly, just like their gaming counterparts.

    With the release of the SDK, Windows hackers will gain access to a powerful vision recognition system, and it will be interesting to see what the first third-party Windows applications to come out will look like. Somehow, I suspect it'll have something to do with porn ...

    OSCON 2011 — Join today's open source innovators, builders, and pioneers July 25-29 as they gather at the Oregon Convention Center in Portland, Ore.

    Save 20% on registration with the code OS11RAD

    Where were you when IPv6 turned on?

    The one-day IPv6 lovefest earlier this month didn't seem to break anything significant, but on the other hand, it didn't seem to do much to promote the adoption of IPv6 either. Unless you happen to be one of the 12 people on the planet whose ISP allocates and routes IPv6, the only way to know that anything had happened at all was if you had an IPv6 tunnel set up with a broker such as Hurricane Electric.

    With the IPv4 space "officially" exhausted, you'd expect there would be more urgency about this issue, but business seems to be proceeding according to the normal human emergency protocol (that's the one where you ignore a problem until it becomes a crisis, then run around like a chicken with its head cut off). In the meanwhile, there are still quite a few active class A subnets lying around, each with 16 million addresses (here's a list). One must wonder how long it will be before pressure starts to be applied to entities such as HP (which owns two!) to start freeing them up for the good of the net.

    Got news?

    Please send tips and leads here.




    June 03 2011

    Should the patent office open its internal guidelines to the public?

    Anyone following policy issues around technological innovation has noticed the power and scope of patents expanding over time. For instance, most people are aware of the Supreme Court's decision to allow the patenting of genes. Computer experts are more concerned about the decisions to patent software. Many forces contribute to the expanding reach of the patent system over time, and to understand them better I recommend a thoughtful, readable summary by law professor Melissa F. Wasserman.

    Wasserman argues that the patent office, the appeals court that reviews its decisions, and even Congress have incentives to keep expanding patents. Her anecdotes strike home and her reasoning is lucid, although of course we lack experimental methods for testing her hypotheses. (That is, we can't prove that patent examiners or courts were biased by looking at statistics.) I think you'll find her article quite readable, with most of the fussy legal language relegated to the footnotes. (I heard about the article thanks to an email from Harvard Law School's Petrie-Flom Center for Health Law Policy, Biotechnology and Bioethics.)

    As a simple example of the bias toward extending patents, consider that nobody ever appeals a patent examiner's decision to grant a patent, but aggrieved applicants often appeal decisions to deny a patent. And defending the decision to deny a patent costs the patent office a lot of money, which it can't make up from fees. Because the appeals court hears of dubious decisions only when a patent is denied, it has no opportunity to say, "Whoa there, stop expanding the patent system."

    But it gets even worse. Wasserman offers several subtle reasons why having a denial reversed hurts the patent office, whereas it hardly ever suffers if a patent is successfully challenged years later.

    One of the most interesting observations in the paper--which Wasserman makes briefly in passing, on page 14--is that the administrators of the patent office provide guidance to examiners in a number of internal memos that are never exposed to the public. Here is a cause for open government advocates: show us the memos that contain criteria for approving or denying patents!

    Wasserman is not unsympathetic to the patent office. On the contrary, she raises the question above the usual cries of "poor, overworked examiners" or "corporate-friendly, biased judges" and finds systemic reasons for today's patent bloat. Her suggested remedies range from making it easier to challenge a patent right at the start to overhauling the funding of the patent office so it gets the support it needs both for approving and denying patents.

    June 02 2011

    Developer Week in Review: The other shoe drops on iOS developers

    Bags packed? Check! Ticket printed? Check! "I (Heart) Steve" T-shirt worn? Check! Yes, it's that time of year, when the swallows return to Capistrano, er, the developers return to San Francisco for WWDC. I'll be there Sunday to Saturday, so keep an eye out for me and maybe we can get a beer or something.

    But even as we await the release of Lion, iOS 5 and iCloud, the world continues to turn.

    Well, so much for Apple's big umbrella

    Last week, iOS developers everywhere breathed a sigh of relief as Apple stepped up to the plate, and said that they considered their developer community to be covered under Apple's existing licensing agreement with patent holding company Lodsys. Lodsys, evidently, had a difference of opinion on the subject. This leaves the lucky seven developers who got hit with the first round of lawsuits with an interesting choice. Do they settle with Lodsys, perhaps paying out many times what they have brought in as income from their apps, or do they fight and face expensive legal fees and a lawsuit that could drag on for years?

    Android developers shouldn't gloat too much at the misfortune of their iPhone counterparts, since Lodsys is asserting that two of their patents cover Android apps as well. Apple and Google are going to have to take things up another notch, and offer free legal services to their developers, or things could get quite messy, quite fast.

    OSCON 2011 — Join today's open source innovators, builders, and pioneers July 25-29 as they gather at the Oregon Convention Center in Portland, Ore.

    Save 20% on registration with the code OS11RAD

    OpenOffice finds a home at Apache

    Oracle, as part of their ongoing shedding of all of their Sun acquisitions, had promised earlier in the year that OpenOffice would be given to some third party at some point. Well, that third party is Apache. Oracle will be donating the source code to Apache, where it will become an incubator project. For developers who have been interested in poking around with the guts of OpenOffice (or extending its functionality), but were leery of Oracle holding the strings, this announcement should eliminate any doubts. Statements from The Document Foundation (which split off a fork of OpenOffice) were guarded, but it seems like there's hope of reuniting the code streams, and avoiding yet another case of parallel development of the same "product."

    Java rant of the week: Interface madness

    As I am wont to do from time to time, I'd like to take a moment today to rant about a coding abuse that I see more and more frequently. That abuse would be the indiscriminate use of interfaces in front of implementing classes, usually with a factory. There are certainly places where the interface/factory pattern makes sense, such as when you genuinely do have multiple implementations of something that you want to be able to swap out easily.

    However, far too often, I see factories and interfaces used between classes simply because "we might" want to someday put something else in there. I recently saw an implementation of a servlet that called for authentication of the request. There's only one implemented version of the authentication code, and no real plans to make another. But still, there were Foo and FooImpl files sitting right there (there was probably a FooFactory somewhere, I didn't go looking ...)

    Unneeded interfaces are not only wasted code, they make reading and debugging the code much more difficult, because they break the link between the call and the implementation. The only way to find the implementing code is to look for the factory, and see what class is being provisioned to implement the interface. If you're really lucky, the factory gets the class name from a property file, so you have to look another level down.
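
    To make the shape of the problem concrete, here's a minimal, hypothetical sketch of the pattern described above (Foo, FooImpl and FooFactory are the placeholder names from the rant, plus an invented servlet wrapper, not code from any real project), along with the leaner alternative:

        // The interface, its lone implementation, and the factory that hides it.
        interface Foo {
            boolean authenticate(String user, String password);
        }

        class FooImpl implements Foo {
            @Override
            public boolean authenticate(String user, String password) {
                // The one and only authentication routine lives here.
                return user != null && !user.isEmpty();
            }
        }

        class FooFactory {
            static Foo newFoo() {
                // A reader chasing a bug has to come here (or to a property
                // file) just to learn that Foo always means FooImpl.
                return new FooImpl();
            }
        }

        class AuthServlet {
            void handle(String user, String password) {
                Foo auth = FooFactory.newFoo(); // the call site no longer points at the code that runs
                if (!auth.authenticate(user, password)) {
                    throw new SecurityException("authentication failed");
                }
            }
        }

        // The leaner alternative: use the single implementation directly, and
        // extract an interface later if a second implementation ever shows up.
        class AuthServletDirect {
            private final FooImpl auth = new FooImpl();

            void handle(String user, String password) {
                if (!auth.authenticate(user, password)) {
                    throw new SecurityException("authentication failed");
                }
            }
        }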

    There's no excuse for doing this. It's anti-agile, and the refactoring cost once you genuinely do have a second version, and need an interface, is relatively low. End of rant.

    Got news?

    Please send tips and leads here.



    May 18 2011

    Developer Week in Review: Buying a lawsuit with an in-app purchase

    Hello, and welcome to another fun-filled week of frolic and mayhem in the software industry. We'll get right to the news, but first this short commercial message.

    Do you suffer from the heartbreak of buffer overruns? Has your SQL been injected? Do you stay awake at night because of cross-site scripting attacks? If so, try new Hackitol Plus, now available in convenient 8-hour strength. Don't let poorly secured applications keep you from leading the life you want to have. Note: Side effects may include nausea, heart palpitations, and the inability to use Flash or Facebook. Consult your doctor if you are currently developing in JavaScript.

    And now, back to our program.

    iPhone developers ask for whom the suit trolls

    The three-ring circus that is software intellectual property continued to roll right along last week, with a group of iPhone app developers the latest to feel the sting. Lodsys sent legal nasty-grams to a number of developers who were taking advantage of the evidently patented idea of doing in-app purchases. This has apparently led Apple to put some new iPhone apps, which use the feature, on hold.

    Interestingly, Lodsys claims that Apple, among others (including Microsoft and Google), already licenses the patent, but that the license doesn't extend to developers using Apple's in-app purchase function. That's going to be an interesting argument to watch play out. Does that mean if Apple licensed a technology to render an iOS control, and developers used that control in their applications, they'd need to get a license as well?

    Apart from being a headache for both Apple and the developer community, there could be other far-reaching ramifications. For example, would Steam's in-game purchasing of weapons and clothing be subject to the same patent? Until Congress or the courts step in and stop the madness, it's anyone's guess.

    OSCON 2011 — Join today's open source innovators, builders, and pioneers July 25-29 as they gather at the Oregon Convention Center in Portland, Ore.

    Save 20% on registration with the code OS11RAD

    Mono strikes out on its own

    As previously reported, Novell's new overlords (that would be Attachmate, which still sounds like some kind of "As Seen On TV" product to me) gave the Mono developers their walking papers last week. Now Mono guru Miguel de Icaza has formed a new company to pick up the pieces. The company, called Xamarin (which sounds like a prescription sleeping aid to me), will offer commercial Mono support, as well as .NET tools for Android and iOS.

    Knit One, perl 5.14

    Perl 6 may be languishing out there with "Duke Nukem Forever," but there's still new perl to be had. This week, perl 5.14 hit the streets. Improved Unicode support seems to be a major thrust of the release (click here for all the gripping details.)

    For those of us who grew up (professionally, at least) with perl in our toolbag, it's good to see continued active development on the language. While I may not pull that particular tool out as often as I used to, I still find myself writing the occasional script to grovel over a file and pull out the golden nuggets I need.

    Got news?

    Please send tips and leads here.




    March 02 2011

    Software patents, prior art, and revelations of the Peer to Patent review

    A report from the Peer to Patent initiative shows that the project
    is having salutary effects on the patent system. Besides the greater
    openness that Peer to Patent promotes in evaluating individual
    patent applications, it is creating a new transparency and
    understanding of the functioning of the patent system as a whole.
    I'll give some background to help readers understand the
    significance of Manny Schecter's newsletter item, which concerns
    prior art that exists outside of patents. I'll add my own comments
    about software patents.


    Let's remind ourselves of the basic rule of patenting: no one should
    get a patent for something that was done before by someone else. Even
    if you never knew that some guy on the other side of the world thought
    of adding a new screw to a boiler, you can't get a patent on
    the idea of adding a screw in that place for that purpose. The other
    guy's work is called prior art, and such prior art can be
    found in all kinds of places: marketing brochures, academic journals,
    or actual objects that operate currently or operated any time in the
    past. For software (which is of particular interest to most readers
    of this blog), prior art could well be source code.

    Now for the big lapse at the U.S. Patent Office: they rarely look for
    prior art out in the real world. They mostly check previously granted
    U.S. patents--a pretty narrow view of technology. And that has
    seriously harmed software patenting.

    Software was considered a form of thinking rather than a process or
    machine up until the early 1980s, and therefore unpatentable. Patents
    started to be granted on software in the United States in the early
    1980s and took off in a big way in the 1990s. (A useful history has
    been put up by Bitlaw.) This sudden turn meant that patent
    examiners were suddenly asked to evaluate applications in a field
    where there were no patents previously. So of course they couldn't
    find prior art. It would have been quixotic in any case to expect
    examiners--allowed less than 20 hours per patent--to learn a new field
    of software and go out among the millions of lines of code to search
    for examples of what they were being asked to grant patents for.

    In many parts of the world, software is still considered unsuitable
    for patenting, but it's worth noting that the European Union has been
    handing out patents on software without acknowledging them as such,
    because a hard-fought battle by free software advocates has kept
    software officially unpatentable.

    In the U.S., patents have been handed out right and left for two
    decades now, so the prior art does exist within patents on software.
    But even that makes things worse. First, the bad patents handed out
    over the initial decades continue to weigh down software with
    lawsuits that lack merit. Second, the precedent of so many unmerited
    patents gives examiners the impression that it's OK to grant patents
    on the same kinds of overly broad and obvious topics now.

    Now to Schecter's article. He says the patent office has long
    acknowledged that they look mostly to patents for prior art, but they
    won't admit that this is a problem. One has to prove to them that
    there is important prior art out in the field, and that this prior art
    can actually lead to the denial of applications.

    And Peer to Patent has accomplished that. From Schecter:

    Approximately 20% of patent applications in the pilot were rejected in
    view of prior art references submitted through Peer To Patent, and
    over half of the references applied by examiners as grounds for those
    rejections were non-patent prior art.

    The discussion over the patent process, which has progressed so
    painfully slowly over many years, now takes a decisive step forward.
    Prior art in the field should be taken into account during the process
    of examining patents. The next question is how.

    Peer to Patent and related efforts such as Article One Partners
    offer a powerful step toward a solution. Much of the tinkering
    proposed in current debates, such as the number of patent examiners,
    the damages awarded for infringement, and so forth (a bill was
    debated in the Senate today, I've heard), will accomplish much less to
    cut down the backlog of 700,000 applications and improve outcomes than
    we could achieve through serious involvement of public input.

    I am not a zealot on the subject of software patents. I've read a lot
    of patent applications and court rulings about patents (see, for
    instance, my analysis of the Bilski decision) and explored the case
    for software patents sympathetically in another article. But I have
    to come down on the side of the position that
    software and business processes, like other areas of pure human
    thought, have no place in the patent system.

    Maybe Rivest, Shamir, and Adleman deserved their famous patent (now
    expired) on public-key cryptography--that was a huge leap of thought
    making a historic change in how computers are used in the world. But
    the modern patents I've seen are nothing like the RSA algorithm. They
    represent cheap patches on tired old practices. Proponents of software
    patents may win their battle in the halls of power, but they have lost
    their argument on the grounds of the patents to which their policy has
    led. Sorry, there's just too much crap out there.

    February 04 2011

    Four short links: 4 February 2011

    1. Access to Knowledge in the Age of Intellectual Property (MIT Press) -- with essays by knowledgeable folks such as Yochai Benkler, Larry Lessig, and Jo Walsh. Available as open access (free) ebook as well as paper. I love it that we can download these proper intellectuals' intellectual property. (via BoingBoing)
    2. AwesomeChartJS -- Apache-licensed Javascript library for charting. (via Hacker News)
    3. Be Open from Day One -- advice from Karl Fogel (author of the excellent Producing Open Source Software, which O'Reilly publishes) for projects that think they may some day be open source: If you're running a government software project and you plan to make it open source eventually, then just make it open source from the beginning. Waiting will only create more work. (via timoreilly on Twitter)
    4. MALLET -- open source (CPL-licensed) Java-based package for statistical natural language processing, document classification, clustering, topic modeling, information extraction, and other machine learning applications to text.


    December 24 2009

    Peer to Patent Australia recruits volunteer prior art searchers

    The Peer to Patent project has already earned its place in history.
    It was explicitly
    cited as inspiration for the open government initiative in the Obama
    administration, which recently released a comprehensive directive
    (available as a PDF)
    covering federal agencies. The founder of the project, law professor
    Beth Noveck, began implementation of the directive as Deputy CTO in
    the US government. But I've been wondering, along with many other
    people, where Peer to Patent itself is going.

    It's encouraging to hear that a new pilot has started in Australia and
    has gathered a small community of volunteer prior art searchers. You
    can check out the official site and its Wikipedia page.
    Because Australia is much smaller in population than the US and sees
    much less patent activity, the pilot is smaller in scope but seems
    to be chugging along nicely.

    The pilot started on December 9 and plans to run for six months,
    offering 40 patents for review in the areas of software and business
    methods (the same ones as the US Peer to Patent project). Among
    participating patent applicants are IBM, General Electric,
    Hewlett-Packard, Yahoo!, CSIRO, and Aristocrat. Right now, 15 patents
    are posted, each has at least one volunteer reviewer, and one boasts
    two suggestions for potential prior art.

    Professor Brian Fitzgerald of the Queensland University of Technology,
    the Project Leader of Peer to Patent Australia, says, "Peer to Patent
    allows people from anywhere to plug into the patent examination
    process and to add what value they can. And from what we have seen in
    the US, it works: examiners are relying on the Peer to Patent prior
    art notifications. Our aim is to help build an international platform
    for the project as well as embed its benefits within the Australian
    patent system. We ask you to join the Australian project and help
    contribute to the development of Peer to Patent on a worldwide basis."

    While the U.S. pilot is undergoing evaluation, Peer to Patent's
    executive director Mark Webbink says, "Signs are good for a potential
    restart of the program some time in 2010. Dave Kappos, the Under
    Secretary of Commerce and Director of the USPTO, has long been a
    supporter of Peer to Patent, and the prior art contributions appear to
    be proving useful. The worldwide economy produced some drag on program
    expansion when the UK Intellectual Property Office delayed its
    anticipated pilot. However, the Japan Patent Office, which previously
    ran its own peer review pilot, now appears interested in expanding its
    program. IP Australia and Queensland University of Technology are to
    be commended for moving on the pilot so quickly." Brian Fitzgerald
    says that China and other Asian countries are watching Japan and
    Australia with interest.

    I have followed Peer to Patent since fairly early drafts of the
    proposal, have written about it frequently, and believe it is both
    viable and necessary. The recent ruling against Microsoft Office shows
    that patents in software, at least, are way out of control. Prior art
    cannot in itself solve a broken system, but a robust examination
    process can at least make applicants think twice about trying to exert
    ownership over routine concepts such as separating a document's markup
    from its content. (That's the purpose of markup in the first place.)
    Incidentally, Australia has its own version of the famous Bilski
    patent case, Grant v Commissioner of Patents.

    In fact, the progress Peer to Patent has made in many countries proves
    my faith in it. Just think about the inertia of government agencies
    and the impenetrability of both the individual patent application and
    the patent process as a whole. Who would imagine, putting all those
    barriers together, that Peer to Patent could have accomplished so much
    already?

    We're not on Internet time here, but on policy time. Peer to Patent is
    still a baby, and with enough care and feeding it can thrive and grow
    strong.

    December 17 2009

    The Best and the Worst Tech of the Decade

    With only a few weeks left until we close out the 'naughts and move into the teens, it's almost obligatory to take a look back at the best and not-so-best of the last decade. With that in mind, I polled the O'Reilly editors, authors, Friends, and a number of industry movers and shakers to gather nominations. I then tossed them in the trash and made up my own. (Kidding: I compiled them together and looked for trends and common threads.) So here then, in no particular order, are the best and the worst that the decade had to offer.

    The Best

    AJAX: It's hard to remember what life was like before Asynchronous JavaScript and XML came along, so I'll prod your memory. It was boring. Web 1.0 consisted of a lot of static web pages, where every mouse click was a round trip to the web server. If you wanted rich content, you had to embed a Java applet in the page, and pray that the client browser supported it.

    Without the advent of AJAX, we wouldn't have Web 2.0, Gmail, or most of the other cloud-based web applications. Flash is still popular, but with HTML5 on the way, even functionality that formerly required an RIA like Flash or Silverlight can now be accomplished with AJAX.
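
    To make the pattern concrete, here is a minimal sketch of a classic AJAX exchange, written in TypeScript for the browser: ask the server for a fragment asynchronously and swap it into a single element, with no full page reload. The "/api/headlines" endpoint and the "headlines" element are hypothetical.

        // Fetch a fragment from the server and swap it into the page
        // without reloading anything else.
        function loadHeadlines(): void {
          const xhr = new XMLHttpRequest();
          xhr.open("GET", "/api/headlines", true); // true = asynchronous
          xhr.onreadystatechange = () => {
            if (xhr.readyState === XMLHttpRequest.DONE && xhr.status === 200) {
              const target = document.getElementById("headlines");
              if (target) {
                target.innerHTML = xhr.responseText; // only this element changes
              }
            }
          };
          xhr.send();
        }

    Much of what felt revolutionary about Gmail and its peers is built from variations on that handful of lines.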

    Twitter: When they first started, blogs were just what the name said, web logs: a journal of interesting web sites that the author had encountered. These days, blogs are more like platforms for rants, opinions, essays, and anything else on the writer's mind. Then along came Twitter. Sure, people like to find out what J-Lo had for dinner, but the real power of the 140-character dynamo is that it has brought about a resurgence of real web logging. The most useful tweets consist of a TinyURL and a little bit of context. Combine that with the use of Twitter to send out real-time notices about everything from breaking news to the current specials at the corner restaurant, and it's easy to see why Twitter has become a dominant player.

    Ubiquitous WiFi: I want you to imagine you're on the road in the mid-90s. You get to your hotel room, and plop your laptop on the table. Then you get out your handy RJ-11 cord, and check to see if the hotel phone has a data jack (most didn't), or if you'll have to unplug the phone entirely. Then you'd look up the local number for your ISP, and have your laptop dial it, so you could suck down your e-mail at an anemic 56K.

    Now, of course, WiFi is everywhere. You may end up having to pay for it, but fast Internet connectivity is available everywhere from your local McDonald's to your hotel room to an airport terminal. This is not without its downsides: unsecured WiFi access points have led to all sorts of security headaches, and using an open access point is a risky proposition unless your antivirus software is up to date. On the whole, though, ubiquitous WiFi has made the world a much more connected place.

    Phones Get Smarter: In the late 90s, we started to see the first personal digital assistants emerge, but this has been the decade when the PDA and the cell phone got married and had a baby called the smartphone. Palm got the ball rolling with the Treos about the same time that Windows Mobile started appearing on phones, and RIM's BlackBerry put functional phones in the hands of business, but it was Apple that took the ball and ran for the touchdown with the iPhone. You can argue whether the Droid is better than the 3GS or the Pre, but the original iPhone was the game-changer that showed what a smartphone really could do, including the business model of the App Store.

    The next convergence is likely to be with Netbooks, as more and more of the mini-laptops come with 3G service integrated in them, and VoIP services such as Skype continue to eat into both landline and cellular business.

    The Maker Culture: There's always been a DIY underground, covering everything from ham radio to photography to model railroading. But the level of cool has taken a noticeable uptick this decade, as cheap digital technology has given DIY a kick in the pants. The Arduino lets anyone embed control capabilities into just about anything you can imagine, amateur PCB fabrication has gone from a messy kitchen-sink operation to a click-and-upload-your-design purchase, and the 3D printer is turning the Star Trek replicator into a reality.

    Manufacturers cringe in fear as enterprising geeks dig out their screwdrivers. The conventional wisdom was that as electronics got more complex, the "no user serviceable parts" mentality would spell the end of consumer experimentation. Instead, the fact that everything is turning into a computer means that you can take a device meant for one thing and reprogram it to do something else. Don't like your digital camera's software? Install your own! Want a Linux server? Turn your DVR into one.

    Meanwhile, shows like Mythbusters and events like Maker Faire have shown that hacking hardware can grab the public's interest, especially if there are explosions involved.

    Open Source Goes Mainstream: Quick! Name five pieces of open source software you might have had on your computer in 1999. Don't worry, I'll wait...

    How about today? Firefox is an easy candidate, as are OpenOffice, Chrome, Audacity, Eclipse (if you're a developer), Blender, VLC, and many others. Many netbooks now ship with Linux as the underlying OS. Open Source has gone from a rebel movement to part of the establishment, and when you combine increasing end-user adoption with the massive amounts of FLOSS you find on the server side, it can be argued that it is the 800-pound gorilla now.

    As Gandhi said, "First they ignore you, then they laugh at you, then they fight you, then you win." When even Microsoft is releasing Open Source code, you know that you're somewhere between the fight and win stages.

    Bountiful Resources: 56K modems, 20MB hard drives, 640K of RAM, 2 MHz processors. You don't have to go far back in time for all of these to represent the state of the art. Now, of course, you would have more than that in a good toaster...

    Moore's Law continues to drive technology innovation at a breakneck pace, and it seems that related technologies like storage capacity and bandwidth are trying to follow the same curve. Consider that AT&T users gripe about the iPhone's 5GB/month bandwidth cap, a limit that would have taken 10 solid days of transferring to achieve with a dialup connection.
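
    That dial-up comparison survives a quick back-of-the-envelope check; the TypeScript sketch below does the arithmetic, with the real-world modem throughput as an assumption.

        // How long would a 5 GB monthly cap take to fill over a 56K modem?
        const capBytes = 5 * 1024 ** 3;                       // 5 GB, in bytes
        const nominalBitsPerSec = 56_000;                     // 56K line rate
        const effectiveBitsPerSec = nominalBitsPerSec * 0.8;  // assumed protocol overhead

        const seconds = (capBytes * 8) / effectiveBitsPerSec;
        console.log((seconds / 86_400).toFixed(1), "days of continuous transfer");
        // About 11 days at the assumed throughput, and almost 9 days even at the
        // full nominal rate -- in line with the "10 solid days" figure above.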

    My iPhone has 3,200 times the storage of the first hard drive I ever owned, and the graphics card on my Mac Pro has 16,000 times the memory of my first computer. We can now do amazing things in the palm of our hands, things that would have seemed like science fiction in 1999.

    The Worst

    SOAP: The software industry has been trying to solve the problem of making different pieces of software talk to each other since the first time there were two programs on a network, and it still hasn't gotten it right. RPC, CORBA, EJB, and now SOAP litter the graveyard of failed protocol stacks.

    SOAP was a particularly egregious failure, because it was sold so heavily as the final solution to the interoperability problem. The catch, of course, was that no two vendors implemented the stack quite the same way, with the result that getting a .NET SOAP client to talk to a Java server could be a nightmare. Add in poorly spec'd components such as web service security, and SOAP became useless in many cases. And the WSDL files that define SOAP endpoints are unreadable and impossible to generate by hand (well, not impossible, but unpleasant in the extreme).

    Is it any wonder that SOAP drove many developers into the waiting arms of more usable data exchange formats such as JSON?
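
    For contrast, here is a minimal sketch of the kind of exchange that lured developers toward JSON: the payload is readable at a glance and parsing is a single call, with no WSDL, no generated stubs, and no vendor-specific envelope to negotiate. It's TypeScript, and the endpoint and field names are hypothetical.

        // A hypothetical quote service that returns plain JSON,
        // e.g. {"symbol": "ORCL", "price": 26.5}
        interface Quote {
          symbol: string;
          price: number;
        }

        async function getQuote(symbol: string): Promise<Quote> {
          const response = await fetch(`/api/quote?symbol=${encodeURIComponent(symbol)}`);
          if (!response.ok) {
            throw new Error(`Request failed with status ${response.status}`);
          }
          return (await response.json()) as Quote; // one call, no generated stubs
        }

    Compare that to hand-assembling a SOAP envelope and hoping the other side's toolkit interprets it the same way.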

    Intellectual Property Wars: How much energy has been wasted this decade by one group of people trying to keep another group from doing something with their intellectual property, or property they claimed as theirs? DMCA takedowns, Sony's rootkit debacle, the RIAA suing grandmothers, SCO, patent trolls, 09F911029D74E35BD84156C5635688C0, Kindles erasing books, deep packet inspection, three-strikes laws, the list goes on and on and on...

    At the end of the day, the movie industry just had its best year ever, Lady Gaga seems to be doing just fine, Miley Cyrus isn't going hungry, and even the big players in the industry are getting sufficiently fed up with the trolls to want patent reform. The iTunes Store is selling a boatload of music in spite of abandoning DRM, so clearly people will continue to pay for music even if they can copy it from a friend.

    Unfortunately, neither the RIAA nor the MPAA is going gently into that good night. If anything, the pressure to create onerous legislation has increased in the past year. Whether this is a last gasp or a retrenchment will only be answered in time.

    The Cult of Scrum: If Agile is the teachings of Jesus, Scrum is every abuse ever perpetrated in his name. In many ways, Scrum as practiced in most companies today is the antithesis of Agile: a heavy, dogmatic methodology that blindly follows a checklist of "best practices" that some consultant convinced management to adopt.

    Endless retrospectives and sprint planning sessions don't mean squat if the stakeholders never attend them, and too many allegedly Agile projects end up looking a lot like Waterfall projects. If companies won't really buy into the idea that you can't control all three variables at once, calling your process Agile won't do anything but drive your engineers nuts.

    The Workplace Becomes Ubiquitous: What's the first thing you do when you get home at night? Check your work email? Or maybe you got a call before you even got home. The dark side of all that bandwidth and mobile technology we enjoy today is that you can never truly escape being available, at least until the last bar drops off your phone (or you shut the darn thing off!)

    The line between the workplace and the rest of your life is rapidly disappearing. When you add in overseas outsourcing, you may find yourself responding to an email at 11 at night from your team in Bangalore. Work and leisure are blurring together into a gray mélange of existence. "Do you live to work, or work to live?" is becoming a meaningless question, because there's no difference.

    So what do you think? Anything we missed? Hate our choices? With us 100 percent? Let us know in the comments section below.
