
September 19 2013

Four short links: 20 September 2013

  1. Researchers Can Slip an Undetectable Trojan into Intel’s Ivy Bridge CPUs (Ars Technica) — The exploit works by severely reducing the amount of entropy the RNG normally uses, from 128 bits to 32 bits. The hack is similar to stacking a deck of cards during a game of Bridge. Keys generated with an altered chip would be so predictable an adversary could guess them with little time or effort required. The severely weakened RNG isn’t detected by any of the “Built-In Self-Tests” required for the P800-90 and FIPS 140-2 compliance certifications mandated by the National Institute of Standards and Technology.
  2. RethinkDB — open-source distributed JSON document database with a pleasant and powerful query language.
  3. Teach Kids Programming — a collection of resources. I'd start on Scratch much sooner, and 12+ definitely need the Arduino, but generally I agree with the things I recognise, and have a few to research …
  4. Raspberry Pi as Ad-Blocking Access Point (AdaFruit) — functionality sadly lacking from my off-the-shelf AP.

October 22 2012

Four short links: 22 October 2012

  1. jq — command-line tool for JSON data.
  2. GAFFTA — Gray Area Foundation For The Arts. Non-profit running workshops and building projects around technology-driven arts. (via Roger Dennis)
  3. Power Pwn — looks like a power strip, is actually chock-full of pen-testing tools, WiFi, bluetooth, and GSM. Beautifully evil. (via Jim Stogdill)
  4. Open Access Week — this week is Open Access week, raising awareness of the value of ubiquitous access to scientific publishing. (via Fabiana Kubke)

August 14 2012

Shrinking and stretching the boundaries of markup

It’s easy to forget that XML started out as a simplification process, trimming SGML into a more manageable and more parseable specification. Once XML reached a broad audience, of course, new specifications piled on top of it to create an ever-growing stack.

That stack, despite solving many problems, brings two new issues: it’s bulky, and there are a lot of problems that even that bulk can’t solve.

Two proposals at last week’s Balisage markup conference examined different approaches to working outside of the stack, though both were clearly capable of working with that stack when appropriate.

Shrinking to MicroXML

John Cowan presented on MicroXML, a project that started with a blog post from James Clark and has since grown to be a W3C Community Group. Cowan has been maintaining the Editor’s Draft of MicroXML, as well as creating a parser library for it, MicroLark. Uche Ogbuji, also chair of the W3C Community Group, has written a series of articles about it.

Cowan’s talk was a broad overview of practical progress on a subject that had once been controversial. Even in the early days of XML 1.0, there were plenty of people who thought that it was still too large a subset of SGML, and the Common XML specification I edited was one effort to describe how practitioners typically used less than what XML made available. The subset discussion has come up repeatedly, with proposals from Tim Bray and others over the years. In an age where JSON has replaced many data-centric uses of XML, there is less pressure to emphasize programming needs (data types, for example), but still demand for simplifying to a cleaner document model.

MicroXML is both a syntax — compatible with XML 1.0 — and a model, kept as absolutely simple as possible. In many ways, the group is trying to make syntax decisions based on their impact on the model.

Ideally, Cowan said, the model would just be element nodes (with attribute maps) containing other element nodes and content. That focus on a clean model means, for example, that while you can declare XML namespaces in MicroXML, the namespace declarations are just attributes. The model doesn’t reflect namespace URIs, and applications have to do that processing. Similarly, whitespace handling is simplified, and attributes become a simple unordered map of key names to values. The syntax allows comments, but they are discarded in the model. Processing instructions remain a question, because they would complicate the model substantially, but the XML declaration and CDATA sections would go. Empty element tags are in, for now …
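
To make that concrete, here is a minimal sketch in Python of what such a stripped-down model could look like; the class and field names are my own illustration, not anything from the MicroXML draft. An element is nothing more than a name, an attribute map, and an ordered list of child elements and text.

from dataclasses import dataclass, field
from typing import Dict, List, Union

@dataclass
class Element:
    """One node in a MicroXML-style model: a name, an attribute map, ordered content."""
    name: str
    attributes: Dict[str, str] = field(default_factory=dict)
    content: List[Union["Element", str]] = field(default_factory=list)

# A namespace "declaration" is just another attribute here; the model gives it
# no special treatment, so applications resolve prefixes themselves.
para = Element(
    name="p",
    attributes={"xmlns": "http://example.com/doc", "class": "note"},
    content=["Comments and the XML declaration never show up in the model at all."],
)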

The pieces that drew the most controversy in questions were the current proposal to limit element and attribute names to the ASCII character set, and the demise of processing instructions. I mostly heard cheers for the end of draconian error handling, though there were memories of the bug compatibility wars of an earlier age reminding the audience that it could in fact be worse, or weirder.

There may be, as Cowan noted, “no consensus on a single conversion path” to or from JSON, but MicroXML takes some steps in that direction, suggesting that JSONML should be able to support round-tripping of the MicroXML model, and that JSONx could work for converting JSON to XML.
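
As a rough illustration of the JSONML side of that suggestion (my own sketch, not a MicroXML or JsonML deliverable), an element can be serialized into the JsonML array convention of [name, {attributes}, child, ...]:

import json

def to_jsonml(name, attributes, content):
    """Build a JsonML-style array: [name, {attributes}, child-or-text, ...]."""
    item = [name]
    if attributes:
        item.append(dict(attributes))
    for child in content:
        item.append(child if isinstance(child, str) else to_jsonml(*child))
    return item

# (name, attributes, children) tuples stand in for a parsed MicroXML element.
tree = ("p", {"class": "note"}, ["Hello, ", ("em", {}, ["MicroXML"]), "!"])
print(json.dumps(to_jsonml(*tree)))
# ["p", {"class": "note"}, "Hello, ", ["em", "MicroXML"], "!"]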

While I suspect that MicroXML has a bright future ahead of it in the document space, it seems unlikely to take much territory back from JSON in the data space. MicroXML doesn’t seem to be aiming at JSON at all, however.

Overlapping information and hierarchical tools

Very few programmers want to think about overlapping data structures. In most computing cases — bad pointers? — overlapping data structures are a complex mess. However, they’re extremely common in human communications. LMNL (Layered Markup and Annotation Language), itself a decade-long conversation that has suffered badly from decaying links, has always been an outsider in the normally hierarchical markup conversation. There may be conflicts between XML trees and JSON maps, but both of those become uncomfortable when they have to represent an overlapped structure.

Wendell Piez examined the challenges of processing overlapping markup — LMNL — with tools that expect XML’s neat hierarchies. Obviously, feeding this

[excerpt
[source [date}1915{][title}The Housekeeper{]]
[author
[name}Robert Frost{]
[dates}1874-1963{]] }
[s}[l [n}144{n]}He manages to keep the upper hand{l]
[l [n}145{n]}On his own farm.{s] [s}He's boss.{s] [s}But as to hens:{l]
[l [n}146{n]}We fence our flowers in and the hens range.{l]{s]
{excerpt]

into an XML parser would just produce error messages, even if those tags used angle brackets. Piez compiles that into XML, which represents the content separately from the specified ranges. It is, of course, not close to pretty, but it is processable. At the show, he demonstrated using this to create an annotated HTML format as well as a format that handles overlap gracefully: graphics, using SVG (or a PNG if your browser doesn’t like SVG).
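
The essence of that compiled form is standoff annotation: the text is stored once, and each range points into it by offsets, so ranges are free to overlap. A small sketch of the idea in Python (my own illustration, not Piez's actual pipeline):

# The text is stored once, separate from the markup that annotates it.
text = "He manages to keep the upper hand On his own farm. He's boss."

# Standoff ranges point into the text by character offsets, so a verse line
# and a sentence can overlap without either having to nest inside the other.
ranges = [
    {"name": "s", "start": 0,  "end": 50},   # first sentence, spills across the line break
    {"name": "s", "start": 51, "end": 61},   # "He's boss."
    {"name": "l", "start": 0,  "end": 33},   # verse line 144
    {"name": "l", "start": 34, "end": 61},   # verse line 145, overlapping both sentences
]

for r in ranges:
    print(r["name"], repr(text[r["start"]:r["end"]]))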

Should you be paying attention to LMNL? If your main information concerns fit in neat hierarchies or clean boxes, probably not. If your challenges include representing human conversations, or other data forms where overlap happens, you may find these components critical, even though making them work with existing toolsets is difficult.

August 09 2012

Applying markup to complexity

When XML exploded onto the scene, it ignited visions of magical communications, simplified document storage, and a whole new wave of application capabilities. Reality has proved calmer, with competition from JSON and other formats tackling a wide variety of problems, while the biggest of the big data problems have such volume that adding markup seems likely to create new problems.

However, at the in-progress Balisage conference, it’s clear that markup remains really good at solving a middle category of problems, where its richer structures can shine without creating headaches of volume or complication. In the past, Balisage often focused on hard problems most people didn’t yet have, but this year’s program tackles challenges that more developers are encountering as their projects grow in complexity.

XML and JSON

JSON gave programmers much of what they wanted: a simple format for shuttling (and sometimes storing) loosely structured data. Its simpler toolset, freed of a heritage of document formats and schemas, let programmers think less about information formats and more about the content of what they were sending.

Developers using XML, however, have found themselves cut off from that data flow, spending a lot of time creating ad hoc toolsets for consuming and creating JSON in otherwise XML-centric toolchains. That experience is leading toward experiments with more formal JSON integration in XQuery and XSLT — and raising some difficult questions about XML itself.

XML and JSON look at data through different lenses. XML is a tree structure of elements, attributes, and content, while JSON is arrays, objects, and values. Element order matters by default in XML, while JSON is far less ordered and contains many more anonymous structures. A paper by Mary Holstege focused primarily on the possibilities of type introspection in XQuery, but her talk also ventured into how that might help address the challenges presented by the different types in JSON.
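
A small example of the order mismatch, using nothing beyond Python's standard library: the XML fragment keeps the position of each repeated child within the running text, while an obvious JSON object rendering keeps only the values.

import json
import xml.etree.ElementTree as ET

doc = ET.fromstring("<para>Take <em>two</em> pills <em>twice</em> a day.</para>")

# Element order and mixed text are part of the XML model...
print([(child.tag, child.text) for child in doc])   # [('em', 'two'), ('em', 'twice')]

# ...but an obvious object mapping keeps only the values, dropping the
# surrounding text and where each <em> sat within it.
naive = {"para": {"em": ["two", "twice"]}}
print(json.dumps(naive))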

Eric van der Vlist, while recognizing that XSLT 3.0 is taking some steps to integrate JSON, reported on broader visions of an XML/JSON “chimera”, though he hoped to come up with something more elegant than the legendary but impractical creature. After his talk, he also posted some broader reflections on a data model better able to accommodate both XML and JSON expectations.

Jonathan Robie reflected on the growing importance of JSON (and his own initial reluctance to take it seriously) as semi-structured data takes over the many tasks it can solve easily. He described XML as shining at handling complex documents and the XML toolset as excellent support for a “hub format,” but also thought that the XML toolchain needs something like JSON. He compared the proposed XSLT 3.0 features for handling maps with JSONiq, and agreed with Holstege and van der Vlist that different expectations about the importance of order created the largest mismatches.

Hans-Jurgen Rennau had probably the most optimistic take, describing a Unified Document Language – not a markup syntax, but a model that could accommodate varied approaches to representing data. His proposal did include concrete syntax for representing this model in XML documents, as well as a description of alternate markup styles that help represent the model beyond XML.

I don’t expect that any of these proposals, even when and if they are widely implemented, will immediately grab the attention of people happily using JSON. In the short term they will serve primarily as bridges for data, helping XML and JSON systems co-exist. In the longer term, however, they may serve as bridges between the cultures of the two formats. Both approaches have their limitations. XML is cumbersome in many cases, while JSON is less pleasantly capable of representing ordered document structures.

JSON freed web developers to create much more complex applications with data formats that feel less complicated. As developers grow more and more ambitious, however, they may find themselves moving back into complex situations where the XML toolkit looks more capable of handling information without the overhead of vast quantities of custom code. If that toolkit supports their existing formats, mixing and matching should be easier.

Metadata, content and design

Markup and data types are themselves metadata, providing information about the data they encapsulate, but Balisage and its predecessor conferences have often focused on metadata structures at higher levels — the Semantic Web, RDF, Topic Maps, and OWL. So far, this year’s talks have been cautious about making big metadata promises.

Kurt Cagle gave the only talk on a subject that once seemed to dominate the conference: ontologies and tools for managing them. His metadata stack was large, and was still changing near the end of the work to include SPARQL over HTTP. If Semantic Web technologies can settle into the small and focused groove Cagle described, it seems like they might find a comfortable niche in web infrastructure rather than attempting to conquer it.

Diane Kennedy discussed the PRISM Source Vocabulary, an effort with a similar focus on applying technology to solve a set of problems, this time in a particularly difficult context. The technology in the talk was unsurprising, but the social context was difficult: a missionary effort to bring metadata ideas from a very “content first” crowd to magazines, a very “design first” crowd. Multiple delivery platforms, notably the iPad, have made design first communities more willing to consider at least a subset of metadata and markup technology.

Markup and programming language boundaries

While designers have historically been a difficult crowd to sell on semantic markup, programmers have been a skeptical audience about the value of markup — especially when “you got your markup in my programming language.” There are, of course, many programmers attending and speaking at Balisage, but the boundaries between people who care primarily about the data and those who care primarily about the processing are a unique and ever-changing combination of blurry and (cutting) sharp.

A number of speakers described themselves as “not programmers” and their work as “not programming” despite the data processing work they were clearly doing. Ari Nordstrom opened his talk on moving as much coding as possible to XML by discussing his differences with the C# culture he works with. In another talk, Yves Marcoux said “I am not a programmer” only to be told immediately by the audience, “Yes, you are!”

XML’s document-centric approach to the world seems to drive people toward declarative and functional programming styles. That is partly a side-effect of looking at so many documents that it becomes convenient to turn programs into documents, an angle that is very hard to explain to those outside the document circle. However, the strong tendencies toward functional programming emerge, I suspect, from the headaches of processing markup in “traditional” object-oriented or imperative programming. The Document Object Model, long the most-criticized aspect of JavaScript, exemplifies this mismatch (compounded by a mismatch between Java and JavaScript object models, of course). As jQuery users and many other developers know, navigating a document tree through declarative CSS selectors is much easier.
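
The contrast shows up even outside the browser. Here is a hedged sketch using Python's standard ElementTree as a stand-in for the DOM (the selector here is ElementTree's limited XPath rather than CSS, but the imperative-versus-declarative shape is the same):

import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<div><ul><li class='item'>one</li><li class='item'>two</li></ul></div>"
)

# Imperative style: walk the tree by hand and test every node.
found = []
def walk(node):
    for child in node:
        if child.tag == "li" and child.get("class") == "item":
            found.append(child.text)
        walk(child)
walk(doc)

# Declarative style: one path expression does the navigating.
also_found = [el.text for el in doc.findall(".//li[@class='item']")]

assert found == also_found == ["one", "two"]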

Steven Pemberton’s talk on serialization and abstraction examined these kinds of questions in the context of form development. Managing user interactions with forms has long been labor-intensive, with developers creating ever-more complex (and often ever-more fragile) tools for forms-based validation and interactivity. Pemberton described how decisions made early in the development of a programming discipline can leave lingering and growing costs as approaches that seemed to work for simple cases grow under the pressure of increasing requirements. The XForms work attempts to leave the growing JavaScript snarl for a more-manageable declarative approach, but has not succeeded in changing web forms culture so far.

Jorge Luis Williams and David Cramer, both of Rackspace, found a different target for a declarative approach, mixing documentation into their code for validating RESTful services. The divide between REST and other web service approaches isn’t quite the same as the declarative / imperative divide, but I still felt a natural complement between the validation approach they were using and the underlying services they were testing.

A series of talks Tuesday afternoon poked and prodded at how markup could provide services to programming languages, exploring the boundaries between them. Economist Matthew McCormick discussed a system that provided documentation and structure to libraries of mathematical functions written in a variety of programming languages. Markup served as glue between the libraries, describing common functionality. David Lee wanted a toolset that would let him extract documentation from code — not just the classic JavaDoc extraction, but compilers reporting much of their internal information about variables in a processable markup format.

Norm Walsh started in different territory, discussing the challenges of creating compact non-markup formats for XML vocabularies, but wound up in a similar place. Attempting to shift a vocabulary from an XML format to a C-like syntax introduces dissonance even as it reduces verbosity. After noting the unusual success of the RELAX NG compact syntax and the simplicity that made it possible, he showed some of his own work on creating a compact syntax for XProc, declared it flawed, and showed a shift toward more programming-like approaches that eased some of the mismatch.

If you’re a programmer reading this, you may be wondering why these boundaries should matter to you. These frontiers tend to get explored from the markup side, and it’s possible that this work doesn’t solve a problem for you now. As conference chair Tommie Usdin put it in her welcome, however, Balisage is a place to “be exposed to some things that matter to other people but not to you — or at least not to you right now.”

July 28 2011

Visualizing structural change

My ongoing series about the elmcity project has shown a number of ways in which I invite participants to think like the web. One of the principles I try to illustrate by example is:

3. Know the difference between structured and unstructured data

Participants learn that calendars on web pages and in PDF files don't syndicate around the web, but calendars in the structured iCalendar format do. They also learn a subtler lesson about structured data. Curators of elmcity hubs manage the settings for their hubs, and for their feeds, by tagging Delicious bookmarks using a name=value syntax that enables those bookmarks to work as associative arrays (also known as dictionaries, hashtables, and mappings). Consider, for example, the bookmark that defines the settings for the Honolulu hub.

The bookmark's tags represent these attributes:

contact: elmcity@alohavibe.com
facebook: yes
header_image: no
radius: 300
title: Hawaii Events courtesy of AlohaVIBE.com
twitter: alohavibe
tz: hawaiian
where: honolulu,hi
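
Under the hood those tags are just a serialized associative array. A minimal sketch of turning them back into a dictionary (my own illustration, not elmcity's actual code):

def tags_to_settings(tags):
    """Turn name=value bookmark tags into a plain dictionary."""
    settings = {}
    for tag in tags:
        name, _, value = tag.partition("=")
        settings[name] = value
    return settings

hub = tags_to_settings([
    "contact=elmcity@alohavibe.com",
    "facebook=yes",
    "radius=300",
    "tz=hawaiian",
    "where=honolulu,hi",
])
print(hub["where"])   # honolulu,hi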

Curators use Delicious to declare these sets of attributes, but the elmcity service doesn't always retrieve them from Delicious. Instead it syncs the data to Azure tables and blobs. When it needs to use one of these associative arrays it fetches an XML chunk from an Azure table or a JSON blob from the Azure blob store. Both arguably qualify as NoSQL mechanisms but I prefer to define things according to what they are instead of what they're not. To me these are just ways to store and retrieve associative arrays.

Visualizing change history for elmcity metadata

Recently I've added a feature that enables curators to review the changes they've made to the metadata for their hubs and feeds. The other day, for example, I made two changes to the Keene hub's registry. I added a new feed, and I added a tag to an existing feed. You can see both changes highlighted in green on this change history page. A few hours later I renamed the tag I'd added. That change shows up in yellow here. On the following day I deleted three obsolete feeds. That change shows up in yellow here and in red here.

These look a lot like Wikipedia change histories, or the "diffs" that programmers use to compare versions of source files. But Wikipedia histories and version control diffs compare unstructured texts. When you change structured data you can, at least in theory, visualize your changes in more focused ways.

One of the great ironies of software development is that although computer programs are highly structured texts, we treat them just like Wikipedia articles when we compare versions. I've had many discussions about this over the years with my friend Greg Wilson, proprietor of the Software Carpentry project. We've always hoped that mainstream version control systems would become aware of the structure of computer programs. So far we've been disappointed, and I guess I can understand why. Old habits run deep. I am, after all, writing these words in a text editor that emulates emacs.

Maybe, though, we can make a fresh start as the web of data emerges. The lingua franca of the data web was going to be XML; now the torch has passed to JSON (JavaScript Object Notation). Both are widely used to represent all kinds of data structures whose changes we might want to visualize in structured ways.

The component at the heart of the elmcity's new change visualizer is a sweet little all-in-one-page web app by Tom Robinson. It's called JSON Diff. To try it out in a simple way, let's use this JSON construct:

{
  "Evernote": {
    "InstalledOn": "3/19/2008",
    "Version": "3.0.0.539"
  },
  "Ghostery IE Plugin": {
    "InstalledOn": "7/9/2011",
    "Version": "2.5.2.0"
  }
}

These are a couple of entries from the Programs and Features applet in my Windows Control Panel. If my system were taking JSON snapshots of those entries whenever they changed, and if I were later to upgrade the Ghostery plugin to (fictitious future version) 2.6.1.0, a JSON Diff report would show exactly that change.
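
The comparison behind such a report is easy to sketch: walk the two snapshots key by key and note additions, removals, and changed values. Here's a minimal recursive version in Python (my own illustration, not JSON Diff's actual code):

def json_diff(old, new, path=""):
    """Yield (path, old_value, new_value) for every leaf that differs."""
    if isinstance(old, dict) and isinstance(new, dict):
        for key in sorted(set(old) | set(new)):
            yield from json_diff(old.get(key), new.get(key), f"{path}/{key}")
    elif old != new:
        yield path, old, new

before = {"Ghostery IE Plugin": {"InstalledOn": "7/9/2011", "Version": "2.5.2.0"}}
after  = {"Ghostery IE Plugin": {"InstalledOn": "7/9/2011", "Version": "2.6.1.0"}}

for path, old, new in json_diff(before, after):
    print(path, old, "->", new)
# /Ghostery IE Plugin/Version 2.5.2.0 -> 2.6.1.0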

You can try it yourself at this JSON Diff URL. Or if you're running Internet Explorer, which the original JSON Diff doesn't support, you can copy that JSON chunk and paste it into one of the earlier examples. The elmcity adaptation of JSON Diff, which uses jQuery to abstract browser differences, does work with IE.

It's worth noting that the original JSON Diff has the remarkable ability to remember any changes you make by dynamically tweaking its URL. It does so by tacking your JSON fragments onto the end of the URL after the hash symbol (#) as one long fragment identifier! The elmcity version sacrifices that feature in order to avoid running into browser-specific URL-length limits, and because it works with a server-side companion that can feed it data. But it's cool to see how a self-contained single-page web app can deliver what is, in effect, a web service.

What changed, and when?

A key question, in many contexts, is: "What changed, and when?" In the Heavy Metal Umlaut screencast I animated the version history of a Wikipedia page. It was a fascinating exercise that sparked ideas about tools that would automate the process. Those tools haven't arrived yet. We could really use them, and not just for Wikipedia. In law and in journalism the version control discipline practiced by many (but not all!) programmers is tragically unknown. In these and in other fields we should expect at least what Wikipedia provides -- and ideally better ways to visualize textual change histories.

But we can also expect more. Think about the records that describe the status of your health, finances, insurance policies, vehicles, and computers. Or the products and personnel of companies you work for or transact with. Or the policies of governments you elect. All these records can be summarized by key status indicators that are conceptually just sets of name/value pairs. If the systems that manage these records could produce timestamped JSON snapshots when indicators change, it would be much easier to find out what changed, and when.
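
As a rough sketch of what "timestamped JSON snapshots when indicators change" could look like, assuming nothing more than a dictionary of name/value pairs (all names here are illustrative):

import json
from datetime import datetime, timezone

history = []            # append-only log of (timestamp, snapshot) pairs
last_snapshot = None

def record_if_changed(indicators):
    """Append a timestamped snapshot only when the name/value pairs change."""
    global last_snapshot
    if indicators != last_snapshot:
        history.append((datetime.now(timezone.utc).isoformat(),
                        json.dumps(indicators, sort_keys=True)))
        last_snapshot = dict(indicators)

record_if_changed({"policy": "PPO", "deductible": 1000})
record_if_changed({"policy": "PPO", "deductible": 1000})   # unchanged: no new entry
record_if_changed({"policy": "PPO", "deductible": 1500})   # changed: logged
print(len(history))   # 2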



July 20 2011

Four short links: 20 July 2011

  1. Random Khan Exercises -- elegant hack to ensure repeatability for a user but difference across users. Note that they need these features of exercises so that they can perform meaningful statistical analyses on the results.
  2. Float, the Netflix of Reading (Wired) -- an interesting Instapaper variant with a stab at an advertising business model. I would like to stab at the advertising business model, too. What I do like is that it's trying to do something with the links that friends tweet, an unsolved problem for your humble correspondent. (via Steven Levy)
  3. JSON Parser Online -- nifty web app for showing JSON parses. (via Hilary Mason)
  4. Facebook and the Epiphanator (NY Magazine) -- Paul Ford has a lovely frame through which to see the relationship between traditional and social media. So it would be easy to think that the Whole Earthers are winning and the Epiphanators are losing. But this isn't a war as much as a trade dispute. Most people never chose a side; they just chose to participate. No one joined Facebook in the hope of destroying the publishing industry.

December 03 2010

Four short links: 3 December 2010

  1. Data is Snake Oil (Pete Warden) -- data is powerful but fickle. A lot of theoretically promising approaches don't work because there's so many barriers between spotting a possible relationship and turning it into something useful and actionable. This is the pin of reality which deflates the bubble of inflated expectations. Apologies for the camel's nose of rhetoric poking under the metaphoric tent.
  2. XML vs the Web (James Clark) -- resignation and understanding from one of the markup legends. I think the Web community has spoken, and it's clear that what it wants is HTML5, JavaScript and JSON. XML isn't going away but I see it being less and less a Web technology; it won't be something that you send over the wire on the public Web, but just one of many technologies that are used on the server to manage and generate what you do send over the wire. (via Simon Willison)
  3. Understanding Pac Man Ghost Behaviour -- The ghosts’ AI is very simple and short-sighted, which makes the complex behavior of the ghosts even more impressive. Ghosts only ever plan one step into the future as they move about the maze. Whenever a ghost enters a new tile, it looks ahead to the next tile that it will reach, and makes a decision about which direction it will turn when it gets there. Really detailed analysis of just one component of this very successful game. (via Hacker News)
  4. The Full Stack (Facebook) -- we like to think that programming is easy. Programming is easy, but it is difficult to solve problems elegantly with programming. I like to think that a CS education teaches you this kind of "full stack" approach to looking at systems, but I suspect it's a side-effect and not a deliberate output. This is the core skill of great devops: to know what's happening up and down the stack so you're not solving a problem at level 5 that causes problems at level 3.

December 17 2009

The Best and the Worst Tech of the Decade

With only a few weeks left until we close out the 'naughts and move into the teens, it's almost obligatory to take a look back at the best and not-so-best of the last decade. With that in mind, I polled the O'Reilly editors, authors, Friends, and a number of industry movers and shakers to gather nominations. I then tossed them in the trash and made up my own (kidding!), compiled them together, and looked for trends and common threads. So here then, in no particular order, are the best and the worst that the decade had to offer.

The Best

AJAX: It's hard to remember what life was like before Asynchronous JavaScript and XML came along, so I'll prod your memory. It was boring. Web 1.0 consisted of a lot of static web pages, where every mouse click was a round trip to the web server. If you wanted rich content, you had to embed a Java applet in the page, and pray that the client browser supported it.

Without the advent of AJAX, we wouldn't have Web 2.0, GMail, or most of the other cloud-based web applications. Flash is still popular, but especially with HTML 5 on the way, even functionality that formerly required a RIA like Flash or Silverlight can now be accomplished with AJAX.

Twitter: When they first started, blogs were just what they said, web logs. In other words, a journal of interesting web sites that the author had encountered. These days, blogs are more like platforms for rants, opinions, essays, and anything else on the writer's mind. Then along came Twitter. Sure, people like to find out what J-Lo had for dinner, but the real power of the 140 character dynamo is that it has brought about a resurgence of real web logging. The most useful tweets consist of a Tiny URL and a little bit of context. Combine that with the use of Twitter to send out real time notices about everything from breaking news to the current specials at the corner restaurant, and it's easy to see why Twitter has become a dominant player.

Ubiquitous WiFi: I want you to imagine you're on the road in the mid-90s. You get to your hotel room, and plop your laptop on the table. Then you get out your handy RJ-11 cord, and check to see if the hotel phone has a data jack (most didn't), or if you'll have to unplug the phone entirely. Then you'd look up the local number for your ISP, and have your laptop dial it, so you could suck down your e-mail at an anemic 56K.

Now, of course, WiFi is everywhere. You may end up having to pay for it, but fast Internet connectivity is available everywhere from your local McDonalds to your hotel room to an airport terminal. Of course, this is not without its downsides, since unsecured WiFi access points have led to all sorts of security headaches, and using an open access point is a risky proposition unless your antivirus software is up to date, but on the whole, ubiquitous WiFi has made the world a much more connected place.

Phones Get Smarter: In the late 90s, we started to see the first personal digital assistants emerge, but this has been the decade when the PDA and the cell phone got married and had a baby called the smartphone. Palm got the ball rolling with the Treos about the same time that Windows Mobile started appearing on phones, and RIM's BlackBerry put functional phones in the hands of business, but it was Apple that took the ball and ran for the touchdown with the iPhone. You can argue whether the Droid is better than the 3GS or the Pre, but the original iPhone was the game-changer that showed what a smartphone really could do, including the business model of the App Store.

The next convergence is likely to be with Netbooks, as more and more of the mini-laptops come with 3G service integrated in them, and VoIP services such as Skype continue to eat into both landline and cellular business.

The Maker Culture: There's always been a DIY underground, covering everything from Ham radio to photography to model railroading. But the level of cool has taken a noticeable uptick this decade, as cheap digital technology has given DIY a kick in the pants. The Arduino lets anyone embed control capabilities into just about anything you can imagine, amateur PCB board fabrication has gone from a messy kitchen sink operation to a click-and-upload-your-design purchase, and the 3D printer is turning the Star Trek replicator into a reality.

Manufacturers cringe in fear as enterprising geeks dig out their screwdrivers. The conventional wisdom was that as electronics got more complex, the "no user serviceable parts" mentality would spell the end of consumer experimentation. But instead, the fact that everything is turning into a computer meant that you could take a device meant for one thing, and reprogram it to do something else. Don't like your digital camera's software? Install your own! Turn your DVR into a Linux server.

Meanwhile, shows like Mythbusters and events like Maker Faire have shown that hacking hardware can grab the public's interest, especially if there are explosions involved.

Open Source Goes Mainstream: Quick! Name five pieces of open source software you might have had on your computer in 1999. Don't worry, I'll wait...

How about today? Firefox is an easy candidate, as are OpenOffice, Chrome, Audacity, Eclipse (if you're a developer), Blender, VLC, and many others. Many netbooks now ship with Linux as the underlying OS. Open Source has gone from a rebel movement to part of the establishment, and when you combine increasing end-user adoption with the massive amounts of FLOSS you find on the server side, it can be argued that open source is the 800-pound gorilla now.

As Gandhi said, "First they ignore you, then they laugh at you, then they fight you, then you win." When even Microsoft is releasing Open Source code, you know that you're somewhere between the fight and win stages.

Bountiful Resources: 56K modems, 20MB hard drives, 640K of RAM, 2 MHz processors. You don't have to go far back in time for all of these to represent the state of the art. Now, of course, you would have more than that in a good toaster...

Moore's Law continues to drive technology innovation at a breakneck pace, and it seems that related technologies like storage capacity and bandwidth are trying to follow the same curve. Consider that AT&T users gripe about the iPhone's 5GB/month bandwidth cap, a limit that would have taken 10 solid days of transferring to achieve with a dialup connection.

My iPhone has 3,200 times the storage of the first hard drive I ever owned, and the graphics card on my Mac Pro has 16,000 times the memory of my first computer. We can now do amazing things in the palm of our hands, things that would have seemed like science fiction in 1999.

The Worst

SOAP: The software industry has been trying to solve the problem of making different pieces of software talk to each other since the first time there were two programs on a network, and it still hasn't gotten it right. RPC, CORBA, EJB, and now SOAP litter the graveyard of failed protocol stacks.

SOAP was a particularly egregious failure, because it was sold so heavily as the final solution to the interoperability problem. The catch, of course, was that no two vendors implemented the stack quite the same way, with the result that getting a .NET SOAP client to talk to a Java server could be a nightmare. Add in poorly spec'd components such as web service security, and SOAP became useless in many cases. And the WSDL files that define SOAP endpoints are unreadable and impossible to generate by hand (well, not impossible, but unpleasant in the extreme).

Is it any wonder that SOAP drove many developers into the waiting arms of more usable data exchange formats such as JSON?

Intellectual Property Wars: How much wasted energy has been spent this decade by one group of people trying to keep another group from doing something with their intellectual property, or property they claim was theirs? DMCA takedowns, Sony's Rootkit debacle, the RIAA suing grandmothers, SCO, patent trolls, 09F911029D74E35BD84156C5635688C0, Kindles erasing books, deep packet inspection, Three Strikes laws, the list goes on and on and on...

At the end of the day, the movie industry just had its best year ever, Lady Gaga seems to be doing just fine, Miley Cyrus isn't going hungry, and even the big players in the industry are getting sufficiently fed up with the Trolls to want patent reform. The iTunes store is selling a boatload of music, in spite of abandoning DRM, so clearly people will continue to pay for music, even if they can copy it from a friend.

Unfortunately, neither the RIAA nor the MPAA is going gently into that good night. If anything, the pressure to create onerous legislation has increased in the past year. Whether this is a last gasp or a retrenchment will only be answered in time.

The Cult of Scrum: If Agile is the teachings of Jesus, Scrum is every abuse ever perpetrated in his name. In many ways, Scrum as practiced in most companies today is the antithesis of Agile, a heavy, dogmatic methodology that blindly follows a checklist of "best practices" that some consultant convinced the management to follow.

Endless retrospectives and sprint planning sessions don't mean squat if the stakeholders never attend them, and too many allegedly Agile projects end up looking a lot like Waterfall projects in the end. If companies won't really buy into the idea that you can't control all three variables at once, calling your process Agile won't do anything but drive your engineers nuts.

The Workplace Becomes Ubiquitous: What's the first thing you do when you get home at night? Check your work email? Or maybe you got a call before you even got home. The dark side of all that bandwidth and mobile technology we enjoy today is that you can never truly escape being available, at least until the last bar drops off your phone (or you shut the darn thing off!)

The line between the workplace and the rest of your life is rapidly disappearing. When you add in overseas outsourcing, you may find yourself responding to an email at 11 at night from your team in Bangalore. Work and leisure are blurring together into a gray mélange of existence. "Do you live to work, or work to live?" is becoming a meaningless question, because there's no difference.

So what do you think? Anything we missed? Hate our choices? With us 100 percent? Let us know in the comments section below.
