February 17 2014

Four short links: 17 February 2014

  1. imsg — use iMessage from the command line.
  2. Facebook Data Science Team Posts About Love — I tell people, “this is what you look like to SkyNet.”
  3. A System for Detecting Software Plagiarism — the research behind the undergraduate bête noire.
  4. 3D GIFs — this is awesome because brain.

November 04 2013

Software, hardware, everywhere

Real and virtual are crashing together. On one side is hardware that acts like software: IP-addressable, controllable with JavaScript APIs, able to be stitched into loosely-coupled systems—the mashups of a new era. On the other is software that’s newly capable of dealing with the complex subtleties of the physical world—ingesting huge amounts of data, learning from it, and making decisions in real time.

The result is an entirely new medium that’s just beginning to emerge. We can see it in Ars Electronica Futurelab’s Spaxels, which use drones to render a three-dimensional pixel field; in Baxter, which layers emotive software onto an industrial robot so that anyone can operate it safely and efficiently; in OpenXC, which gives even hobbyist-level programmers access to the software in their cars; in SmartThings, which ties Web services to light switches.

The new medium is something broader than terms like “Internet of Things,” “Industrial Internet,” or “connected devices” suggest. It’s an entirely new discipline that’s being built by software developers, roboticists, manufacturers, hardware engineers, artists, and designers.

Ten years ago, building something as simple as a networked thermometer required some understanding of electrical engineering. Now it’s a Saturday-afternoon project for a beginner. It’s a shift we’ve already seen in programming, where languages have become more powerful and communities have arisen to offer free help with programming problems. As the blending of hardware and software continues, the physical world will become democratized: the ranks of people who can address physical challenges will swell, drawing on lots of different backgrounds.

The outcome of all of this combining and broadening, I hope, will be a world that’s safer, cleaner, more efficient, and more accessible. It may also be a world that’s more intrusive, less private, and more vulnerable to ill-intentioned interference. That’s why it’s crucial that we develop a strong community around the new discipline.

Solid, which Joi Ito and I will present on May 21 and 22 next year, will bring members of the new discipline together to discuss this new medium at the blurred line between real and virtual. We’ll talk about design beyond the computer screen; software that understands and controls the physical world; new hardware tools that will become the building blocks of the connected world; frameworks for prototyping and manufacturing that make it possible for anyone to create physical devices; and anything else that touches both the concrete and abstract worlds.

Solid’s call for proposals is open to the public, as is the call for applications to the Solid Fellowships—a new program that comes with a stipend, a free pass to Solid, and help with travel expenses for students and independent innovators.

The business implications of the new discipline are just beginning to play out. Software companies are eyeing hardware as a way to extend their offerings into the physical world—think, for instance, of Google’s acquisition of Motorola and its work on a driverless car—and companies that build physical machines see software as a crucial component of their products. The physical world as a service, a business model that’s something like software as a service, promises to upend the way we buy and use machines, with huge implications for accessibility and efficiency. These types of service frameworks, along with new prototyping tools and open-source models, are making hardware design and manufacturing vastly easier.

A few interrelated concepts that I’ve been thinking about as we’ve sketched out the idea for Solid:

  • APIs for the physical world. Abstraction, modularity, and loosely-coupled services—the characteristics that make the Web accessible and robust—are coming to the physical world. Open-source libraries for sensors and microcontrollers are bringing easy-to-use and easy-to-integrate software interfaces to everything from weather stations to cars. Networked machines are defining a new physical graph, much like the Web’s information graph. These models are starting to completely reorder our physical environment. It’s becoming easier to trade off functionalities between hardware and software; expect the proportion of intelligence residing in software to increase over time. (A minimal sketch of what such an interface looks like follows this list.)
  • Manufacturing made frictionless. Amazon’s EC2 made it possible to start writing and selling software with practically no capital investment. New manufacturing-as-a-service frameworks bring the same approach to building things, making factory work fast and capital-light. Development costs are plunging, and it’s becoming easier to serve niches with specialized hardware that’s designed for a single purpose. The pace of innovation in hardware is increasing as the field becomes easier for entrepreneurs to work in and financing becomes available through new platforms like Kickstarter. Companies are emerging now that will become the Amazon Web Services of manufacturing.
  • Software intelligence in the physical world. Machine learning and data-driven optimization have revolutionized the way that companies work with the Web, but the kind of sophisticated knowledge that Amazon and Netflix have accumulated has been elusive in the offline world. Hardware lets software reach beyond the computer screen to bring those kinds of intelligence to the concrete world, gathering data through networked sensors and exerting real-time control in order to optimize complicated systems. Many of the machines around us could become more efficient simply through intelligent control: a furnace can save oil when software, knowing that homeowners are away, turns down the thermostat; a car can save gas when Google Maps, polling its users’ smartphones, discovers a traffic jam and suggests an alternative route—the promise of software intelligence that works above the level of a single machine. The Internet stack now reaches all the way down to the phone in your pocket, the watch on your wrist, and the thermostat on your wall.
  • Every company is a software company. Software is becoming an essential component of big machines for both the builders and the users of those machines. Any company that owns big capital machines needs to get as much out of them as possible by optimizing their operation with software, and any company that builds machines must improve and extend them with layers of software in order to be competitive. As a result, a software startup with promising technology might just as easily be bought by a big industrial company as by a Silicon Valley software firm. This has important organizational, cultural, and competency impact.
  • Complex systems democratized. The physical world is becoming accessible to innovators at every level of expertise. Just as it’s possible to build a Web page with only a few hours’ learning, it’s becoming easier for anyone to build things, whether electronic or not. The result: realms like the urban environment that used to be under centralized control by governments and big companies are now open to innovation from anyone. New economic models and communities will emerge in the physical world just as they’ve emerged online in the last twenty years.
  • The physical world as a service. Anything from an Uber car to a railroad locomotive can be sold as a service, provided that it’s adequately instrumented and dispatched by intelligent software. Good data from the physical world brings about efficient markets, makes cheating difficult, and improves quality of service. And it will revolutionize business models in every industry as service guarantees replace straightforward equipment sales. Instead of just selling electricity, a utility could sell heating and cooling—promising to keep a homeowner’s house at 70 degrees year round. That sales model could improve efficiency and quality of life, creating an incentive for the utility to invest in more efficient equipment and letting it take advantage of economies of scale.
  • Design after the screen. Our interaction with software no longer needs to be mediated through a keyboard and screen. In the connected world, computers gather data through multiple inputs outside of human awareness and intuit our preferences. The software interface is now a dispersed collection of conventional computers, mobile phones, and embedded sensors, and it acts back onto the world through networked microcontrollers. Computing happens everywhere, and it’s aware of physical-world context.
  • Software replaces physical complexity. A home security system is no longer a closed network of motion sensors and door alarms; it’s software connected to generic sensors that decides when something is amiss. In 2009, Alon Halevy, Peter Norvig, and Fernando Pereira wrote that having lots and lots of data can be more valuable than having the most elegant model. In the connected world, having lots and lots of sensors attached to some clever software will start to win out over single-purpose systems.
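
To make the “APIs for the physical world” idea concrete, here is a minimal sketch in Python of what such an interface tends to look like. The device address and endpoint names are hypothetical, not any real product’s API; platforms like SmartThings and OpenXC each define their own, but the shape (simple HTTP verbs against a device on the network) is similar.

    import requests  # third-party HTTP client: pip install requests

    # Hypothetical thermostat exposing a small REST API on the local
    # network. The host and endpoints are invented for illustration.
    THERMOSTAT = "http://192.168.1.40/api"

    def read_temperature():
        """Read the current temperature from the device."""
        resp = requests.get(THERMOSTAT + "/temperature", timeout=5)
        resp.raise_for_status()
        return resp.json()["fahrenheit"]

    def set_target(degrees):
        """Ask the device to hold a new target temperature."""
        resp = requests.post(THERMOSTAT + "/target",
                             json={"fahrenheit": degrees}, timeout=5)
        resp.raise_for_status()

    # The "mashup" logic lives in software, not in the device itself.
    if read_temperature() > 72.0:
        set_target(68.0)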

These are some rough thoughts about an area that we’ll all spend the next few years trying to understand. This is an open discussion, and we welcome input from anyone.

May 09 2013

Where will software and hardware meet?

I’m a sucker for a good plant tour, and I had a really good one last week when Jim Stogdill and I visited K. Venkatesh Prasad at Ford Motor in Dearborn, Mich. I gave a seminar and we talked at length about Ford’s OpenXC program and its approach to building software platforms.

The highlight of the visit was seeing the scale of Ford’s operation, and particularly the scale of its research and development organization. Prasad’s building is a half-mile into Ford’s vast research and engineering campus. It’s an endless grid of wet labs like you’d see at a university: test tubes and robots all over the place; separate labs for adhesives, textiles, vibration dampening; machines for evaluating what’s in reach for different-sized people.

Prasad explained that much of the R&D that goes into a car is conducted at suppliers (Ford might ask its steel supplier to come up with a lighter, stronger alloy, for instance), but Ford is responsible for integrative research: figuring out how to, say, bond its foam insulation onto that new alloy.

In our more fevered moments, we on the software side of things tend to foresee every problem being reduced to a generic software problem, solvable with brute-force computing and standard machinery. In that interpretation, a theoretical Google car operating system, one that would drive the car and provide Web-based services to passengers, could commoditize the mechanical aspects of the automobile. If you’re not driving, you don’t care much about how the car handles; you just want a comfortable seat, functional air conditioning, and Web connectivity for entertainment. A panel in the dashboard becomes the only substantive point of interaction between a car and its owner, and if every car is running Google’s software in that panel, then there’s not much left to distinguish different makes and models.

When’s the last time you heard much of a debate on Dell laptops versus HP? As long as it’s running the software you want, and meets minimum criteria for performance and physical quality, there’s not much to distinguish laptop makers for the vast majority of users. The exception, perhaps, is Apple, which consumers do distinguish from other laptop makers for both its high-quality hardware and its unique software.

That’s how I start to think after a few days in Mountain View. A trip to Detroit pushes me in the other direction: the mechanical aspects of cars are enormously complex. Even incremental changes take vast re-engineering efforts. Changing the shape of a door sill to make a car easier to get into means changing a car’s aesthetics, its frame, the sheet metal that gets stamped to make it, the wires and sensors embedded in it, and the assembly process that puts it together. Everything from structural integrity to user experience needs to be carefully checked before a thousand replicates start driving out of Ford’s plants every day.

So, when it comes to value added, where will the balance between software and machines emerge? Software companies and industrial firms might both try to shift the balance by controlling the interfaces between software and machines: if OpenXC can demonstrate that it’s a better way to interact with Ford cars than any other interface, Ford will retain an advantage.

As physical things get networked and instrumented, software can make up a larger proportion of their value. I’m not sure exactly where that balance will arise, but I have a hard time believing in complete commoditization of the machines beneath the software.

See our free research report on the industrial internet for an overview of the ways that software and machines are coming together.

April 26 2013

The makers of hardware innovation

Chris Anderson wrote Makers and went from editor-in-chief of Wired to CEO of 3D Robotics, making his hobby his side job and then making it his main job.

A new executive at Motorola Mobility, a division of Google, said that Google seeks to “googlify” hardware. By that he meant that devices would be inexpensive, if not free, and that the data they create or access would be open. Motorola wants to build a truly hackable cellphone, one that makers might have their own ideas about what to do with.

Regular hardware startup meetups, which started in San Francisco and New York, are now held in Boston, Pittsburgh, Austin, Chicago, Dallas and Detroit, and I’m sure in other American cities as well. Melbourne, Stockholm and Toronto are also organizing hardware meetups. Hardware entrepreneurs want to find each other and learn from each other.

Hardware-oriented incubators and accelerators are launching on both coasts in America, and in China.

The market for personal 3D printers and 3D printing services has really taken off. 3D printer startups continue to launch, and all of them seem to have trouble keeping up with demand. MakerBot is out raising money. Shapeways raised $30 million in a new round of financing announced this week.

Makers are discovering that the Raspberry Pi, developed for educational uses, can fit into some interesting commercial niches.

The marketing-friendly phrase, “Internet of Things,” is beginning to mean something, with new boards such as Pinoccio and Electric Imp.

Design software is getting better, and less expensive, if not free, although the developers of TinkerCad announced that they were abandoning it.

And an 11-year-old maker, Super Awesome Sylvia, was recognized at the White House Science Fair, where she exhibited a watercolor robot that will soon be sold as a kit through Evil Mad Science.

“Hardware is the new software,” reported Wired and the New York Times. Joi Ito of the MIT Media Lab said it was one of the top trends to watch in 2013.

This year’s Hardware Innovation Workshop, held May 14-15 at the College of San Mateo in San Mateo, Calif., during the week leading up to Maker Faire Bay Area, will provide a deep dive into the new world of hardware startups. You’ll learn what VCs are thinking about hardware startups, which startups got funding and why. You’ll meet dozens of newly formed startups that haven’t launched yet. You’ll also learn from maker case studies and from the founders of hardware incubators.

Among our speakers are:

  • Chris Anderson, CEO of 3D Robotics and founder of DIY Drones
  • Massimo Banzi, co-founder of Arduino
  • Robert Faludi, collaborative strategy leader at Digi International
  • Bunnie Huang, co-founder of Chumby
  • Ben Kaufman, founder and CEO of Quirky
  • Scott Miller, CEO and co-founder of Dragon Innovation
  • Alice Taylor, founder, Makie Lab
  • John Park, COO/GM, AQS
  • Carl Bass, CEO of Autodesk
  • Ted Hall, CEO of ShopBot

Learn more about the Hardware Innovation Workshop. O’Reilly Radar readers can register using the code “RADAR13” to save $100.

April 18 2013

Secretive Bundestag session on the export of surveillance technology

On Wednesday the Bundestag held a hearing in which the federal government reported on “current developments in disarmament policy” under agenda item 1 and on the export of German surveillance software under agenda item 2. Little more than the agenda (PDF) of the subcommittee on “Disarmament, Arms Control and Non-Proliferation” was made public; even members of parliament could take part “only with a security clearance.”

Christian Mihr, managing director of Reporter ohne Grenzen (Reporters Without Borders), was invited to the discussion that followed. We document here the organization’s written statement on the session. It says:

German surveillance software and infrastructure are exported all over the world, including to states with a dubious record on press freedom and other human rights. IT-based surveillance technology can search computer hard drives, read encrypted emails, and remotely activate the camera and microphone of a computer or mobile phone.

(…)

Precisely targeted regulation of these exports is urgently needed and can be implemented without negative effects on free expression. Nor would it impair the availability of software for end users and companies, such as tools for circumventing censorship.

The federal government, the statement argues, remains idle when it comes to controlling the export of surveillance technology:

Under German law, adding surveillance infrastructure to Annex AL of the Foreign Trade and Payments Ordinance (Außenwirtschaftsverordnung) would be a conceivable control instrument. Beyond that, the federal government should clarify without delay whether and to what extent Hermes export credit guarantees have been granted in the past for deliveries of surveillance technology.

(…)

The most comprehensive solution, and from the perspective of Reporter ohne Grenzen the best one, would be to integrate surveillance technology into the Wassenaar Arrangement on export controls. According to various sources, negotiated and agreed texts that would allow the inclusion of surveillance technology already exist. The federal government could implement these texts at short notice and thereby put pressure on its international partners.

The full statement is available here as a PDF.

April 17 2013

Four short links: 17 April 2013

  1. Computer Software Archive (Jason Scott) — The Internet Archive is the largest collection of historical software online in the world. Find me someone bigger. Through these terabytes (!) of software, the whole of the software landscape of the last 50 years is settling in. (And documentation and magazines and …). Wow.
  2. 7 in 10 Doctors Have a Self-Tracking Patient — the most common ways of sharing data with a doctor, according to the physicians, were writing it out by hand or giving the doctor a paper printout. (via Richard MacManus)
  3. opsmezzo — open-sourced provisioning tools from the Nodejitsu team. (via Nuno Job)
  4. Hacking Secret Ciphers with Python — teaches complete beginners how to program in the Python programming language. The book features the source code to several ciphers and hacking programs for these ciphers. The programs include the Caesar cipher, transposition cipher, simple substitution cipher, multiplicative & affine ciphers, Vigenère cipher, and hacking programs for each of these ciphers. The final chapters cover the modern RSA cipher and public key cryptography.

March 31 2013

Time for Dueck this Easter: “Vernetzte Welten – Traum oder Alptraum” (Networked Worlds: Dream or Nightmare)

Broaden your horizons this Easter: in 2011, Gunther Dueck gave a talk titled “Vernetzte Welten – Traum oder Alptraum” (Networked Worlds: Dream or Nightmare). The now-retired enfant terrible of IBM, who claims “the highest level of naive optimism in all of Heidelberg,” talks about things that already exist today, often for a few people but not yet for everyone, and that is a central difference in how they should be judged. From early adopters to the grandmother with an iPad: where should our society be headed? Watch the video here.

March 27 2013

Let’s do this the hard way

Recent discoveries of security vulnerabilities in Rails and MongoDB got me thinking about how people get to write software.

In engineering, you don’t get to build a structure people can walk into without years of study. In software, we often write what the heck we want and go back to clean up the mess later. It works, but the consequences start to get pretty monumental when you consider the network effects of open source.

You might think it’s a consequence of the tools we use—running fast and loose with scripting languages. I’m not convinced. Unusually among computer science courses, my alma mater taught us programming 101 with Ada. Ada is a language that more or less requires a retinal scan before you can use the compiler. It was a royal pain to get Ada to do anything you wanted: the philosophical inverse of Perl or Ruby. We certainly came up the “hard way.”

I’m not sure that the hard way was any better: a language that protects you from yourself doesn’t teach you much about the problems you can create.

But perhaps we are in need of an inversion of philosophy. Where Internet programming is concerned, everyone is quick to quote Postel’s law: “Be conservative in what you do, be liberal in what you accept from others.”

The fact is that being liberal in what you accept is really hard. You basically have two options: look carefully for only the information you need, which I think is the spirit of Postel’s law, or implement something powerful that will take care of many use cases. This latter strategy, though seemingly quicker and more future-proof, is what often leads to bugs and security holes, as unintended applications of powerful parsers manifest themselves.
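
As a sketch of the difference, in Python: the narrow approach pulls only the fields it needs out of a payload and treats everything else as inert, while the “powerful parser” shortcut hands untrusted input to a general-purpose evaluator, which is the same class of mistake behind the vulnerabilities mentioned above.

    import json

    def parse_narrow(raw):
        """Liberal in what it accepts, but takes only what it needs."""
        data = json.loads(raw)  # produces inert data, never behavior
        return {"name": str(data.get("name", "")),
                "quantity": int(data.get("quantity", 0))}

    def parse_powerful(raw):
        """The seemingly quicker, 'future-proof' option: a parser
        powerful enough for any use case. eval() runs arbitrary
        Python, so a crafted payload becomes code in your process."""
        return eval(raw)  # never do this with untrusted input

    print(parse_narrow('{"name": "ada", "quantity": "3", "role": "admin"}'))
    # -> {'name': 'ada', 'quantity': 3}; the unexpected "role" is ignored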

My conclusion is this: use whatever language makes sense, but be systematically paranoid. Be liberal in what you accept, but conservative about what you believe.

The coming of the industrial internet

The big machines that define modern life — cars, airplanes, furnaces, and so forth — have become exquisitely efficient, safe, and responsive over the last century through constant mechanical refinement. But mechanical refinement has its limits, and there are enormous improvements to be wrung out of the way that big machines are operated: an efficient furnace is still wasteful if it heats a building that no one is using; a safe car is still dangerous in the hands of a bad driver.

It is this challenge that the industrial internet promises to address by layering smart software on top of machines. The last few years have seen enormous advances in software and computing that can handle gushing streams of data and build nuanced models of complex systems. These have been used effectively in advertising and web commerce, where data is easy to gather and control is easy to exert, and marketers have rejoiced.

Thanks to widespread sensors, pervasive networks, and standardized interfaces, similar software can interact with the physical world — harvesting data, analyzing it in context, and making adjustments in real time. The same data-driven approach that gives us dynamic pricing on Amazon and customized recommendations on Foursquare has already started to make wind turbines more efficient and thermostats more responsive. It may soon obviate the need for human drivers and help blast furnaces anticipate changes in electricity prices.

An electric furnace at the Allegheny Ludlum Steel Corp. in Brackenridge, Pa. Circa 1941.
Photo via: Wikimedia Commons.

Those networks and standardized interfaces also make the physical world broadly accessible to innovative people. In the same way that Google’s Geocoding API makes geolocation available to anyone with a bit of general programming knowledge, Ford’s OpenXC platform makes drive-train data from cars available in real-time to anyone who can write a few basic scripts. That model scales: Boeing’s 787 Dreamliner uses modular flight systems that communicate with each other over something like an Ethernet, with each component presenting an application programming interface (API) by which it can be controlled. Anyone with a brilliant idea for a new autopilot algorithm could (in theory) implement it without particular expertise in, say, jet engine operation.
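
As an illustration of how low that bar is, here is roughly what a geocoding call looks like in Python. The endpoint follows Google’s published Geocoding API; note that current versions of the service require an API key, omitted here for brevity.

    import json
    import urllib.parse
    import urllib.request

    def geocode(address):
        """Resolve a street address to (latitude, longitude) via
        Google's Geocoding web service."""
        url = ("https://maps.googleapis.com/maps/api/geocode/json?"
               + urllib.parse.urlencode({"address": address}))
        with urllib.request.urlopen(url) as resp:
            result = json.load(resp)
        loc = result["results"][0]["geometry"]["location"]
        return loc["lat"], loc["lng"]

    print(geocode("1600 Amphitheatre Parkway, Mountain View, CA"))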

For a complete description of the industrial internet, see our new research report on the topic. In short, we foresee that the industrial internet* will:

  • Draw data from wide sensor networks and optimize systems in real-time.
  • Replace both assets and labor with software intelligence.
  • Bring the software industry’s rapid development and short upgrade cycles to big machines.
  • Mask the underlying complexity of machines behind web-like APIs, making it possible for innovators without specialized training to contribute improvements to the way the physical world works.
  • Create a valuable flow of data that makes decision making easier and more accurate for the operators of big machines as well as for their clients and suppliers.

Our report draws on interviews with industry experts and software innovators to articulate a vision for the coming-together of software and machines. Download the full report for free here, and also read O’Reilly’s ongoing coverage of the industrial internet at oreil.ly/industrial-internet.

* We have adapted our style over the course of our industrial internet investigation. We now use lowercase internet to refer generically to a group of interconnected networks, and uppercase Internet to refer to the public Internet, which includes the World Wide Web.


This is a post in our industrial internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.

January 19 2013

Redigi: secondhand download marketplace plans to launch in Europe

The download marketplace Redigi plans to launch in Europe by the end of March, the Financial Times reports.

Read more

January 17 2013

The software-enabled cars of the near future (industrial Internet links)

OpenXC (Ford Motor) — Ford has taken a significant step in turning its cars into platforms for innovative developers. OpenXC goes beyond the Ford Developer Program, which opens up audio and navigation features, and lets developers get their hands on drivetrain and auto-body data via the on-board diagnostic port. Once you’ve built the vehicle interface from open-source parts, you can use outside intelligence — code running on an Android device — to analyze vehicle data.

Of course, as outside software gets closer to the drivetrain, security becomes more important. OpenXC is read-only at the moment, and it promises “proper hardware isolation to ensure you can’t ‘brick’ your $20,000 investment in a car.”

Still, there are plenty of sophisticated data-machine tie-ups that developers could build with read-only access to the drivetrain: think of apps that help drivers get better fuel economy by changing how they accelerate or, eventually, apps that optimize battery cycles in electric vehicles.
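
To give a flavor of what read-only drivetrain access looks like to a developer, here is a rough Python sketch. OpenXC vehicle interfaces stream simple JSON messages (name/value pairs such as vehicle_speed); the exact signal names and the threshold below should be treated as illustrative rather than as the platform’s definitive vocabulary.

    import json

    def harsh_acceleration_events(stream, jump=4.0):
        """Flag sharp speed increases between successive readings:
        raw material for a fuel-economy coaching app."""
        last = None
        for line in stream:
            msg = json.loads(line)
            if msg.get("name") != "vehicle_speed":
                continue  # ignore signals we don't need
            speed = float(msg["value"])
            if last is not None and speed - last > jump:
                yield {"from": last, "to": speed}
            last = speed

    sample = ['{"name": "vehicle_speed", "value": 30.0}',
              '{"name": "engine_speed", "value": 2100}',
              '{"name": "vehicle_speed", "value": 36.5}']
    print(list(harsh_acceleration_events(sample)))
    # -> [{'from': 30.0, 'to': 36.5}]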

Drivers with Full Hands Get a Backup: The Car (New York Times) — John Markoff takes a look at automatic driver aids — tools like dynamic cruise control and collision-avoidance warnings that represent something of a middle ground between driverless cars and completely manual vehicles. Some features like these have been around for years, many of them using ultrasonic proximity sensors. But some of the newer ones are special, and illustrative of an important element of the industrial Internet: they rely on computer vision, as Google’s driverless car does. Software is taking over some kinds of machine intelligence that had previously resided in specialized hardware, and it’s creating new kinds of intelligence that hadn’t existed in cars at all.


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.

January 11 2013

The future of programming

Programming is changing. The PC era is coming to an end, and software developers now work with an explosion of devices, job functions, and problems that need different approaches from the single machine era. In our age of exploding data, the ability to do some kind of programming is increasingly important to every job, and programming is no longer the sole preserve of an engineering priesthood.

Is your next program for one of these?
Photo credit: Steve Lodefink/Flickr.

Over the course of the next few months, I’m looking to chart the ways in which programming is evolving, and the factors that are affecting it. This article captures a few of those forces, and I welcome comment and collaboration on how you think things are changing.

Where am I headed with this line of inquiry? The goal is to describe the essential skills that programmers will need for the coming decade and the places they should focus their learning, and to differentiate short-term trends from long-term shifts.

Distributed computing

The “normal” environment in which coding happens today is quite different from that of a decade ago. Given targets such as web applications, mobile and big data, the notion that a program only involves a single computer has disappeared. For the programmer, that means we must grapple with problems such as concurrency, locking, asynchronicity, network communication and protocols. Even the most basic of web programming will lead you to familiarity with concepts such as caching.

Because of these pressures we see phenomena at different levels in the computing stack. At a high level, cloud computing seeks to mitigate the hassle of maintaining multiple machines and their network; at the application development level, frameworks try to embody familiar patterns and abstract away tiresome detail; and at the language level, concurrent and networked computing is made simpler by the features offered by languages such as Go or Scala.
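
The concerns are easy to see even in a toy example. The paragraph above names Go and Scala; the sketch below uses Python’s standard library instead, but the issues it surfaces (scheduling, partial failure, results arriving out of order) are the same in any language.

    from concurrent.futures import ThreadPoolExecutor, as_completed
    import urllib.request

    URLS = ["https://example.com/", "https://example.org/",
            "https://example.net/"]

    def fetch(url):
        """Return the size in bytes of one resource."""
        with urllib.request.urlopen(url, timeout=10) as resp:
            return len(resp.read())

    # Even "fetch three pages" raises questions the single-machine
    # era rarely forced on us: how many workers? what if one fails?
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = {pool.submit(fetch, u): u for u in URLS}
        for future in as_completed(futures):  # completion order, not list order
            url = futures[future]
            try:
                print(url, future.result(), "bytes")
            except OSError as exc:  # network failures surface per task
                print(url, "failed:", exc)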

Device computing

Look around your home. There are processors and programming in almost every electronic device you have, which certainly puts your computer in a small minority. Not everybody will be engaged in programming for embedded devices, but many developers will certainly have to learn what it is to develop for a mobile phone. And in the not-so-distant future: cars, drones, glasses and smart dust.

Even within more traditional computing, the rise of the GPU array as an advanced data-crunching coprocessor demands non-traditional programming. Different form factors require different programming approaches. Hobbyists and prototypers alike are bringing hardware to life with Arduino and Processing.

Languages and programmers must respond to issues previously the domain of specialists, such as low memory and CPU speeds, power consumption, radio communication, and hard and soft real-time requirements.

Data computing

The prevailing form of programming today, object orientation, is generally hostile to data. Its focus on behavior wraps up data in access methods, and wraps up collections of data even more tightly. In the mathematical world, data just is: it has no behavior. Yet the rigors of C++ or Java require developers to worry about how it is accessed.

As data and its analysis grow in importance, there’s a corresponding rise in use and popularity of languages that treat data as a first class citizen. Obviously, statistical languages such as R are rising on this tide, but within general purpose programming there’s a bias to languages such as Python or Clojure, which make data easier to manipulate.
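
A small Python illustration of the difference: the records below are plain data, with no class hierarchy or accessor methods in the way, so filtering and aggregating them is direct, much as it would be in R or Clojure.

    from statistics import mean

    # Plain data: no classes, no getters; the records just are.
    readings = [{"sensor": "t1", "celsius": 21.5},
                {"sensor": "t2", "celsius": 19.0},
                {"sensor": "t1", "celsius": 22.5}]

    # Filter and aggregate directly.
    t1 = [r["celsius"] for r in readings if r["sensor"] == "t1"]
    print(mean(t1))  # -> 22.0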

Democratized computing

More people than ever are programming. These smart, uncounted, accidental developers wrangle magic in Excel macros, craft JavaScript and glue stuff together with web services such as IFTTT or Zapier. Quite reasonably, they know little about software development, and aren’t interested in it either.

However, many of these casual programmers will find it easy to generate a mess and get into trouble, all while only really wanting to get things done. At best, this is annoying; at worst, it’s a liability for employers. What’s more, it’s not the programmer’s fault.

How can providers of programmable environments serve the “accidental developer” better? Do we need new languages, better frameworks in existing languages? Is it an educational concern? Is it even a problem at all, or just life?

There are hints of a different future in Bret Victor’s work, and in projects such as Scratch and Light Table.

Dangerous computing

Finally, it’s worth examining the house of cards we’re building with our current approach to software development. The problem is simple: the brain can only fit so much inside it. To be a programmer today, you need to be able to execute the program you’re writing inside your head.

When the problem space gets too big, our reaction is to write a framework that makes the problem space smaller again. And so we have operating systems that run on top of CPUs. Libraries and user interfaces that run on top of operating systems. Application frameworks that run on top of those libraries. Web browsers that run on top of those. JavaScript that runs on top of browsers. JavaScript libraries that run on top of JavaScript. And we know it won’t stop there.

We’re like ambitious waiters stacking one teacup on top of another. Right now, it looks pretty wobbly. We’re making faster and more powerful CPUs, but getting the same kind of subjective application performance that we did a decade ago. Security holes emerge in frameworks, putting large numbers of systems at risk.

Why should we use computers like this, simultaneously building a house of cards and confining computing power to that which the programmer can fit in their head? Is there a way to hit reset on this view of software?

Conclusion

I’ll be considering these trends and more as I look into the future of programming. If you have experience or viewpoints, or are working on research to do things radically differently, I’d love to hear from you. Please leave a comment on this article, or get in touch.

December 18 2012

Interoperating the industrial Internet

One of the most interesting points made in GE’s “Unleashing the Industrial Internet” event was GE CEO Jeff Immelt’s statement that only 10% of the value of Internet-enabled products is in the connectivity layer; the remaining 90% is in the applications that are built on top of that layer. These applications enable decision support, the optimization of large scale systems (systems “above the level of a single device,” to use Tim O’Reilly’s phrase), and empower consumers.

Given the jet engine that was sitting on stage, it’s worth seeing how far these ideas can be pushed. Optimizing a jet engine is no small deal; Immelt said that the engine gained an extra 5-10% efficiency through software, and that adds up to real money. The next stage is optimizing the entire aircraft; that’s certainly something GE and its business partners are looking into. But we can push even harder: optimize the entire airport (don’t you hate it when you’re stuck on a jet waiting for one of those trucks to push you back from the gate?). Optimize the entire air traffic system across the worldwide network of airports. This is where we’ll find the real gains in productivity and efficiency.

So it’s worth asking about the preconditions for those kinds of gains. It’s not computational power; when you come right down to it, there aren’t that many airports or that many flights to track. Something like 10,000 flights are in the air at one time worldwide, and in these days of big data and big distributed systems, that’s not a terribly large number. It’s not our ability to write software; there would certainly be some tough problems to solve, but nothing as difficult as, say, searching the entire web and returning results in under a second.

But there is one important prerequisite for software that runs above the level of a single machine, and that’s interoperability. That’s something the inventors of the Internet understood early on; nothing became a standard unless at least two independent, interoperable implementations existed. The Interop conference didn’t start as a trade show; it started as a technical exercise where everyone brought their experimental hardware and software and worked on it until it all played well together.

If we’re going to build useful applications on top of the industrial Internet, we must ensure, from the start, that the components we’re talking about interoperate. It’s not just a matter of putting HTTP everywhere. Devices need common, interoperable data representations. And that problem can’t be solved just by invoking XML: several years of sad experience have proven that it’s entirely possible to be proprietary under the aegis of “open” XML standards.
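
A hypothetical sketch of the problem: both payloads below are individually well-formed and “open” in format, yet they describe the same measurement and still need vendor-specific glue. The vendor fields are invented for illustration; a shared representation is what would make the adapters unnecessary.

    # Two well-formed payloads describing the same physical quantity.
    vendor_a = {"EGT_degF": 1472.0}                       # Fahrenheit
    vendor_b = {"exhaust": {"temp": 800.0, "unit": "C"}}  # same reading

    def canonical_celsius(payload):
        """Per-vendor adapters: the glue a common standard eliminates."""
        if "EGT_degF" in payload:
            return (payload["EGT_degF"] - 32.0) * 5.0 / 9.0
        if payload.get("exhaust", {}).get("unit") == "C":
            return payload["exhaust"]["temp"]
        raise ValueError("unknown vendor format")

    assert canonical_celsius(vendor_a) == canonical_celsius(vendor_b)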

It’s a hard problem, in part because it’s not simply technical. It’s also a problem of business culture, and the desire to extract as much monetary value from your particular system as possible. We see the consumer Internet devolving into a set of walled gardens, with interoperable protocols but license agreements that prevent you from moving data from one garden into another. Can the industrial Internet do better? It takes a leap of faith to imagine manufacturers of industrial equipment practicing interoperability, at least in part because so many manufacturers have already developed their own protocols and data representations in isolation. But that’s what our situation demands. Should a GE jet engine interoperate with a jet engine from Pratt and Whitney? What would that mean, what efficiencies in maintenance and operations would that entail? I’m sure that any airline would love a single dashboard that would show the status of all its equipment, regardless of vendor. Should a Boeing aircraft interoperate with Airbus and Bombardier in a system to exchange in-flight data about weather and other conditions? What if their flight computers were in constant communication with each other? What would that enable? Leaving aviation briefly: self-driving cars have the potential to be much safer than human-driven cars; but they become astronomically safer if your Toyota can exchange data directly with the BMW coming in the opposite direction. (“Oh, you intend to turn left here? Your turn signal is out, by the way.”)

Extracting as much value as possible from a walled garden is a false optimization. It may lead you to a local maximum in profitability, but it leaves the biggest gains, the 90% that Immelt talked about in his keynote, behind. Tim O’Reilly has talked about the “clothesline paradox”: if you dry your clothes on a clothesline, the money you save doesn’t disappear from the economy, even though it disappears from the electric company’s bottom line. The economics of walled gardens is the clothesline paradox’s evil twin. Building a walled garden may increase local profitability, but it prevents the larger gains, Immelt’s 90% gains in productivity, from ever existing. They never reach the economy.

Can the industrial Internet succeed in breaking down walled gardens, whether they arise from business culture, legacy technology, or some other source? That’s a hard problem. But it’s the problem the industrial Internet must solve if it is to succeed.


This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.

November 30 2012

To eat or be eaten?

One of Marc Andreessen’s many accomplishments was the seminal essay “Why Software is Eating the World.” In it, the creator of Mosaic and Netscape argues for his investment thesis: everything is becoming software. Music and movies led the way, Skype makes the phone company obsolete, and even companies like FedEx and Walmart are all about software: their core competitive advantage isn’t driving trucks or hiring part-time employees, it’s the software they’ve developed for managing their logistics.

I’m not going to argue (much) with Marc, because he’s mostly right. But I’ve also been wondering why, when I look at the software world, I get bored fairly quickly. Yeah, yeah, another language that compiles to the JVM. Yeah, yeah, the JavaScript framework of the day. Yeah, yeah, another new component in the Hadoop ecosystem. Seen it. Been there. Done that. In the past 20 years, haven’t we gained more than the ability to use sophisticated JavaScript to display ads based on a real-time prediction of the user’s next purchase?

When I look at what excites me, I see a much bigger world than just software. I’ve already argued that biology is in the process of exploding, and the biological revolution could be even bigger than the computer revolution. I’m increasingly interested in hardware and gadgetry, which I used to ignore almost completely. And we’re following the “Internet of Things” (and in particular, the “Internet of Very Big Things”) very closely. I’m not saying that software is irrelevant or uninteresting. I firmly believe that software will be a component of every (well, almost every) important new technology. But what grabs me these days isn’t software as a thing in itself, but software as a component of some larger system. The software may be what makes it work, but it’s not about the software.

A dozen or so years ago, people were talking about Internet-enabled refrigerators, a trend which (perhaps fortunately) never caught on. But it led to an interesting exercise: think of the dumbest device in your home, and imagine what could happen if it was intelligent and network-enabled. My furnace, for example: shortly after buying our house, we had the furnace repairman over 7 times during the month of November. And rather than waiting for me to notice that the house was getting cold at 2AM, it would have been nice for a “smart furnace” to notify the repairman and say, “I’m broken, and here’s what’s probably wrong.” (The Nest doesn’t do that, but with a software update it probably could.)

The combination of low-cost, small-footprint computing (the BeagleBone, Raspberry Pi, and the Arduino), along with simple manufacturing (3D printing and CNC machines), and inexpensive sensors (for $150, the Kinect packages a set of sensors that until recently would easily have cost $10,000) means that it’s possible to build smart devices that are much smaller and more capable than anything we could have built back when we were talking about smart refrigerators. We’ve seen Internet-enabled scales, robotic vacuum cleaners, and more is on the way.

At the other end of the scale, GE’s “Unleashing the Industrial Internet” event had a fully instrumented, network-capable jet engine on stage, with dozens of sensors delivering real-time data about the engine’s performance. That data can be used for everything from performance optimization to detecting problems. In a panel, Tim O’Reilly asked Matt Reilly of Accenture, “do you want more Silicon Valley on your turf?” and his immediate reply was “absolutely.”

Even in biology: synthetic biology is basically nothing more than programming with DNA, using a programming language that we don’t yet understand and for which there is still no “definitive guide.” We’re only beginning to get to the point where we can reliably program and build “living software,” but we are certainly going to get there. And the consequences will be profound, as George Church has pointed out.

I’m not convinced that software is going to eat everything. I don’t see us living in a completely virtual world, mediated completely by browsers and dashboards. But I do see everything eating software: software will be a component of everything we do or buy, from our clothing to our food. Why is the FitBit a separate device? Why not integrate it into your shoes? Can we imagine cookies that incorporate proteins that have been engineered to become unappealing when we’ve eaten too much? Yes, we can, though we may not be happy about that. Seriously, I’ve had discussions about genetically engineered food that would diagnose diseases and turn different colors in your digestive track to indicate cancer and other conditions. (You can guess how you read the results).

Andreessen is certainly right in his fundamental argument that software has disrupted, and will continue to disrupt, just about every industry on the planet. He pointed to health care and education as the next industries to be disrupted; and we’re certainly seeing that, with Coursera and Udacity in education, and conferences like StrataRx in health care. We just need to push his conclusion farther. Is a robotic car a case of software eating the driver, or of the automobile eating software? You tell me. At the Industrial Internet event, Andreessen was quoted as saying “We only invest in hardware/software hybrids that would collapse if you pulled the software out.” Is an autonomous car something that would collapse if you pulled the software out? The car is still drivable. In any case, my “what’s the dumbest device in the house” exercise is way too limiting. When are we going to build something that we can’t now imagine, that isn’t simply an upgrade of what we already have? What would it mean for our walls and floors, or our plumbing, to be intelligent? At the other extreme, when will we build devices where we don’t even notice that they’ve “eaten” software? Again, Matt Reilly: “It will be about flights that are on time, luggage that doesn’t get lost.”

In the last few months, I’ve seen a number of articles on the future of venture investment. Some argue that it’s too easy and inexpensive to look for “the next Netscape,” and as a consequence, big ambitious projects are being starved. It’s hard for me to accept that. Yes, there’s a certain amount of herd thinking in venture capital, but investors also know that when everyone is doing the same thing, they aren’t going to make any money. Fred Wilson has argued that momentum is moving from consumer Internet to enterprise software, certainly a market that is ripe for disruption. But as much as I’d like to see Oracle disrupted, that still isn’t ambitious enough.

Innovation will find the funds that it needs (and it isn’t supposed to be easy). With both SpaceX and Tesla Motors, Elon Musk has proven that it’s possible for the right entrepreneur to take insane risks and make headway. Of course, neither has “succeeded” in the sense of a lucrative IPO or buyout. That’s not the point either, since being an entrepreneur is all about risking failure. Neither SpaceX nor Tesla is a Facebook-like “consumer web” startup, nor even an enterprise software or education startup. They’re not “software” at all, though they’ve both certainly eaten a lot of software to get where they are. And that leads to the most important question:

What’s the next big thing that’s going to eat software?

October 09 2012

Tracking Salesforce’s push toward developers

Have you ever seen Salesforce’s “no software” graphic? It’s the word “software” surrounded by a circle with a red line through it. Here’s a picture of the related (and dancing) “no software” mascot.

Now, if you consider yourself a developer, this is a bit threatening, no? Imagine sitting at a Salesforce event in 2008 in Chicago while Salesforce.com’s CEO, Marc Benioff, swiftly works an entire room of business users into an anti-software frenzy. I was there to learn about Force.com, and I’ll summarize the message I understood four years ago as “Not only can companies benefit from Salesforce.com, they also don’t have to hire developers.”

The message resonated with the audience. Salesforce had been using this approach for a decade: Don’t buy software you have to support, maintain, and hire developers to customize. Use our software-as-a-service (SaaS) instead.  The reality behind Salesforce’s trajectory at the time was that it too needed to provide a platform for custom development.

Salesforce’s dilemma: They needed developers

This “no software” message was enough for the vast majority of the small-to-medium-sized business (SMB) market, but to engage with companies at the largest scale, you need APIs and you need to be able to work with developers. At the time, in 2008, Salesforce was making moves toward the developer community. First there was Apex, then there was Force.com.

In 2008, I evaluated Force.com, and while it was capable, it didn’t strike me as something that would appeal to most developers outside of existing Salesforce customers. Salesforce was aiming at the corporate developers building software atop competing stacks like Oracle. While there were several attempts to sell it as such, it wasn’t a stand-alone product or framework. In my opinion, no developer would assess Force.com and opt to use it as the next development platform.

This 2008 TechCrunch article announcing the arrival of Salesforce’s Developer-as-a-Service (DaaS) platform serves as a reminder of what Salesforce had in mind. They were still moving forward with an anti-software message for the business side while continuing to make moves into the developer space. Salesforce built a capable platform, but looking back, Force.com felt like an even more constrained version of Google App Engine. In other words, capable and scalable, but at the time a bit constraining for the general developer population. Don’t get me wrong: Force.com wasn’t a business failure by any measure; they have an impressive client list even today. What they didn’t achieve was traction and awareness among the developer community.

2010: Who bought Heroku? Really?

When Salesforce.com purchased Heroku, I was initially surprised. I didn’t see it coming, but it made perfect sense after the fact. Heroku is very much an analog to Salesforce for developers, and Heroku brought something to Salesforce that Force.com couldn’t achieve: developer authenticity and community.

Heroku, like Salesforce, is the opposite of shrink-wrapped software. There’s no investment in on-premise infrastructure, and once you move to Heroku — or any other capable platform-as-a-service (PaaS), like Joyent — you question why you would ever do without such a service. As a developer, once you’ve made the transition to never again worrying about running “yum update” on a CentOS machine or about kernel patches in a production deployment, it is difficult to go back to a world of system administration.

Yes, there are arguments to be made for procuring your own servers and taking on the responsibility for system administration: backups, worrying about disks, and the 100 other things that come along with owning infrastructure. But those arguments really only start to make sense once you achieve the scale of a large multi-national corporation (or after a freak derecho in Reston, Va).

Jokes about derecho-prone data centers aside, this market is growing up, and with it comes an unstoppable trend: yesterday’s system administrator is being made obsolete. With capable options like Joyent and Heroku, there’s very little reason (other than accounting) for any SMB to own hardware infrastructure. Owning it is a large capital expense compared with the relatively small operational expense of running a scalable architecture on a Heroku or a Joyent.

Replace several full-time operations employees with software-as-a-service; shift the cost to developers who create real value; insist on a proper service-level agreement (SLA); and if you’re really serious about risk mitigation, use several platforms at once.  Heroku is exactly the same play Salesforce made for customer relationship management (CRM) over the last decade, except this time they are selling PostgreSQL and a platform to run applications.

Heroku is closer to what Salesforce was aiming for with Force.com.  Here are two things to note about this Heroku acquisition almost two years after the fact:

  1. Force.com was focused on developers — a certain kind of developer in the “enterprise.” While it was clear that Salesforce had designs on using Force.com to expand the market, the company’s existing marketing and product management functions ended up creating something too tightly tied to the Salesforce brand, along with its anti-developer message.
  2. Salesforce.com’s developer-focused acquisitions are isolated from the Salesforce.com brand (on purpose?) — When Benioff discussed Heroku at Dreamforce, he made it clear that Heroku would remain “separate.”  While it is tough to know how “separate” Heroku remains, the brand has changed very little, and I think this is the important thing to note two years after the fact. Salesforce understands that they need to attract independent developers and they understand that the Salesforce brand is something of a “scarecrow” for this audience. They invested an entire decade in telling businesses that software isn’t necessary, and senior management is too smart to confuse developers with that message.

Investments point toward greater developer outreach: Sauce Labs

This brings me to Sauce Labs — a company that recently raised $3 million from “Triage Ventures as well as [a] new investor Salesforce.com,” according to Jolie O’Dell at VentureBeat.

Sauce Labs provides a hosted web testing platform. I’ve used it for some one-off testing jobs, and it is impressive. You can spin up a testing machine in about two minutes from an array of operating systems, mobile devices, and browsers, and then run a test script in either Selenium or WebDriver. The platform can be used for acceptance testing, and Jason Huggins and John Dunham’s emphasis of late has been mobile testing. Huggins supported the testing grid at Google, and he started Selenium while at ThoughtWorks in 2004. By every measure, Sauce Labs is as much a developer’s company as Heroku is.
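
To make that concrete, here’s a minimal sketch of what one of those hosted tests can look like, using Python’s Selenium bindings and assuming a Sauce Labs account; the credentials, capability values, and the page under test are placeholders for the example, not anything from my one-off jobs.

```python
# A minimal sketch of one hosted browser test, assuming a Sauce Labs account.
# The credentials, capability values, and URL below are placeholders.
import os
from selenium import webdriver

capabilities = {
    "browserName": "firefox",
    "platform": "Windows 7",        # any OS/browser pair the grid offers
    "version": "14",
    "name": "homepage smoke test",  # label that shows up in the dashboard
}

hub = "http://%s:%s@ondemand.saucelabs.com:80/wd/hub" % (
    os.environ["SAUCE_USERNAME"], os.environ["SAUCE_ACCESS_KEY"])

driver = webdriver.Remote(command_executor=hub,
                          desired_capabilities=capabilities)
try:
    driver.get("http://example.com/")  # drive the remote browser
    assert "Example" in driver.title   # a trivial acceptance check
finally:
    driver.quit()                      # releases the remote machine
```

The appeal mirrors Heroku’s: the browser farm is someone else’s problem, and the test script is the only artifact you maintain.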

Sauce Labs, like Heroku before it, also satisfies the Salesforce.com analogy perfectly. Say I have a company that develops a web site. If I’m continuously deploying this application to a platform like Joyent or Heroku, I also need some sort of continuous automated testing system. If I need to test on an array of browsers and devices, would I procure the hardware to build my own testing grid, or … you can see where I’m going.

I also think I can see where Salesforce is going. They didn’t acquire Sauce Labs, but this investment is another data point and another view into what Salesforce.com is paying attention to. It has taken them four or five years, but they are continuing a push toward developers. Heroku stood out from the list of Salesforce acquisitions: it wasn’t CRM-, sales-, or marketing-focused; it was a pure technology play. Salesforce’s recent investments, from Sauce Labs to Appirio to Urban Airship, suggest that Salesforce is becoming more relevant to the individual developer who is uninterested in Salesforce’s other product offerings.

Some random concluding conjecture

Although I think it would be too expensive, I wonder what would happen if Salesforce acquired GitHub. GitHub just received an unreal $100 million investment, so I don’t see it happening. But if you were to combine GitHub, Heroku, and Sauce Labs into a single company, you’d have a one-stop shop for the majority of the development and production infrastructure that people are paying attention to. Add an Atlassian to the mix, and it would be tough to avoid that company.

This is nothing more than conjecture, but I do get the sense that there has been an interesting shift happening at Salesforce ever since 2008. I think the next few years are going to see even more activity.

July 30 2012

Open source won

I heard the comments a few times at the 14th OSCON: the conference has lost its edge. The comments resonated with my own experience — a shift in demeanor, a more purposeful, optimistic attitude, less itching for a fight. Yes, the conference has lost its edge; it doesn’t need one anymore.

Open source won. It’s not that an enemy has been vanquished or that proprietary software is dead; there’s just not much left to argue about when it comes to adopting open source. After more than a decade of low-cost, lean startup culture successfully developing on open source tools, open source is clearly a legitimate, mainstream option for technology tools and innovation.

And open source is not just for hackers and startups. A new class of innovative, widely adopted technologies has emerged from the open source culture of collaboration and sharing — turning the old model of replicating proprietary software as open source projects on its head. Think GitHub, D3, Storm, Node.js, Rails, Mongo, Mesos or Spark.

We see more enterprise and government folks intermingling with the stalwart open source crowd who have been attending OSCON for years. And these large organizations are actively adopting many of the open source technologies we track, e.g., web development frameworks, programming languages, content management, and data management and analysis tools.

We hear fewer concerns about support or needing geek-level technical competency to get started with open source. In the Small and Medium Business (SMB) market we see mass adoption of open source for content management and ecommerce applications — even for self-identified technology newbies.

MySQL appears as popular as ever and remains open source after three years of Oracle control, while Microsoft is pushing open source JavaScript as a key part of its web development environment and adding more explicit support for other open source languages. Oracle and Microsoft are not likely to radically change their business models, but their recent efforts show that open source can work in many business contexts.

Even more telling:

  • With so much of the consumer web undergirded with open source infrastructure, open source permeates most interactions on the web.
  • The massive $100 million GitHub investment validates the open collaboration model and culture — forking becomes normal.

What does winning look like? Open source is mainstream and a new norm — for startups, small business, the enterprise, and government. Innovative open source technologies are creating new business sectors and ecosystems (e.g., the distribution options, tools, and services companies building around Hadoop). And what’s most exciting is the notion that the collaborative, sharing culture that permeates the open source community is spreading to the enterprise and government, with the same impact on innovation and productivity.

So, thanks to all of you who made the open source community a sustainable movement, the ones who were there when … and all the new folks embracing the culture. I can’t wait to see the new technologies, business sectors and opportunities you create.


July 08 2012

Week in review: software licenses, ACTA, ancillary copyright

The European Court of Justice rules that software licenses bought as downloads may also be resold, the European Parliament kills ACTA, the ancillary copyright …

Read more

May 21 2012

Week in review: copyright debate, button solution, cloud security

In the copyright dispute, SPD net politician Klingbeil notes “failures” on the part of policymakers, and the “button solution” against cost traps on the web takes effect …

Read more

May 19 2012

Finally, mystery of the famous faces of art may be revealed

Californian university project will use facial recognition software to identify subjects of paintings

A Californian university has won funding to use advanced facial recognition technology to try to solve the mysteries of some of the world's most famous works of art.

Professor Conrad Rudolph said the idea for the experiment came from watching news and detective shows such as CSI, which have a constant theme of using advanced computers to recognise unknown faces, from murder victims to wanted criminals.

Rudolph, professor of medieval art history at the University of California at Riverside, realised he might be able to apply that cutting-edge forensic science to some of the oldest mysteries in art: identifying the real people in paintings such as Vermeer's Girl with a Pearl Earring, Hals's The Laughing Cavalier or thousands of other portraits and busts where the identity of the subject has been lost. Work on the project should begin within a month or so.

Police and forensic scientists can use facial recognition software that identifies individuals by measuring certain key features. For example, it might measure the distance between someone's eyes or the gap between their mouth and their nose. In real life such measurements should be almost as unique as a fingerprint. Rudolph is hoping that the same might be true of portraiture, whether it is a sculpted bust or a painting.
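
As a toy illustration of that measurement idea (not the project’s actual software), the sketch below turns a handful of invented landmark points into a scale-invariant signature and compares two faces; the coordinates and the choice of features are assumptions for the example.

```python
# A toy sketch of distance-based face comparison; the landmarks are invented
# (x, y) points for features such as eye corners, nose tip, and mouth corners.
import math

def signature(landmarks):
    """All pairwise distances, divided by the largest one so the vector is
    scale-invariant (a bust and a portrait will never share a scale)."""
    dists = [math.hypot(ax - bx, ay - by)
             for i, (ax, ay) in enumerate(landmarks)
             for (bx, by) in landmarks[i + 1:]]
    longest = max(dists)
    return [d / longest for d in dists]

def difference(face_a, face_b):
    """Euclidean gap between two signatures; smaller means the facial
    proportions agree more closely."""
    return math.sqrt(sum((x - y) ** 2
                         for x, y in zip(signature(face_a), signature(face_b))))

# Five invented landmarks per face: two eyes, nose tip, two mouth corners.
portrait = [(102, 80), (158, 82), (130, 120), (110, 150), (150, 151)]
bust     = [(100, 78), (160, 80), (131, 122), (108, 149), (152, 150)]
print(difference(portrait, bust))  # a small value suggests a plausible match
```

Normalizing by the longest distance matters here: a bust and a portrait will never share a scale, but their proportions might.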

To start with, his team will use facial recognition software on death masks of known individuals and then compare them to busts and portraits. If the software can find a match where Rudolph and his team know one exists, then it shows the technique works and can be used on unknown subjects to see if it can match them up with known identities.
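
Continuing the toy sketch above, that validation step would look something like the loop below: for each death mask of a known sitter, check whether that sitter’s portrait scores as the closest match. The dictionaries and the acceptance threshold are hypothetical, and difference() is the function from the previous sketch.

```python
# Continues the sketch above: validate the method on death masks of known
# sitters. death_masks and portraits are hypothetical dicts mapping a name
# to a landmark list; difference() is the function defined earlier.
THRESHOLD = 0.05  # assumed acceptance cutoff, tuned on the known pairs

def validate(death_masks, portraits):
    """Fraction of death masks whose closest portrait is the right sitter."""
    hits = 0
    for name, mask in death_masks.items():
        best = min(portraits, key=lambda p: difference(mask, portraits[p]))
        if best == name and difference(mask, portraits[best]) < THRESHOLD:
            hits += 1
    return hits / len(death_masks)
```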

The identities of the subjects of some of the most famous pictures in the world are unknown, including Girl with a Pearl Earring, the 17th-century portrait that inspired a film starring Scarlett Johansson. The Imagined Lives exhibition now running at London's National Portrait Gallery features portraits of 14 unknown subjects. Many of those paintings were once thought to be of historical figures such as Elizabeth I, but the identities are now disputed. The truth behind several paintings of Shakespeare – such as the Chandos portrait and the Cobbe portrait – has also been much disputed. It is possible facial recognition software could help solve these mysteries.

To be identified, the subject of a portrait would need to be matched to the identity of another named person in a separate picture. But Rudolph has some tricks up his sleeve. He believes that another forensic technique – whereby an "ageing" programme is run on a subject – could also help solve art mysteries. In fighting crime the software is usually used to produce "adult" pictures of children who have been missing for many years. But it could see if the Girl with a Pearl Earring had been painted again as a much older woman, whose identity might be known.

Away from the high-profile cases there is a legion of other unknown subjects that might be more easily identified. In many works from before the 19th century, wealthy patrons often inserted themselves, their families, or friends and business associates into crowd scenes.

Facial recognition technology could be used to identify some of these people from already known works and thus provide insight into personal, political and business relationships of the day. In other cases families in wealthy homes commissioned busts of relatives that were often sold when estates went bankrupt or families declined.

The new technique could identify many of these people by linking the busts to known portraits. "These are historical documents and they can teach us things. Works of art can show us political connections or business links. It opens up a whole new window into the past," Rudolph said.

In order to transfer the process to analysing faces in works of art, some technical issues will need to be overcome. Portraits are in two dimensions and are also an artistic interpretation rather than a definitive likeness. In some cases, the painter might have simply not been very accurate, or attempted to flatter a subject, which would make recognition more difficult.

"It is different using this on art rather than an actual human," said Rudolph, "But we are trying to test the limits of the technology now and then who knows what advances may happen in the future? This is a fast-moving field."

