February 08 2014

Four short links: 7 February 2014

  1. 12 Predictions About the Future of Programming (Infoworld) — not a bad set of predictions, except for the inane “squeezing” view of open source.
  2. Conceal (Github) — Facebook Android tool for apps to encrypt data and large files stored in public locations, for example SD cards.
  3. Dreamliner Software — all three of the jet’s navigation computers failed at the same time. “The cockpit software system went blank,” IBN Live, an Indian television station, reported. The Internet of Rebooting Things.
  4. Contiki — an open source OS for the Internet of Things.

    November 01 2013

    Four short links: 1 November 2013

    1. Analogy as the Core of Cognition (YouTube) — a Douglas Hofstadter lecture at Stanford.
    2. Why Isn’t Programming Futuristic? (Ian Bicking) — delicious provocations for the future of programming languages.
    3. Border Check — visualisation of where your packets go, and the laws they pass through to get there.
    4. Pi Noir — infrared Raspberry Pi camera board. (via DIY Drones)

    July 31 2013

    Four short links: 31 July 2013

    1. How to Easily Resize and Cache Images for the Mobile Web (Pete Warden) — I set up a server running the excellent ImageProxy open-source project, and then I placed a Cloudfront CDN in front of it to cache the results. (a how-to covering the tricksy bits)
    2. Google’s Position on Net Neutrality Changes? (Wired) — At issue is Google Fiber’s Terms of Service, which contains a broad prohibition against customers attaching “servers” to its ultrafast 1 Gbps network in Kansas City. Google wants to ban the use of servers because it plans to offer a business class offering in the future. [...] In its response [to a complaint], Google defended its sweeping ban by citing the very ISPs it opposed through the years-long fight for rules that require broadband providers to treat all packets equally.
    3. The Future of Programming (Bret Victor) — gorgeous slides, fascinating talk, and this advice from Alan Kay: I think the trick with knowledge is to “acquire it, and forget all except the perfume” — because it is noisy and sometimes drowns out one’s own “brain voices”. The perfume part is important because it will help find the knowledge again to help get to the destinations the inner urges pick.
    4. psd.rb — Ruby code for reading PSD files (MIT licensed).

    April 16 2013

    What is probabilistic programming?

    Probabilistic programming languages are in the spotlight, thanks to the announcement of a new DARPA program to support fundamental research into them. But what is probabilistic programming? What can we expect from this research? Will this effort pay off? How long will it take?

    A probabilistic programming language is a high-level language that makes it easy for a developer to define probability models and then “solve” these models automatically. These languages incorporate random events as primitives, and their runtime environments handle inference. Because the runtime takes care of inference, modeling becomes simply a matter of programming, with a clean separation between modeling and inference. This can vastly reduce the time and effort associated with implementing new models and understanding data. Just as high-level programming languages transformed developer productivity by abstracting away the details of the processor and memory architecture, probabilistic languages promise to free the developer from the complexities of high-performance probabilistic inference.
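
    To make this concrete, here is a minimal sketch in plain Python of the two ingredients such a language supplies as primitives: random draws for unknown quantities and a generative recipe for the data. This is an illustration with made-up names and numbers, not any real probabilistic language; in a true PPL the runtime would solve the model for us.

    ```python
    import random

    # A hypothetical model written as ordinary code: a coin with an unknown
    # bias, and ten flips generated from that bias. In a real PPL, "bias"
    # would be a latent variable the runtime could infer automatically.
    def coin_model():
        bias = random.random()                               # prior over the unknown cause
        flips = [random.random() < bias for _ in range(10)]  # data generated from it
        return bias, flips
    ```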

    What does it mean to perform inference automatically? Let’s compare a probabilistic program to a classical simulation such as a climate model. A simulation is a computer program that takes some initial conditions such as historical temperatures, estimates of energy input from the sun, and so on, as an input. Then it uses the programmer’s assumptions about the interactions between these variables that are captured in equations and code to produce forecasts about the climate in the future. Simulations are characterized by the fact that they only run in one direction: forward, from causes to hypothesized effects.
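
    As a toy illustration of this forward direction, the following sketch runs invented assumptions (a fixed warming trend plus noise; the numbers are placeholders, not climate science) from initial conditions to a forecast:

    ```python
    import random

    # Forward simulation only: assumptions in, forecast out.
    def simulate_temperature(start_temp, trend_per_year, years, noise=0.3):
        temps = [start_temp]
        for _ in range(years):
            # Each year: deterministic trend plus random variability.
            temps.append(temps[-1] + trend_per_year + random.gauss(0.0, noise))
        return temps

    forecast = simulate_temperature(start_temp=14.0, trend_per_year=0.02, years=50)
    print(forecast[-1])  # one hypothesized future, given the assumptions
    ```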

    A probabilistic program turns this around. Given a universe of possible interactions between different elements of the climate system and a collection of observed data, we could automatically learn which interactions are most effective in explaining the observations — even if these interactions are quite complex. How does this work? In a nutshell, the probabilistic language’s runtime environment runs the program both forward and backward. It runs forward from causes to effects (data) and backward from the data to the causes. Clever implementations will trade off between these directions to efficiently home in on the most likely explanations for the observations.
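
    The crudest way to see this two-way operation is rejection sampling: guess a cause from the prior, run the program forward, and keep the guess only if the simulated data match the observation. The sketch below applies this to a coin-flip model; real PPL runtimes use far cleverer strategies (MCMC, importance sampling), but the principle is the same. All numbers here are illustrative.

    ```python
    import random

    observed_heads = 8  # suppose we observed 8 heads in 10 flips

    def flip_coin(bias, n=10):
        # Forward direction: causes (bias) to effects (head count).
        return sum(random.random() < bias for _ in range(n))

    accepted = []
    while len(accepted) < 1000:
        bias = random.random()                 # propose a cause from the prior
        if flip_coin(bias) == observed_heads:  # condition on the observed data
            accepted.append(bias)              # keep causes that explain it

    # Posterior mean of the bias; analytically this is Beta(9, 3), mean 0.75.
    print(sum(accepted) / len(accepted))
    ```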

    Better climate models are but one potential application of probabilistic programming. Others include: shorter and more humane clinical trials with fewer unneeded side effects and more accurate outcomes; machine perception that transcends the capabilities of the now-ubiquitous quadcopters and even Google’s self-driving cars; and “nervous systems” that fuse data from massively distributed and noisy sensor networks to better understand both the natural world and artificial environments.

    Of course, any technology this general carries a lot of uncertainty around its development path and eventual impact. So much depends on complex interactions with other technology threads and, ultimately, social factors and regulation. With all possible humility, here is one sample from the predictive distribution, conditioned on what we know so far:

    • Phase I — Probabilistic programming will transform the practice of data science by unifying anecdotal reasoning with more reliable statistical approaches. If data science is first and foremost about telling stories, then probabilistic programming is in many ways the perfect tool. Practitioners will be able to leverage the persuasive power of narrative, while staying on firm quantitative ground.
    • Phase II — Practitioners will really start to push the boundaries of modeling in fundamental ways in order to address many applications that don’t fit well into the current machine learning, text mining, or graph analysis paradigms. Many real-world datasets are a mixture of tabular, relational, textual, geospatial, audiovisual, and other data types. Probabilistic programs can weave all of these pieces together in natural ways. Current solutions that claim to integrate heterogeneous data typically do so by beating it all into a similar form, losing much of the underlying structure along the way.
    • Phase III — Probabilistic programming will push well into territory that is universally recognized as artificial intelligence. As we’re often reminded, intelligent systems are very application-specific. Good chess algorithms are unlike Google’s self-driving car, which is totally different from IBM’s Watson. But probabilistic programs can be layered and modularized, with subsystems that specialize in particular problem domains, but embedded in a shared fabric that recognizes the current context and brings appropriate modeling subsystems to bear.

    What will it take to make all this real? The conceptual underpinnings of probabilistic programming languages are well in hand, thanks to trailblazing work by research groups at MIT, UMass Amherst, Microsoft Research, Harvard, and elsewhere. The core challenge at this point is developing performant inference engines that can efficiently solve the very wide range of models that these languages can express. We’ll also need new debugging, optimization, and visualization tools to help developers get the most from these systems.

    This story will take years to play out in full, but I expect we’ll see real progress over the next three to four years. I’m excited.

    Want to learn more? BUGS is a probabilistic programming language originally developed by statisticians more than 20 years ago. While it has a number of limitations around expressivity and dataset size, it’s a great way to get your feet wet. Also check out Rob Zinkov’s tutorial post, which includes examples of several models. Church is the most ambitious probabilistic programming language; don’t miss the tutorials, though it may not be the most accessible or practical option until its inference engine and toolset mature. For that reason, factorie might be a better bet in the short term, especially if you like Scala, as might Microsoft Research’s infer.net with its C# and F# bindings. The proceedings from a recent academic workshop provide a great snapshot of the field as of late 2012. Finally, this video from a long-defunct startup that I co-founded takes one stab at explaining many of the concepts underlying probabilistic programming, there referred to by the more general term probabilistic computing.

    January 11 2013

    The future of programming

    Programming is changing. The PC era is coming to an end, and software developers now work with an explosion of devices, job functions, and problems that need different approaches from the single machine era. In our age of exploding data, the ability to do some kind of programming is increasingly important to every job, and programming is no longer the sole preserve of an engineering priesthood.

    Is your next program for one of these?
    Photo credit: Steve Lodefink/Flickr.

    Over the course of the next few months, I’m looking to chart the ways in which programming is evolving, and the factors that are affecting it. This article captures a few of those forces, and I welcome comment and collaboration on how you think things are changing.

    Where am I headed with this line of inquiry? The goal is to describe the essential skills that programmers will need for the coming decade, the places they should focus their learning, and to distinguish short-term trends from long-term shifts.

    Distributed computing

    The “normal” environment in which coding happens today is quite different from that of a decade ago. Given targets such as web applications, mobile and big data, the notion that a program only involves a single computer has disappeared. For the programmer, that means we must grapple with problems such as concurrency, locking, asynchronicity, network communication and protocols. Even the most basic of web programming will lead you to familiarity with concepts such as caching.
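
    Two of those concerns in miniature, sketched in Python (the function names and sizes are illustrative, not from any particular framework): shared state that must be guarded by a lock, and caching of expensive results.

    ```python
    import threading
    from functools import lru_cache

    counter = 0
    lock = threading.Lock()

    def record_request():
        global counter
        with lock:            # without the lock, concurrent increments can race
            counter += 1

    @lru_cache(maxsize=1024)  # repeated calls for the same path hit the cache
    def render_page(path):
        return "<html><body>%s</body></html>" % path

    threads = [threading.Thread(target=record_request) for _ in range(100)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # reliably 100, thanks to the lock

    page = render_page("/home")
    print(page is render_page("/home"))  # True: second call served from the cache
    ```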

    Because of these pressures, we see phenomena at different levels in the computing stack. At a high level, cloud computing seeks to mitigate the hassle of maintaining multiple machines and their network; at the application development level, frameworks try to embody familiar patterns and abstract away tiresome detail; and at the language level, concurrent and networked computing is made simpler by the features offered by languages such as Go or Scala.

    Device computing

    Look around your home. There are processors and programming in almost every electronic device you have, which certainly puts your computer in a small minority. Not everybody will be engaged in programming for embedded devices, but many developers will certainly have to learn what it is to develop for a mobile phone. And in the not so distant future: cars, drones, glasses and smart dust.

    Even within more traditional computing, the rise of the GPU array as an advanced data-crunching coprocessor demands non-traditional programming. Different form factors require different programming approaches. Hobbyists and prototypers alike are bringing hardware to life with Arduino and Processing.

    Languages and programmers must respond to issues previously the domain of specialists, such as constrained memory and CPU speed, power consumption, radio communication, and hard and soft real-time requirements.

    Data computing

    The prevailing form of programming today, object orientation, is generally hostile to data. Its focus on behavior wraps up data in access methods, and wraps up collections of data even more tightly. In the mathematical world, data just is; it has no behavior. Yet the rigors of C++ or Java require developers to worry about how it is accessed.

    As data and its analysis grow in importance, there’s a corresponding rise in the use and popularity of languages that treat data as a first-class citizen. Obviously, statistical languages such as R are rising on this tide, but within general purpose programming there’s a bias toward languages such as Python or Clojure, which make data easier to manipulate.
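
    To make the contrast concrete, here is the same record both ways in Python; the class below is a deliberately small caricature of the object-oriented style, not anyone’s recommended design, and the values are invented.

    ```python
    # Behavior-centric style: the data is hidden behind accessor methods.
    class Reading:
        def __init__(self, city, temp):
            self._city = city
            self._temp = temp

        def city(self):
            return self._city

        def is_warm(self):
            return self._temp > 20

    # Data-first style: the record "just is", and generic operations apply
    # directly, with no accessors in the way.
    readings = [{"city": "Wellington", "temp": 16.2},
                {"city": "Sydney", "temp": 22.8}]
    warm_cities = [r["city"] for r in readings if r["temp"] > 20]
    print(warm_cities)  # ['Sydney']
    ```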

    Democratized computing

    More people than ever are programming. These smart, uncounted, accidental developers wrangle magic in Excel macros, craft JavaScript and glue stuff together with web services such as IFTTT or Zapier. Quite reasonably, they know little about software development, and aren’t interested in it either.

    However, many of these casual programmers will find it easy to generate a mess and get into trouble, all while only really wanting to get things done. At best this is annoying; at worst, it’s a liability for employers. What’s more, it’s not the programmer’s fault.

    How can providers of programmable environments serve the “accidental developer” better? Do we need new languages, better frameworks in existing languages? Is it an educational concern? Is it even a problem at all, or just life?

    There are hints towards a different future in Bret Victor’s work, and in projects such as Scratch and Light Table.

    Dangerous computing

    Finally, it’s worth examining the house of cards we’re building with our current approach to software development. The problem is simple: the brain can only fit so much inside it. To be a programmer today, you need to be able to execute the program you’re writing inside your head.

    When the problem space gets too big, our reaction is to write a framework that makes the problem space smaller again. And so we have operating systems that run on top of CPUs. Libraries and user interfaces that run on top of operating systems. Application frameworks that run on top of those libraries. Web browsers that run on top of those. JavaScript that runs on top of browsers. JavaScript libraries that run on top of JavaScript. And we know it won’t stop there.

    We’re like ambitious waiters stacking one teacup on top of another. Right now, the stack looks pretty wobbly. We’re making faster and more powerful CPUs, yet getting the same kind of subjective application performance that we did a decade ago. Security holes in widely used frameworks put large numbers of systems at risk.

    Why should we use computers like this, simultaneously building a house of cards and confining computing power to that which the programmer can fit in their head? Is there a way to hit reset on this view of software?

    Conclusion

    I’ll be considering these trends and more as I look into the future of programming. If you have experience or viewpoints, or are working on research to do things radically differently, I’d love to hear from you. Please leave a comment on this article, or get in touch.
