May 16 2013

Three organizations pressing for change in society’s approach to computing

Taking advantage of a recent trip to Washington, DC, I had the privilege of visiting three non-profit organizations that are leaders in the application of computers to changing society. First, I attended the annual meeting of the Association for Computing Machinery’s US Public Policy Council (USACM). Several members of the council then visited the Open Technology Institute (OTI), which is a section of the New America Foundation (NAF). Finally, I caught the end of the first general-attendance meeting of the Open Source Initiative (OSI).

In different ways, these organizations are all putting in tremendous effort to bring the benefits of computing to people from all walks of life and to preserve the vigor and creativity of computing platforms. Through my meetings I found out what sorts of systemic change are required to achieve these goals and saw these organizations grapple with a variety of strategies for getting there. This report is not a statement from any of these groups, just my personal observations.

USACM Public Policy Council

The Association for Computing Machinery (ACM) has been around almost as long as electronic computers: it was founded in 1948. I joined in the 1980s but was mostly a passive recipient of information. Although last week’s policy meeting was the first I had ever attended at ACM, many of the attendees were familiar to me from previous work on either technical or policy problems.

As we met, open data was in the news thanks to a White House memo reiterating its call for open government data in machine-readable form. Although the movement for these data sets has been congealing from many directions over the past few years, USACM was out in front back in early 2009 with a policy recommendation for consumable data.

USACM weighs in on such policy issues as copyrights and patents, accessibility, privacy, innovation, and the various other topics on which you’d expect computer scientists to have professional opinions. I felt that the group’s domestic scope is oddly out of sync with the larger ACM, which has been assiduously expanding overseas for many years. A majority of ACM members now live outside the United States.

In fact, many of today’s issues have international reach: cybersecurity, accessibility, and copyright, to name some obvious ones. Although USACM has submitted comments on ACTA and the Trans-Pacific Partnership, they don’t maintain regular working contacts with organizations outside the country. Perhaps they’ll have the cycles to add more international connections in the future. Eugene Spafford, security expert and current chair of the policy committee, pointed out that many state-level projects in the US would be worth commenting on as well.

It’s also time to recognize that policy is made by non-governmental organizations as well as governments. Facebook and Google, for example, are setting policies about privacy. The book The End of Power: From Boardrooms to Battlefields and Churches to States, Why Being In Charge Isn’t What It Used To Be by Moisés Naím claims that power is becoming more widely distributed (not ending, really) and that a bigger set of actors should be taken into account by people hoping to effect change.

USACM represents a technical organization, so it seeks to educate policy decision-makers on issues where there is an intersection of computing technology and public policy. Their principles derive from the ACM Code of Ethics and Professional Conduct, which evolved from input by many ACM members and the organization’s experience. USACM papers usually focus on pointing out the technical implications of legislative or regulatory choices.

When the notorious SOPA and PIPA bills came up, for instance, the USACM didn’t issue the kind of blanket condemnation many other groups put out, supported by appeals to vague concepts such as freedom and innovation. Instead, they put the microscope to the bills’ provisions and issued brief comments about negative effects on the functioning of the Internet, with a focus on DNS. Spafford commented, “We also met privately with Congressional staff and provided tutorials on how DNS and similar mechanisms worked. That helped them understand why their proposals would fail.”

Open Technology Institute

NAF is a flexible and innovative think tank proposing new strategies for dozens of national and international issues. Mostly progressive, in my view, it is committed to considering a wide range of possible approaches and finding rational solutions that all sides can accept. On computing and Internet issues, it features the Open Technology Institute, a rare example of a non-profit group that is firmly engaged in both technology production and policy-making. This reflects the multi-disciplinary expertise of OTI director Sascha Meinrath.

Known best for advocating strong policies to promote high-bandwidth Internet access, the OTI also concerns itself with the familiar range of policies in copyright, patents, privacy, and security. Google executive chairman Eric Schmidt is chair of the NAF board, and Google has been generous in its donations to NAF, including storage for the massive amounts of data the OTI has collected on bandwidth worldwide through its Measurement Lab, or M-Lab.

M-Lab measures Internet traffic around the world, using crowdsourcing to produce realistic reports about bandwidth, chokepoints, and other aspects of traffic. People can download the M-Lab tools to check for traffic shaping by providers and other characteristics of their connection, and send the results back to M-Lab for storage. (They now have 700 terabytes of such data.) Other sites offer speed testing for uploads and downloads, but M-Lab is unique in storing and visualizing the results. The FCC, among others, has used M-Lab to determine the uneven progress of bandwidth in different regions. Like all OTI software projects, Measurement Lab is open source software.
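
In its simplest form, that kind of measurement just times a transfer and reports throughput. Here is a minimal Python sketch of the idea, assuming a hypothetical test URL; the real M-Lab tests, such as NDT, are considerably more sophisticated.

    # A minimal sketch of the simplest measurement a tool like this performs:
    # time a download and report throughput. Real M-Lab tests are far more
    # sophisticated; the URL below is a hypothetical placeholder.
    import time
    import urllib.request

    TEST_URL = "https://example.org/speedtest/10MB.bin"  # placeholder test file

    def measure_download_mbps(url, timeout=30):
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=timeout) as response:
            nbytes = len(response.read())
        elapsed = time.monotonic() - start
        return (nbytes * 8) / (elapsed * 1_000_000)  # megabits per second

    print(f"approximate downstream throughput: {measure_download_mbps(TEST_URL):.1f} Mbit/s")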

Open Source Initiative

For my last meeting of the day, I dropped by for the last few sessions of Open Source Initiative’s Open Source Community Summit and talked to Deborah Bryant, Simon Phipps, and Bruno Souza. OSI’s recent changes represent yet another strategy for pushing change in the computer field.

OSI is best known for approving open source licenses and seems to be universally recognized as an honest and dependable judge in that area, but they want to branch out from this narrow task. About a year ago, they completely revamped their structure and redefined themselves as a membership organization. (I plunked down some cash as one of their first individual members, having heard of the change from Simon at a Community Leadership Summit).

When they announced the summit, they opened up a wiki for discussion about what to cover. The winner hands down was an all-day workshop on licensing — I guess you can tell when you’re in Washington. (The location was the Library of Congress.)

They also held an unconference that attracted a nice mix of open-source and proprietary software companies, software users, and government workers. I heard working group summaries that covered such basic advice as getting ownership of the code that you contract out to companies to create for you, using open source to attract and retain staff, and justifying the investment in open source by thinking more broadly than the agency’s immediate needs and priorities.

Organizers used the conference to roll out Working Groups, a new procedure for starting projects. Two such projects, launched by members, are the development of a FAQ and the creation of a speaker’s bureau. Anybody with an idea that fits the mission of promoting and adopting open source software can propose a project, but the process requires strict deadlines and plans for fiscally sustaining the project.

OSI is trying to change government and society by changing the way they make and consume software. USACM is trying to improve the institutions’ understanding of software as well as the environment in which it is made. NAF is trying to extend computing to everyone, and using software as a research tool in pursuit of that goal. Each organization, starting from a different place, is expanding its options and changing itself in order to change others.

September 13 2012

Growth of SMART health care apps may be slow, but inevitable

This week has been teeming with health care conferences, particularly in Boston, and was declared by President Obama to be National Health IT Week as well. I chose to spend my time at the second ITdotHealth conference, where I enjoyed many intense conversations with some of the leaders in the health care field, along with news about the SMART Platform at the center of the conference, the excitement of a Clayton Christensen talk, and the general panache of hanging out at the Harvard Medical School.

SMART, funded by the Office of the National Coordinator in Health and Human Services, is an attempt to slice through the Babel of EHR formats that prevent useful applications from being developed for patient data. Imagine if something like the wealth of mash-ups built on Google Maps (crime sites, disaster markers, restaurant locations) existed for your own health data. This is what SMART hopes to do. They can already showcase some working apps, such as overviews of patient data for doctors, and a real-life implementation of the heart disease user interface proposed by David McCandless in WIRED magazine.

The premise and promise of SMART

At this conference, the presentation that gave me the most far-reaching sense of what SMART can do was by Nich Wattanasin, project manager for i2b2 at Partners. His implementation showed SMART not just as an enabler of individual apps, but as an environment where a user could choose the proper app for his immediate needs. For instance, a doctor could use an app to search for patients in the database matching certain characteristics, then select a particular patient and choose an app that exposes certain clinical information on that patient. In this way, SMART can combine the power of many different apps that had been developed in an uncoordinated fashion and make a comprehensive data analysis platform from them.

Another illustration of the value of SMART came from lead architect Josh Mandel. He pointed out that knowing a child’s blood pressure means little until one runs it through a formula based on the child’s height and age. Current EHRs can show you the blood pressure reading, but none does the calculation that shows you whether it’s normal or dangerous. A SMART app has been developed to do that. (Another speaker claimed that current EHRs in general neglect the special requirements of child patients.)
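
To make that concrete, here is a small illustrative sketch of such a calculation in Python. The cutoffs are invented placeholders, not clinical values; a real app would consult published age-, sex-, and height-specific percentile tables.

    # Illustrative only: the cutoff formula below is a made-up placeholder, not
    # clinical data. Real pediatric blood pressure percentiles come from
    # published age/height/sex-specific reference tables.
    def classify_child_bp(systolic_mmhg, age_years, height_percentile):
        """Roughly label a child's systolic reading against hypothetical cutoffs."""
        p95_cutoff = 100 + 1.5 * age_years + 0.1 * height_percentile  # invented
        p90_cutoff = p95_cutoff - 5

        if systolic_mmhg >= p95_cutoff:
            return "above ~95th percentile: flag for clinician review"
        if systolic_mmhg >= p90_cutoff:
            return "between ~90th and ~95th percentile: borderline"
        return "below ~90th percentile: within expected range"

    # The same raw reading means different things for different children.
    print(classify_child_bp(systolic_mmhg=112, age_years=6, height_percentile=50))
    print(classify_child_bp(systolic_mmhg=112, age_years=15, height_percentile=75))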

SMART is a close companion to the Indivo patient health record. Both of these, along with the i2b2 data exchange system, were covered in an article from an earlier conference at the medical school. Let’s see where platforms for health apps are headed.

How far we’ve come

As I mentioned, this ITdotHealth conference was the second to be held. The first took place in September 2009, and people following health care closely can be encouraged by reading the notes from that earlier instantiation of the discussion.

In September 2009, the HITECH act (part of the American Recovery and Reinvestment Act) had defined the concept of “meaningful use,” but nobody really knew what was expected of health care providers, because the ONC and the Centers for Medicare & Medicaid Services did not release their final Stage 1 rules until more than a year after this conference. Aneesh Chopra, then the Federal CTO, and Todd Park, then the CTO of Health and Human Services, spoke at the conference, but their discussion of health care reform was a “vision.” A surprisingly strong statement for patient access to health records was made, but speakers expected it to be accomplished through the CONNECT Gateway, because there was no Direct. (The first message I could find on the Direct Project forum dated back to November 25, 2009.) Participants had a sophisticated view of EHRs as platforms for applications, but SMART was just a “conceptual framework.”

So in some ways, ONC, Harvard, and many other contributors to modern health care have accomplished an admirable amount over three short years. But in other ways we are frustratingly stuck. For instance, few EHR vendors offer API access to patient records, and the existing APIs are proprietary. The only SMART implementation for a commercial EHR mentioned at this week’s conference was one created on top of the Cerner API by outsiders (although Cerner was cooperative). Jim Hansen of Dossia told me that there is little point in encouraging programmers to create SMART apps while the records are still behind firewalls.

Keynotes

I couldn’t call a report on ITdotHealth complete without an account of the two keynotes by Christensen and Eric Horvitz, although these took off in different directions from the rest of the conference and served as hints of future developments.

Christensen is still adding new twists to the theories laid out in The Innovator’s Dilemma and other books. He has been a backer of the SMART project from the start and spoke at the first ITdotHealth conference. Consistent with his famous theory of disruption, he dismisses hopes that we can reduce costs by reforming the current system of hospitals and clinics. Instead, he projects the way forward through technologies that will enable less highly trained practitioners to successively take over tasks that used to be performed in more costly settings. Thus, nurse practitioners will be able to do more and more of what doctors do, primary care physicians will do more of what we currently delegate to specialists, and ultimately patients and their families will treat themselves.

He also has a theory about the progression toward openness. Radically new technologies start out tightly integrated, and because they benefit from this integration they tend to be created by proprietary companies with high profit margins. As the industry comes to understand the products better, they move toward modular, open standards and become commoditized. Although one might conclude that EHRs, which have been around for some forty years, are overripe for open solutions, I’m not sure we’re ready for that yet. That’s because the problems the health care field needs to solve are quite different from the ones current EHRs solve. SMART is an open solution all around, but it could serve a marketplace of proprietary solutions and reward some of the venture capitalists pushing health care apps.

While Christensen laid out the broad environment for change in health care, Horvitz gave us a glimpse of what he hopes the practice of medicine will be in a few years. A distinguished scientist at Microsoft, Horvitz has been using machine learning to extract patterns in sets of patient data. For instance, in a collection of data about equipment use, ICD codes, vital signs, etc. from 300,000 emergency room visits, they found some variables that predicted a readmission within 14 days. Out of 10,000 variables, they found 500 that were relevant, but because the relational database was strained by retrieving so much data, they reduced the set to 23 variables to roll out as a product.
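
Stripped of its scale and Microsoft’s proprietary details, the general technique is standard feature selection followed by a simple classifier. Here is a minimal sketch in Python with scikit-learn and synthetic data, purely to illustrate the shape of such a pipeline; it is not Horvitz’s actual system.

    # A minimal sketch of the general approach described above: generate
    # synthetic visit records, rank many candidate variables, and keep only a
    # small subset for a deployable model. Not Microsoft's actual pipeline.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_visits, n_variables = 5000, 1000   # stand-ins for 300,000 visits / 10,000 variables
    X = rng.normal(size=(n_visits, n_variables))
    # Synthetic label: readmission within 14 days, driven by a few "true" signals.
    logits = X[:, :5] @ np.array([1.0, -0.8, 0.6, 0.5, -0.4])
    y = (logits + rng.normal(scale=1.0, size=n_visits)) > 0.5

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Keep the 23 most predictive variables, mirroring the pared-down product model.
    selector = SelectKBest(f_classif, k=23).fit(X_train, y_train)
    model = LogisticRegression(max_iter=1000).fit(selector.transform(X_train), y_train)

    print("held-out accuracy:", model.score(selector.transform(X_test), y_test))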

Another project predicted the likelihood of medical errors from patient states and management actions. This was meant to address a study claiming that most medical errors go unreported.

A study that would make the privacy-conscious squirm was based on the willingness of individuals to provide location data to researchers. The researchers tracked searches on Bing along with visits to hospitals and found out how long it took between searching for information on a health condition and actually going to do something about it. (Horvitz assured us that personally identifiable information was stripped out.)

His goal is to go beyond measuring known variables and to find new ones that could be hidden causes. But he warned that, as is often the case, causality is hard to prove.

As prediction turns up patterns, the data could become a “fabric” on which many different apps are based. Although Horvitz didn’t talk about combining data sets from different researchers, it’s clearly suggested by this progression. But proper de-identification and flexible patient consent become necessities for data combination. Horvitz also hopes to move from predictions to decisions, which he says is needed to truly move to evidence-based health care.

Did the conference promote more application development?

My impression (I have to admit I didn’t check with Dr. Ken Mandl, the organizer of the conference) was that this ITdotHealth aimed to persuade more people to write SMART apps, provide platforms that expose data through SMART, and contribute to the SMART project in general. I saw a few potential app developers at the conference, and a good number of people with their hands on data who were considering the use of SMART. I think they came away favorably impressed–maybe by the presentations, maybe by conversations that the meeting allowed them to have with SMART developers–so we may see SMART in wider use soon. Participants came far for the conference; I talked to one from Geneva, for instance.

The presentations were honest enough, though, to show that SMART development is not for the faint-hearted. On the supply side–that is, for people who have patient data and want to expose it–you have to create a “container” that presents data in the format expected by SMART. Furthermore, you must make sure the data conforms to industry standards, such as SNOMED for diagnoses. This could be a lot of conversion.
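
Much of that conversion is unglamorous mapping code, along the lines of this hypothetical sketch (the local codes and concept identifiers below are invented placeholders, not real SNOMED CT data).

    # Hypothetical illustration of the conversion work a SMART container needs:
    # translating a site's local diagnosis codes into a standard vocabulary.
    # The codes and concept IDs below are placeholders, not real SNOMED CT data.
    LOCAL_TO_SNOMED = {
        "DX-HTN-01": ("SNOMED-PLACEHOLDER-A", "Essential hypertension"),
        "DX-DM2-03": ("SNOMED-PLACEHOLDER-B", "Type 2 diabetes mellitus"),
    }

    def convert_diagnosis(local_record):
        """Return a standards-conformant record, or raise if no mapping exists."""
        try:
            concept_id, preferred_term = LOCAL_TO_SNOMED[local_record["local_code"]]
        except KeyError:
            raise ValueError(f"no SNOMED mapping for {local_record['local_code']!r}")
        return {"concept_id": concept_id,
                "term": preferred_term,
                "onset": local_record["onset_date"]}

    print(convert_diagnosis({"local_code": "DX-HTN-01", "onset_date": "2011-04-02"}))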

On the application side, you may have to deal with SMART’s penchant for Semantic Web technologies such as OWL and SPARQL. This will scare away a number of developers. However, speakers who presented SMART apps at the conference said development was fairly easy. No one matched the developer who said their app was ported in two days (most of which were spent reading the documentation), but development times could usually be measured in months.
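
For a taste of what those technologies look like, here is a small Python sketch using the rdflib library: it builds a toy RDF graph of a patient’s medications and pulls them back out with a SPARQL query. The vocabulary is invented for illustration and is not the actual SMART ontology.

    # A small taste of the Semantic Web style mentioned above. The vocabulary
    # here is invented for illustration; it is not the actual SMART ontology.
    from rdflib import Graph, Literal, Namespace, URIRef

    EX = Namespace("http://example.org/records#")

    g = Graph()
    patient = URIRef("http://example.org/patients/123")
    for med, dose in [("lisinopril", "10 mg daily"), ("metformin", "500 mg twice daily")]:
        node = URIRef(f"http://example.org/meds/{med}")
        g.add((patient, EX.hasMedication, node))
        g.add((node, EX.drugName, Literal(med)))
        g.add((node, EX.dose, Literal(dose)))

    # SPARQL query: list every medication and dose recorded for the patient.
    results = g.query(
        """
        PREFIX ex: <http://example.org/records#>
        SELECT ?name ?dose WHERE {
            ?patient ex:hasMedication ?m .
            ?m ex:drugName ?name ; ex:dose ?dose .
        }
        """
    )
    for name, dose in results:
        print(name, "-", dose)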

Mandl spent some time airing the idea of a consortium to direct SMART. It could offer conformance tests (but probably not certification, which is a heavy-weight endeavor) and interact with the ONC and standards bodies.

After attending two conferences on SMART, I’ve got the impression that one of its most powerful concepts is that of an “app store for health care applications.” But correspondingly, one of the main sticking points is the difficulty of developing such an app store. No one seems to be taking it on. Perhaps SMART adoption is still at too early a stage.

Once again, we are butting our heads against the walls erected by EHRs to keep data from being extracted for useful analysis. And behind this stands the resistance of providers, the users of EHRs, to giving their data to their patients or to researchers. This theme dominated a federal government conference on patient access.

I think SMART will be more widely adopted over time because it is the only useful standard for exposing patient data to applications, and innovation in health care demands these apps. Accountable Care Organizations, smarter clinical trials (I met two representatives of pharmaceutical companies at the conference), and other advances in health care require data crunching, so those apps need to be written. And that’s why people came from as far as Geneva to check out SMART: there’s nowhere else to find what they need. The technical requirements to understand SMART seem to be within developers’ grasp.

But a formidable phalanx of resistance remains, from those who don’t see the value of data to those who want to stick to limited exchange formats such as CCDs. And as Sean Nolan of Microsoft pointed out, one doesn’t get very far unless the app can fit into a doctor’s existing workflow. Privacy issues were also raised at the conference, because patient fears could stymie attempts at sharing. Given all these impediments, the government is doing what it can; perhaps the marketplace will step in to reward those who choose a flexible software platform for innovation.

August 28 2012

Seeking prior art where it most often is found in software

Patent ambushes are on the rise again, and cases such as Apple/Samsung show that prior art really has to swing the decision; obviousness or novelty is not a strong enough defense, because obviousness and novelty are subjective judgments made by a patent examiner, judge, or jury.

In this context, a recent conversation I had with Keith Bergelt, Chief Executive Officer of the Open Invention Network (OIN), takes on significance. OIN was formed many years ago to protect the vendors, developers, and users of Linux and related open source software against patent infringement. They do this the way companies prepare a defense: by accumulating a portfolio of patents of their own.

According to Bergelt, OIN has spent millions of dollars to purchase patents that uniquely enable Linux and open source, and has helped free software vendors and developers understand and prepare to defend against lawsuits. All OIN patents are available under a free license to those who agree to forbear from suing on Linux grounds and to cross-license their own patents that read on OIN’s Linux System Definition. OIN has nearly 500 licensees and is adding a new one every three days, as everyone from individual developers to large multinationals comes to recognize its role and the value of an OIN license.

The immediate trigger for our call was an announcement by OIN that they are expanding their Linux System Definition to include key mobile Linux software packages such as Dalvik, which broadens the scope of the cross-licenses granted under the OIN license. In this way OIN is increasing the freedom of action with which a company can operate on Linux.

OIN’s expansion of its Linux System Definition affects not only Android, which seems to be in Apple’s sights, but any other mobile distribution based on Linux, such as MeeGo and Tizen. They have been interested in this area for some time, and now recognize that mobile is skyrocketing in importance.

Meanwhile, they are talking to their supporters about new ways of deep mining for prior art in source code. Patent examiners, as well as developers filing patents in good faith, look mostly at existing patents to find prior art. But in software, most innovation is not patented. It might not even appear in the hundreds of journals and conference proceedings that come out in the computer science field each year. Instead, it lives as abstractions that emerge from the code itself only when the code is analyzed.
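
As a toy illustration of how abstractions can be pulled out of source code, the sketch below parses a snippet of Python and lists the functions it defines and the calls it makes; real prior-art mining would require far deeper semantic analysis than this.

    # A toy illustration of extracting abstractions from source code: parse a
    # snippet and list the functions it defines and the calls it makes.
    import ast

    SOURCE = '''
    def moving_average(values, window):
        total = sum(values[:window])
        return total / window
    '''

    tree = ast.parse(SOURCE.replace("\n    ", "\n"))  # dedent the embedded snippet
    functions = [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    calls = [n.func.id for n in ast.walk(tree)
             if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)]
    print("defines:", functions)   # ['moving_average']
    print("calls:", calls)         # ['sum']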

A GitHub staffer told me it currently hosts approximately 25 TB of data and adds over 65 GB of new data per day. A lot of that stuff is probably hum-drum, but I bet a fraction of it contains techniques that someone else will try to gain a monopoly over someday through patents.

Naturally, inferring innovative processes from source code is a daunting exercise in machine learning. It’s probably harder than most natural language processing, which tries to infer limited meanings or relationships from words. But OIN feels we have to try. Otherwise more and more patents may impinge (which is different from infringe) on free software.

August 13 2012

Smart notebooks for linking virtual teams across the net

Who has the gumption to jump into the crowded market for collaboration tools and call for a comprehensive open source implementation? Perhaps just Miles Fidelman, a networking expert whose experience spans time with Bolt, Beranek and Newman, work on military command and control systems, a community networking non-profit called the Center for Civic Networking, and building a small hosting company.

Miles, whom I’ve known for years and consider a mentor in the field of networking, recently started a Kickstarter project called Smart Notebooks. Besides promising a free software implementation based on popular standards, he believes his vision for a collaboration environment will work the way people naturally work together — not how some vendor thinks they should work, as so many tools have done.

A screenshot from the Smart Notebooks project

Miles’ concept of Smart Notebooks is shared documents that stay synchronized across the net. Each person has his or her own copy of a document, but the copies “talk to each other” using a peer-to-peer protocol. Edit your copy, and everyone else sees the change on their copy. Unlike email attachments, there’s no need to search for the most recent copy of a document. Unlike a Google Doc, everyone has their own copy, allowing for private notes and working offline. All of this will be done using standard web browsers, email, and RSS: no new software to install, no walled-garden services, and no accounts to configure on services running in the cloud.

The motivation for the system comes from observations Miles has made in venues as small as a church board of directors and as large as an Air Force operations center. When people come together, they bring copies of documents — agendas, minutes, presentation slides — and receive more documents. They exchange information, discuss issues, and make decisions, recording them as scribbles on their copies of the documents they carry away with them. Smart Notebooks will mimic this process across the Internet (and avoid a lot of manual copying in the process).

Miles draws models from several sources, including one of his favorite tools that died out: HyperCard (he sometimes refers to Smart Notebooks as “HyperCard, for groups, running in a browser”). He also looks to TiddlyWiki (a personal wiki implemented as a single local file, opened and edited in a browser) as a model for Smart Notebooks, coupled with a peer-to-peer, replicated messaging model inspired by USENET News’ NNTP protocol. The latest HTML5 standards and the new generation of web browsers make the project possible.

Miles’ goal is a system that can let people collaborate in peer-to-peer fashion with minimal reliance on a central system hosted by a company. Users will simply create a document in their browser (like editing a wiki page), then send copies via email. Everyone stores their own copy locally (as a file or in their browser’s HTML5 Web Storage). Changes will be pushed across the net, while notifications will show up as an RSS feed. Opening one’s local copy will automatically pull in changes.
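
As a conceptual model only, not the project’s implementation, the following Python sketch shows the kind of merge behavior such peer copies need: each peer keeps its own copy and applies other peers’ changes, with the most recent edit to each section winning.

    # A conceptual model only, not the Smart Notebooks implementation: each peer
    # keeps its own copy of a document and merges edits from others, with the
    # most recent edit to each section winning.
    from dataclasses import dataclass, field

    @dataclass
    class Section:
        text: str
        edited_at: float  # timestamp of the last edit

    @dataclass
    class NotebookCopy:
        sections: dict = field(default_factory=dict)  # section id -> Section

        def edit(self, section_id, text, timestamp):
            self.sections[section_id] = Section(text, timestamp)

        def merge(self, incoming):
            """Apply another peer's changes; newer edits replace older ones."""
            for sid, section in incoming.sections.items():
                mine = self.sections.get(sid)
                if mine is None or section.edited_at > mine.edited_at:
                    self.sections[sid] = section

    alice, bob = NotebookCopy(), NotebookCopy()
    alice.edit("agenda", "1. Budget review", timestamp=1.0)
    bob.merge(alice)                      # Bob's copy now matches Alice's
    bob.edit("agenda", "1. Budget review\n2. New members", timestamp=2.0)
    alice.merge(bob)                      # Alice sees Bob's addition
    print(alice.sections["agenda"].text)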

For more details, and to support the project, take a look at the project’s Kickstarter page.

Miles is particularly looking for a couple of larger sponsors — folks organizing an event, a conference, a crowdsourcing project, an issue campaign, a flash mob — who are looking for a better coordination tool and can serve as test cases.

July 25 2012

Inside GitHub’s role in community-building and other open source advances

In this video interview, Matthew McCullough of GitHub discusses what they’ve learned over time as they grow and watch projects develop there. Highlights from the full video interview include:

  • How GitHub builds on Git’s strengths to allow more people to collaborate on a project [Discussed at the 00:30 mark]
  • The value of stability and simple URLs [Discussed at the 02:05 mark]
  • Forking as the next level of democracy for software [Discussed at the 04:02 mark]
  • The ability to build a community around a GitHub repo [Discussed at the 05:05 mark]
  • GitHub for education, and the use of open source projects for university work [Discussed at the 06:26 mark]
  • The value of line-level comments in source code [Discussed at the 09:36 mark]
  • How to be a productive contributor [Discussed at the 10:53 mark]
  • Tools for Windows users [Discussed at the 11:56 mark]

You can view the entire conversation in the following video:

Democratizing data, and other notes from the Open Source convention

There has been an enormous amount of talk over the past few years about open data and what it can do for society, but proponents have largely come to admit that data is not democratizing in itself. This topic is hotly debated, and a nice summary of the viewpoints is available in this PDF containing articles by noted experts. At the Open Source convention last week, I thought a lot about the democratizing potential of data and how it could be realized.

Who benefits from data sets

At a high level, large businesses and other well-funded organizations have three natural advantages over the general public in the exploitation of data sets:

  • The resources to gather the data
  • The resources to do the necessary programming to crunch and interpret the data
  • The resources to act on the results

These advantages will probably always exist, but data can be useful to the public too. We have some tricks that can compensate for each of the large institutions’ advantages:

  • Crowdsourcing can create data sets that help everybody, and even support the formation of new businesses. OpenStreetMap, a SaaS project based on open source software, is a superb example. Its maps have been built up through years of contributions by people trying to support their communities, and it offers interesting features missing from proprietary map projects, such as tools for laying out bike paths.

  • Data-crunching is where developers, like those at the Open Source convention, come in. Working at non-profits, during weekend challenges, or just on impulse, they can code up the algorithms that make sense of data sets, along with apps that visualize the results and accept input from people with less technical training.

  • Some apps, such as reports of neighborhood crime or available health facilities, can benefit individuals, but we can really drive progress by joining together in community organizations or other associations that use the data. I saw a fantastic presentation by high school students in the Boston area who demonstrated a correlation between funding for summer jobs programs and lowered homicides in the inner city–and they won more funding from the Massachusetts legislature with that presentation.

Health care track

This year was the third in which the Open Source convention offered a health care track. IT plays a growing role in health care, but a lot of the established institutions are creaking forward slowly, encountering lots of organizational and cultural barriers to making good use of computers. This year our presentations clustered around areas where innovation is most robust: personal tracking, using data behind the scenes to improve care, and international development.

Open source coders Fred Trotter and David Neary gave popular talks about running and tracking one’s achievements. Bob Evans discussed a project named PACO that he started at Google to track productivity, both for individuals and for groups of people who come together for mutual support, while Anne Wright and Candide Kemmler described the ambitious BodyTrack project. Jason Levitt presented the science of sitting (and how to make it better for you).

In a high-energy presentation, systems developer Shahid Shah described the cornucopia of high-quality, structured data that will be made available when devices are hooked together. “Gigabytes of data is being lost every minute from every patient hooked up to hospital monitors,” he said. DDS, HTTP, and XMPP are among the standards that will make an interconnected device mesh possible. Michael Italia described the promise of genome sequencing and the challenges it raises, including storage requirements and the social impacts of storing sensitive data about people’s propensity for disease. Mohamed ElMallah showed how it was sometimes possible to work around proprietary barriers in electronic health records and use them for research.

Representatives from OpenMRS and IntraHealth International spoke about the difficulties and successes of introducing IT into very poor areas of the world, where systems need to be powered by their own electricity generators. A maintainable project can’t be dropped in by external NGO staff, but must cultivate local experts and take a whole-systems approach. Programmers in Rwanda, for instance, have developed enough expertise by now in OpenMRS to help clinics in neighboring countries install it. Leaders of OSEHRA, which is responsible for improving the Department of Veterans Affairs’ VistA and developing a community around it, spoke to a very engaged audience about their work untangling and regularizing twenty years’ worth of code.

In general, I was pleased with the modest growth of the health care track this year (most sessions drew about thirty people, and several drew a lot more) and with both the energy and the expertise of the people who came. Many attendees play an important role in furthering health IT.

Other thoughts

The Open Source convention reflected much of the buzz surrounding developments in computing. Full-day sessions on OpenStack and Gluster were totally filled. A focus on developing web pages came through in the popularity of talks about HTML5 and jQuery (now a platform all its own, with extensions sprouting in all directions). Perl still has a strong community. A few years ago, Ruby on Rails was the must-learn platform, and knock-off derivatives appeared in almost every other programming language imaginable. Now the Rails paradigm has been eclipsed (at least in the pursuit of learning) by Node.js, which was recently ported to Microsoft platforms, and its imitators.

No two OSCons are the same, but the conference continues to track what matters to developers and IT staff and to attract crowds every year. I enjoyed nearly all the speakers, who often pump excitement into the driest of technical topics through their own sense of continuing wonder. This is an industry where imagination’s wildest thoughts become everyday products.
