
January 08 2014

Four short links: 8 January 2014

  1. Launching the Wolfram Connected Devices Project — Wolfram Alpha is cognition-as-a-service, which they hope to embed in devices. This data-powered Brain-in-the-Cloud play will pit them against Google, but G wants to own the devices and the apps and the eyeballs that watch them … interesting times ahead!
  2. How the USA Almost Killed the Internet (Wired) — “At first we were in an arms race with sophisticated criminals,” says Eric Grosse, Google’s head of security. “Then we found ourselves in an arms race with certain nation-state actors [with a reputation for cyberattacks]. And now we’re in an arms race with the best nation-state actors.”
  3. Intel Edison — SD-card sized, with low-power 22nm 400MHz Intel Quark processor with two cores, integrated Wi-Fi and Bluetooth.
  4. N00b 2 L33t, Now With Graphs (Tom Stafford) — open science research validating many of the findings on learning, tested experimentally via games. In the present study, we analyzed data from a very large sample (N = 854,064) of players of an online game involving rapid perception, decision making, and motor responding. Use of game data allowed us to connect, for the first time, rich details of training history with measures of performance from participants engaged for a sustained amount of time in effortful practice. We showed that lawful relations exist between practice amount and subsequent performance, and between practice spacing and subsequent performance. Our methodology allowed an in situ confirmation of results long established in the experimental literature on skill acquisition. Additionally, we showed that greater initial variation in performance is linked to higher subsequent performance, a result we link to the exploration/exploitation trade-off from the computational framework of reinforcement learning.
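
The exploration/exploitation trade-off the authors borrow from reinforcement learning is easy to see in a toy epsilon-greedy bandit (an illustrative sketch, not the paper's actual model): an agent that explores early samples a wider range of options, which is what lets it find the best one to exploit later.

```python
import random

def run_bandit(payoffs, epsilon, steps=10_000, seed=0):
    """Epsilon-greedy agent on a toy multi-armed bandit.

    payoffs: mean reward of each arm; epsilon: exploration rate.
    Returns (total reward, per-arm pull counts).
    """
    rng = random.Random(seed)
    estimates = [0.0] * len(payoffs)   # running mean reward per arm
    counts = [0] * len(payoffs)
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(payoffs))       # explore: try anything
        else:
            arm = estimates.index(max(estimates))   # exploit: best known arm
        reward = rng.gauss(payoffs[arm], 1.0)       # noisy payoff
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total, counts

# With a little exploration the agent discovers the best arm (mean 0.9)
# and ends up pulling it most of the time.
total, counts = run_bandit([0.2, 0.5, 0.9], epsilon=0.1)
print(counts.index(max(counts)))
```

A purely greedy agent (epsilon=0) can lock onto whichever arm happens to pay off first, which is the intuition behind the paper's link between early variation and later performance.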

November 26 2013

Four short links: 26 November 2013

  1. The Death and Life of Great Internet Cities — “The sense that you were given some space on the Internet, and allowed to do anything you wanted to in that space, it’s completely gone from these new social sites,” said Scott. “Like prisoners, or livestock, or anybody locked in an institution, I am sure the residents of these new places don’t even notice the walls anymore.”
  2. What You’re Not Supposed To Do With Google Glass (Esquire) — Maybe I can put these interruptions to good use. I once read that in ancient Rome, when a general came home victorious, they’d throw him a triumphal parade. But there was always a slave who walked behind the general, whispering in his ear to keep him humble. “You are mortal,” the slave would say. I’ve always wanted a modern nonslave version of this — a way to remind myself to keep perspective. And Glass seemed the first gadget that would allow me to do that. In the morning, I schedule a series of messages to e-mail myself throughout the day. “You are mortal.” “You are going to die someday.” “Stop being a selfish bastard and think about others.” (via BoingBoing)
  3. Neural Networks and Deep Learning — Chapter 1 up and free, and there’s an IndieGogo campaign to fund the rest.
  4. What We Know and Don’t Know — That highly controlled approach creates the misconception that fossils come out of the ground with labels attached. Or worse, that discovery comes from cloaked geniuses instead of open discussion. We’re hoping to combat these misconceptions by pursuing an open approach. This is today’s evolutionary science, not the science of fifty years ago. We’re here sharing science. [...] Science isn’t the answers, science is the process. Open science in paleoanthropology.
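
The scheduled-reminder trick in the Google Glass piece doesn't need Glass at all; the timing logic is a few lines in any language. A minimal sketch (the even-spacing scheme is my own, and actually delivering each message by email or notification is left out):

```python
import datetime

REMINDERS = [
    "You are mortal.",
    "You are going to die someday.",
    "Stop being a selfish bastard and think about others.",
]

def build_schedule(start, end, messages):
    """Spread reminder messages evenly across a working day.

    Returns (time, message) pairs; wiring them to an email or
    notification backend is left out of this sketch.
    """
    span = (end - start) / (len(messages) + 1)
    return [(start + span * (i + 1), msg) for i, msg in enumerate(messages)]

day = datetime.date.today()
start = datetime.datetime.combine(day, datetime.time(9, 0))
end = datetime.datetime.combine(day, datetime.time(17, 0))
for when, msg in build_schedule(start, end, REMINDERS):
    print(when.strftime("%H:%M"), msg)
```

With a 9:00–17:00 day and three messages, the reminders land at 11:00, 13:00, and 15:00.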

October 04 2013

Four short links: 4 October 2013

  1. Case and Molly, a Game Inspired by Neuromancer (Greg Borenstein) — On reading Neuromancer today, this dynamic feels all too familiar. We constantly navigate the tension between the physical and the digital in a state of continuous partial attention. We try to walk down the street while sending text messages or looking up GPS directions. We mix focused work with a stream of instant message and social media conversations. We dive into the sudden and remote intimacy of seeing a family member’s face appear on FaceTime or Google Hangout. “Case and Molly” uses the mechanics and aesthetics of Neuromancer’s account of cyberspace/meatspace coordination to explore this dynamic.
  2. Rethinking Ray Ozzie — an inescapable conclusion: Ray Ozzie was right. And Microsoft’s senior leadership did not listen, certainly not at the time, and perhaps not until it was too late. Hear, hear!
  3. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank (PDF) — apparently it nails sentiment analysis, and will be “open sourced”. At least, according to this GigaOm piece, which also explains how it works.
  4. PLoS ASAP Award Finalists Announced — with pointers to interviews with the finalists, doing open access good work like disambiguating species names and doing open source drug discovery.

August 01 2013

Four short links: 2 August 2013

  1. Unhappy Truckers and Other Algorithmic Problems — Even the insides of vans are subjected to a kind of routing algorithm; the next time you get a package, look for a three-letter code, like “RDL.” That means “rear door left,” and it is there so the driver has to take as few steps as possible to locate the package. (via Sam Minnee)
  2. Fuel3D: A Sub-$1000 3D Scanner (Kickstarter) — a point-and-shoot 3D imaging system that captures extremely high resolution mesh and color information of objects. Fuel3D is the world’s first 3D scanner to combine pre-calibrated stereo cameras with photometric imaging to capture and process files in seconds.
  3. Corporate Open Source Anti-Patterns (YouTube) — Brian Cantrill’s talk, slides here. (via Daniel Bachhuber)
  4. Hacking for Humanity (The Economist) — Getting PhDs and data specialists to donate their skills to charities is the idea behind the event’s organizer, DataKind UK, an offshoot of the American nonprofit group.

July 19 2013

Four short links: 19 July 2013

  1. Operative Design — A catalogue of spatial verbs. (via Adafruit)
  2. Open Source Malaria — open science drug discovery.
  3. Surviving Being (Senior) Tech Management (Kellan Elliott-McCrea) — Perspective is the thin line between a challenging but manageable problem, and chittering balled up in the corner.
  4. Disposable UAVs Inspired by Paper Planes (DIY Drones) — The first design, modeled after a paper plane, is created from a cellulose sheet that has electronic circuits ink-jet printed directly onto its body. Once the circuits have been laid on the plane’s frame, the craft is exposed to a UV curing process, turning the plane’s body into a flexible circuit board. These circuits are then connected to the plane’s “avionics system”, two elevons attached to the rear of the craft, which give the UAV the ability to steer itself to its destination.

February 22 2013

White House moves to increase public access to scientific research online

Today, the White House responded to a We The People e-petition that asked for free online access to taxpayer-funded research.

As part of the response, John Holdren, the director of the White House Office of Science and Technology Policy, released a memorandum today directing agencies with “more than $100 million in research and development expenditures to develop plans to make the results of federally-funded research publicly available free of charge within 12 months after original publication.”

The Obama administration has been considering access to federally funded scientific research for years, including a report to Congress in March 2012. The relevant e-petition, which had gathered more than 65,000 signatures, had gone unanswered since May of last year.

As Hayley Tsukayama notes in the Washington Post, the White House acknowledged the open access policies of the National Institutes of Health as a successful model for sharing research.

“This is a big win for researchers, taxpayers, and everyone who depends on research for new medicines, useful technologies, or effective public policies,” said Peter Suber, Director of the Public Knowledge Open Access Project, in a release. “Assuring public access to non-classified publicly-funded research is a long-standing interest of Public Knowledge, and we thank the Obama Administration for taking this significant step.”

Every federal agency covered by this memorandum will eventually need to “ensure that the public can read, download, and analyze in digital form final peer-reviewed manuscripts or final published documents within a timeframe that is appropriate for each type of research conducted or sponsored by the agency.”

An open government success story?

From the day they were announced, one of the biggest question marks about We The People e-petitions has always been whether the administration would make policy changes or take public stances it had not previously taken on a given issue.

While the memorandum and the potential outcomes from its release come with caveats, from the $100 million threshold to exemptions for national security or economic competitiveness, this answer from the director of the White House Office of Science and Technology Policy, accompanied by a memorandum directing agencies to make a plan for public access to research, is a substantive outcome.

While there are many reasons to be critical of some open government initiatives, it certainly appears that today, We The People were heard in the halls of government.

An earlier version of this post appears on the Radar Tumblr, including tweets regarding the policy change. Photo Credit: ajc1 on Flickr.


February 06 2013

Four short links: 6 February 2013

  1. Manipulating Google Scholar Citations and Google Scholar Metrics: simple, easy and tempting (PDF) — scholarly paper on how to citespam your paper up Google Scholar’s results list. Fortunately calling your paper “AAAAAA In-vitro Qualia of …” isn’t one of the winning techniques.
  2. Seamless Astronomy — brings together astronomers, computer scientists, information scientists, librarians and visualization experts involved in the development of tools and systems to study and enable the next generation of online astronomical research.
  3. Eye Wire — a citizen science game where you map the 3D structure of neurons.
  4. Open Science is a Research Accelerator (Nature Chemistry) — challenge was: get rid of this bad-tasting compound from malaria medicine, without raising cost. Did it with open notebooks and collaboration, including LinkedIn groups. Lots of good reflection on advertising, engaging, and speed.

February 05 2013

Crowdfunding science

In our first science-as-a-service post, I highlighted some of the participants in the ecosystem. In this one, I want to share the changing face of funding.

Throughout the 20th century, most scientific research funding came from one of two sources: government grants or private corporations. Government funding is often a function of the political and economic climate, so researchers who rely on it risk having to deal with funding cuts and delays. Those who are studying something truly innovative or risky often find it difficult to get funded at all. Corporate research is most often undertaken with an eye toward profit, so projects that are unlikely to produce a return on investment are often ignored or discarded.

If one looks to history, however, scientific research was originally funded by individual inventors and wealthy patrons. These patrons were frequently rewarded with effusive acknowledgements of their contributions; Galileo, for example, named the moons of Jupiter after the Medicis (though the names he chose ultimately did not stick).

There has been a resurgence of that model — though perhaps more democratic — in the modern concept of crowdfunding. Kickstarter, the most well-known of the crowdfunding startups, enables inventors, artists, and makers to source the funds they need for their projects by connecting to patrons on the platform. Contributors donate money to a project and are kept updated on its progress. Eventually, they may receive some sort of reward — a sticker acknowledging their participation or an example of the completed work. Scientists have begun to use the site, in many cases, to supplement their funding. Anyone can be a micro-patron!

“Deceiving the Superorganism: Ant-Exploiting Beetles” met its goal through Petridish, a funding site.

Science-specific platforms have also appeared on the scene. Petridish is currently showcasing projects looking for funding to study everything from rare butterflies to mass-fatality events. On Microryza, you can fund investigations into cannibalism in T. rex or viral causes of lung cancer. RocketHub also has a science-specific project roster and recently had a researcher raise funds to study the psychopharmacology of amphetamines. Widely covered as “Help scientist build a meth lab,” the researcher’s write-up of his proposal, including his reasons for crowdfunding it, is excellent and worth a read. And newcomer Iamscientist is combining fundraising help with a community, KnowledgeXchange, which helps researchers to recruit team members and find mentors. While these sites show great promise, several similar platforms founded a few years ago have failed. The extent to which this new crop is popularly adopted remains to be seen, though the excitement around crowdfunding may indicate the time is now right.

While anyone can submit a project to the sites above, there are also hybrid models that enable individuals to “top up” more traditionally funded research. In the UK, MyProjects enables individuals to fund research targeting specific types of cancer; the underlying projects have already been pre-approved and funded by Cancer Research UK. The process is more specific than traditional charitable giving, so contributors feel that they’re making a difference in a specific area that matters to them. The American Association for the Advancement of Science has begun to teach its members about crowdfunding.

For more expensive research, of course, micro-patronage falls short. Breakout Labs, run by the Thiel Foundation, has begun awarding grants of up to $350,000 “to fill the funding gap that exists for innovative research outside the confines of an academic institution, large corporation, or government.” In exchange, the company retains the rights to its IP, and Breakout Labs is given a percentage of future revenue and an option to invest in an equity round.

Under the traditional grant model, the average researcher spends up to 40% of his or her time chasing funding, and 80% of grant applications are rejected. In addition, the necessity of ties to an academic or industrial organization means that researchers don’t retain control of their IP. The new models of funding can speed up the process, while enabling scientists to keep 100% of their research and results. They also enable citizen scientists to publicize their projects and build communities of involvement.

If you’ve participated in funding scientific research via one of these platforms, or are a scientist who has run a campaign on a crowdfunding site, we’d love to hear your thoughts on the experience in the comments below.


January 31 2013

NASA launches second International Space Apps Challenge

From April 20 to April 21, over Earth Day weekend, the second international Space Apps Challenge will invite developers on all seven continents to contribute code to NASA projects.


Given longstanding concerns about the sustainability of apps contests, I was curious about NASA’s thinking behind launching this challenge. When I asked NASA’s open government team about the work, I immediately heard back from Nick Skytland (@Skytland), who heads up NASA’s open innovation team.

“The International Space Apps Challenge was a different approach from other federal government ‘app contests’ held before,” replied Skytland, via email.

“Instead of incentivizing technology development through open data and a prize purse, we sought to create a unique platform for international technological cooperation through a weekend-long event hosted in multiple locations across the world. We didn’t just focus on developing software apps, but actually included open hardware, citizen science, and data visualization as well.”

Aspects of that answer will please many open data advocates, like Clay Johnson or David Eaves. When Eaves recently looked at apps contests, in the context of his work on Open Data Day (coming up on February 23rd), he emphasized the importance of events that build community and applications that meet the needs of citizens or respond to business demand.

The rest of my email interview with Skytland follows.

Why is the International Space Apps Challenge worth doing again?

Nick Skytland: We see the International Space Apps Challenge event as a valuable platform for the Agency because it:

  • Creates new technologies and approaches that can solve some of the key challenges of space exploration, as well as making current efforts more cost-effective.
  • Uses open data and technology to address global needs to improve life on Earth and in space.
  • Demonstrates our commitment to the principles of the Open Government Partnership in a concrete way.

What were the results from the first challenge?

Nick Skytland: More than 100 unique open-source solutions were developed in less than 48 hours.

There were 6 winning apps, but the real “results” of the challenge were a 2,000+ person community engaged in and excited about space exploration, ready to apply that experience to challenges identified by the agency at relatively low cost and on a short timeline.

How does this challenge contribute to NASA’s mission?

Nick Skytland: There were many direct benefits. The first International Space Apps Challenge offered seven challenges specific to satellite hardware and payloads, including submissions from at least two commercial organizations. These challenges received multiple solutions in the areas of satellite tracking, suborbital payloads, command and control systems, and leveraging commercial smartphone technology for orbital remote sensing.

Additionally, a large focus of the Space Apps Challenge is on citizen innovation in the commercial space sector, lowering the cost and barriers to space so that it becomes easier to enter the market. By focusing on citizen entrepreneurship, Space Apps enables NASA to be deeply involved with the quickly emerging space startup culture. The event was extremely helpful in encouraging the collection and dissemination of space-derived data.

As you know, we have amazing open data. Space Apps is a key opportunity for us to continue to open new data sources and invite citizens to use them. Space Apps also encouraged the development of new technologies and new industries, like the space-based 3D printing industry and open-source ROVs (remote submersibles for underwater exploration).

How much of the code from more than 200 “solutions” is still in use?

Nick Skytland: We didn’t track this last time around, but almost all (if not all) of the code is still available online, many of the projects continued on well after the event, and some teams continue to work on their projects today. The best example of this is the Pineapple Project, which participated in numerous other hackathons after the 2012 International Space Apps Challenge and just recently was accepted into the Geeks Without Borders accelerator program.

Of the 71 challenges that were offered last year, a low percentage were NASA challenges — about 13, if I recall correctly. There are many reasons for this, mostly that cultural adoption of open government philosophies within government is just slow. What last year did for us is lay the groundwork. Now we have much more buy-in and interest in what can be done. This year, our challenges from NASA are much more mission-focused and relevant to needs program managers have within the agency.

Additionally, many of the externally submitted challenges we have come from other agencies that are interested in using Space Apps as a platform to address needs they have. Most notably, we recently worked with the Peace Corps on the Innovation Challenge they offered at RHoK in December 2012, with great results.

The International Space Apps Challenge was a way for us not only to move forward technology development, drawing on the talents and initiative of bright-minded developers, engineers, and technologists, but also a platform to actually engage people who have a passion and desire to make an immediate impact on the world.

What’s new in 2013?

Nick Skytland: Our goal for this year is to improve the platform, create an even better engagement experience, and focus the collective talents of people around the world on developing technological solutions that are relevant and immediately useful.

We have a high level of internal buy-in at NASA and a lot of participation outside NASA, from both other government organizations and local leads in many new locations. Fortunately, this means we can focus our efforts on making this a meaningful event, and we are well ahead of the curve in terms of planning to do this.

To date, 44 locations have confirmed their participation and we have six spots remaining, although four of these are reserved as placeholders for cities we are pursuing. We have 50 challenge ideas already drafted for the event, 25 of which come directly from NASA. We will be releasing the entire list of challenges around March 15th on

We have 55 organizations so far that are supporting the event, including seven other U.S. government organizations and international agencies. Embassies or consulates are either directly leading or hosting the events in Monterrey, Krakow, Sofia, Jakarta, Santa Cruz, Rome, London and Auckland.


January 30 2013

Science as a service

Software as a service (SaaS) is one of the great innovations of Web 2.0. SaaS enables flexibility and customized solutions. It reduces costs — the cost of entry, the cost of overhead, and as a result, the cost of experimentation. In doing so, it’s been instrumental in spurring innovation.

So, what if you were to apply the principles of SaaS to science? Perhaps we can facilitate scientific progress by streamlining the process. Science as a service (SciAAS?) will enable researchers to save time and money without compromising quality. Making specialized resources and institutional expertise available for hire gives researchers more flexibility. Core facilities that own equipment can rent it out during down time, helping to reduce their own costs. The promise of science as a service is a future in which research is more efficient, creative, and collaborative.

Outsourcing isn’t a new idea. Contract research organizations (CROs) appeared on the scene in the early 1980s, conducting experiments on a contract basis. Industrial science, especially pharmaceutical research, has been increasingly reliant on CROs; spending on CRO-run research increased from $1.6 billion in 1994 to $7.6 billion in 2004, and is projected to hit $20 billion in 2017. Alongside that trend is a corresponding decrease in the percentage of clinical trials run at academic centers — from 63% to 23%. In big pharma, there has been a “strategic push away from the traditional strategies of Mergers & Acquisitions and licensing, toward partnering and outsourcing to acquire new drug candidates.”

Despite the steadily increasing involvement of CROs in industrial research, many academics and smaller researchers have found using outside labs to be cost prohibitive and opaque. For those researchers, the process of outsourcing a study involves googling to find service providers with specific expertise, contacting the provider to determine suitability and cost, and then going through a time-consuming reference check and quality verification process. Some simply don’t know what’s out there; they aren’t sure where to start the googling. For many university scientists, there’s an added layer of complexity in the form of purchase approvals for each facility. This process frustrates the scientist. It also results in many core facilities remaining underused.

Frustration has led a recent crop of enterprising startup founders — many of them scientists themselves — to apply IT “best practices” to science. Their goal is to disrupt the slow-moving pace and high cost of research. To do this, they’re applying innovative business models traditionally used by B2B and B2C startups — everything from the principles of collaborative consumption to decoupling service workers from their traditional places of employment.

One of these startups is Science Exchange, a marketplace that aims to increase transparency around experimental service provider cost and availability. Founded by a biologist, Science Exchange helps researchers source facilities or expertise that is unavailable in their own labs. The providers on the site offer everything from microarray analysis to microgravity experiments aboard the International Space Station. Customers search for a service, request an estimate, and pick a provider from the quotes that come in. Science Exchange handles purchase orders and payment transfers, and provides a project-management dashboard. Through the structure of the site, researchers become aware of new facilities, and providers may suggest new technologies. The relationship has the potential to be more collaborative than a typical provider-client relationship.
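
The request-for-quotes flow described above (search for a service, collect estimates, pick a provider) maps onto a simple data model. A sketch with invented names, not Science Exchange's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Quote:
    provider: str
    price: float
    turnaround_days: int

@dataclass
class ServiceRequest:
    service: str                 # e.g. "microarray analysis"
    quotes: list = field(default_factory=list)

    def add_quote(self, provider, price, turnaround_days):
        """Record an estimate submitted by a provider."""
        self.quotes.append(Quote(provider, price, turnaround_days))

    def best_quote(self, max_days=None):
        """Pick the cheapest quote, optionally within a turnaround limit."""
        candidates = [q for q in self.quotes
                      if max_days is None or q.turnaround_days <= max_days]
        return min(candidates, key=lambda q: q.price) if candidates else None

req = ServiceRequest("microarray analysis")
req.add_quote("Core Facility A", 2400.0, 14)
req.add_quote("Startup Lab B", 1800.0, 30)
print(req.best_quote(max_days=21).provider)  # → Core Facility A
```

The researcher, not the platform, makes the final call; the value of the marketplace is in surfacing the quotes at all, since cost and availability are otherwise opaque.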

Science Exchange is the glue in a unique and developing ecosystem. Some of the providers on the site are themselves startups offering scientific experiments as a service. 3Scan, for example, offers a cutting-edge form of 3D microscopic scanning that produces high-resolution images in a fraction of the time of other methods. Researchers in need of this technique needn’t buy their own knife-edge scanning microscope; they can simply reserve the service.

Some SciAAS startups aim to disrupt CROs. Transcriptic, which describes itself as a “meticulously optimized, technology-enabled remote lab,” is working to change traditional wet lab biology by getting rid of infrastructure overhead. They’ve started with molecular cloning and are focused on reducing the time cost and error rate associated with running protocols by hand. Assay Depot has been called the “Home Depot for biology and medicine.” A researcher specifies the experiment he or she would like to see done, and labs submit bids to perform it.

The promise of applying big data technologies to biological research has led to SaaS data analysis tools built specifically with scientists in mind. SolveBio, a computational biology platform, enables researchers to have access to the latest in data-processing technology without having to maintain computing infrastructure or learn cumbersome tools. Collaborative Drug Discovery (CDD), which spun out of Eli Lilly, is a data platform that was built because the founder believed that the future of drug discovery would involve collaboration across specialized channels. Researchers can store and analyze their data with sophisticated tools, and can also open parts of their repository to others. The Gates Foundation and Novartis have been users. Benchling, a platform for life science data management, is also incorporating IT best practices via version control, aiming to create a “GitHub for biology.”

E-commerce principles underlie new marketplaces for scientific equipment. P212121 is helping SMB suppliers of chemical and laboratory reagents to bring their wares online. Their platform uses software to search and curate tens of thousands of products, and focuses on transparent pricing. Enabling labs to bypass behemoths such as Sigma and Fisher allows them to save money.

Startups are also tackling the problem of expertise by facilitating collaboration. Zombal is a job marketplace where those who need scientific expertise can meet freelance scientists. Outsourcing work that falls outside a lab’s core competencies frees up resources to focus on what’s needed.

These facets of science as a service are just some of the ways that IT principles are being applied to the realm of research. There’s also exciting activity happening around crowdsourced science, open science, and crowdfunding for scientific research. If you’re a scientist, lab head, or SciAAS startup founder who’s reading this, we’d love to hear your thoughts on the changing face of scientific research in the comments below.


January 25 2013

Making the web work for science

The field of science is firmly on our radar as a vertical with huge interest in and opportunities for the things that are foundational to O’Reilly’s world view: openness, platforms and APIs, creating more value than you capture, the web as foundational platform, the power of big data both as key to analytics and as business model chokepoint, sensors and the Internet of Things, open publishing … and it’s just beginning to be accelerated by startups.

It’s an exciting time in science, so for an update I spent some time with Kaitlin Thaney (@kaythaney), who helps organise Science Foo Camp, co-chairs Strata Conference in London, and serves as the manager of external partnerships for Digital Science, a technology company serving scientific research, where she is responsible for the group’s public-facing activities.

Nat Torkington: Thanks for joining me today. Let’s start with Science Foo Camp (“Sci Foo” for short). At O’Reilly, we get insight into how the trends we see in our early adopter technology market are feeding into (and being informed by) the work of practising scientists. What’s the Digital Science view of Science Foo?

Kaitlin Thaney: Sci Foo, to me, gets back to that curiosity and serendipity that often drives brilliant interactions and ideas. It’s a supercharged weekend with little to no structure, where you’re not scolded for leaving a session midway (or not attending one at all), that stretches your mind and leaves you inspired. It’s a mix of utterly fascinating people across disciplines and sectors all somehow linked by a common interest or work in science. I’d say it’s a means for us to connect to our community, but that’d be, albeit in some sense true, vastly underselling the power of it, I’m afraid. For attendees, of course, it’s energizing and open to possibilities — so very different from the usual scientific conference.

Nat Torkington: For some outsiders, obviously, it’s a bit of a strange thing for these Internet-age companies to be engaging with the white labcoats of science. Science is obviously a centuries-old (millennia-old if you take a wider view) endeavour. Why must science practice today be different from the scientific practice that gave us our understanding of atomic theory, black holes, and antibiotics?

Kaitlin Thaney: Those practices aren’t scalable anymore, and rather than helping the next generation of researchers build their careers, they’re hindering us from finding the next cure for a disease or making the next big discovery. In drug discovery, we’ve plucked the low-hanging fruit, and now need to look at larger scale datasets to identify interesting pathways for targeting.

The way we produced and disseminated knowledge has moved beyond writing on a sheet of paper and passing it to a colleague. Technology — and namely, the web — has fundamentally transformed how we interrogate and interact with content, how we discover information, our work environments, our agility. It’s time we bring our scientific research methods into the 21st century.

Nat Torkington: Nicely said! How have things changed since 2006 when you and John Wilbanks, as part of the Science Commons team at Creative Commons, first started talking about this?

Kaitlin Thaney: In some ways, we’re still fighting the same theoretical battles, just in different arenas and increasingly now with the public’s attention, not just that of the zealots. That’s an important shift in and of itself, and it shows that our early work, not only advocating publicly for this change but also backing it up with legal and technical tools for use in research (not just in copyright), is paying off, slowly but surely.

The messaging and theory our team at Creative Commons crafted, from principles of open science to open data doctrine to broader theory on the commons, was meant for the community, and to see it carried forward shows both that there’s still work to be done and that there’s an appetite for that change. Our initial aim — to make the web work better for science — is being carried forward by a number of organisations, projects, and advocates, ranging from the open science world to software shops like ours. It’s validation of a sort to see that work continue, and to see that message still resonate.

It’s becoming a mainstream conversation, with an ever-increasing focus on making the fruits of research available to be learned from, built upon, and consumed by the broader public as a means of engagement. That conversation has shifted from making the actual content available (a hard sell even for, say, green Open Access in 2006) to making sure that all of the necessary components — the underlying data, the code, the software — are also somehow listed or included to reduce the burden on the next researcher who comes along, wanting to pick up where the research left off or remix it into a new experiment.

Nat Torkington: Is it still a conversation? Why is actual change in practice so hard to make?

Kaitlin Thaney: In scientific research, we’re dealing with special circumstances: we’re trying to innovate upon hundreds of years of entrenched norms and practices, broken incentive structures, and information-discovery problems that dramatically slow the system and keep us from making the advances needed to better society. This stuff is tough, and there’s no quick one-size-fits-all solution (trust me, I’ve looked).

I spoke to a number of these issues in my talk at OSCON this past July: for all of the incredible discoveries we see hit the news and the pages of Nature, and for all the times we hear about "Science 2.0" and even "3.0" (sorry, Tim), we haven’t even hit "0.5" … at least in the worlds I work in. Issues such as these are oft-overlooked, baseline assumptions we take for granted that haven’t actually been addressed or solved.

Think of it as a calcified pipe whose blockages cause only a trickle to come out on the other end. That’s most research. The big advances in science? They have a high-pressure firehose on one end. More pressure, same broken pipe. Those breaks in the system are keeping us from doing more efficient work and truly advancing discovery.

Nat Torkington: Are there success stories you can point to, though, of gains from change in practices and behaviours?

Kaitlin Thaney: We’re beginning to see data treated more as a first-class citizen when it comes to reporting, publication, and availability. Take for example the changes undertaken recently by the National Science Foundation (NSF), one of the leading funding bodies in the sciences. They put in place a requirement that for all proposals submitted on or after January 2011, there must be a data management plan. That’s now been elaborated upon with an additional requirement for grantees to list data, software, code, patents or other “products” as research outputs as part of that funding. And from what I can tell, they intend to enforce that requirement, which has surely caught the attention of researchers applying for NSF money.

There are a host of other mandates on the institutional and funder level that have helped push the access conversation from one that many feigned ignorance of, to one that while still uncomfortable for some, is at least unavoidable. This is progress, and is helping, in a top-down fashion, provide an incentive for researchers.

Offerings like figshare (which we at Digital Science are delighted to support) are now being embedded in new ways, helping to better link data to publication (see its integration with Faculty of 1000's new research journal) and to stitch data and other research objects (i.e., posters, datasets, videos, figures) into schemes familiar to researchers, like citation. Not only is the interface stupidly simple and fast to use, it’s free, the defaults are set to open (CC-BY where copyright applies, CC0 for the rest), and everything uploaded gets a Digital Object Identifier, allowing for citation. We need to see more of that.

What I think will be interesting to watch is the continuing evolution in thinking when it comes to mapping our digital behaviours in other settings to research: taking lessons from our habits as digital consumers (which, in many cases, have become second nature) and seeing whether there’s an opportunity to apply them to science. For example, when we go to purchase an item, chances are many of us will look to online reviews, be it on Amazon or TripAdvisor, to decide whether that’s a smart investment. Now apply that to, say, sourcing biological materials like antibodies for experiments. Efficacy is a massive problem (and an expensive one at that), where you can buy the same Huntingtin antibody from five top suppliers and not realise until you’re midway through your experiment that it’s of insufficient quality. It’s like shopping blindly, hoping for the best. A team we work with in Toronto, 1DegreeBio, is working to change those odds, providing useful reviews of products in an open fashion, matched with a catalogue of the top commercial and university offerings, allowing researchers to make educated decisions that won’t waste time and money.

Of course, moving more toward efficient, digital research opens up opportunities to remix, link up, tinker, and learn in new ways, from stumbling on that next scientific discovery due to new collaborative technologies and analysis tools, to having a better understanding as to what’s going on under the hood and preserving that knowledge for the next crop of students to come along. I still think there’s work to be done in really making the web work for science, but we’re heading in the right direction — and luckily, there are plenty of us working towards that goal.

Nat Torkington: How has this mapped to tool development?

Kaitlin Thaney: In terms of technology, we have seen a burst of growth in the scientific startup space, as well as innovation in big pharma, biotech, and non-profits, to name a few. Researchers are overwhelmed with choices for managing, sharing, and analysing their research in new ways, with an ever-growing stack of tools and web apps that can enhance their experience and further bridge the gap between the way we consume, sift, and create as digital consumers and how we do so as researchers.

But at the end of the day, the researcher has to be incentivised to learn how to navigate a new interface or engage with a system. They have to trust that their investment of time and energy into adopting a new technology is going to pay off. It has to show immediate value.

At Digital Science, we craft software solutions for researchers. I can tell you, trust and incentives are not easy problems to crack, but underestimating their importance could lead to the downfall of your project or company. Many of us come from the research world ourselves, which helps in terms of grasping the severity of the problem and the right approach to get scientists on board. Beyond our core team, the founders we work with each come with a similar story of encountering a problem in their day-to-day work, getting frustrated, and crafting a solution to get around it.

To go back to the figshare example cited previously, Mark Hahnel (the founder of figshare) crafted that while he was finishing his PhD in stem cell biology. He was frustrated that he’d collected so much other data that wasn’t going to be used in publication, but still was of use to the community. That’s how figshare was born. Labguru, a lightweight lab management tool, was developed by a researcher who was sick of how much waste (time, money, and materials) there was in his lab, where the norm was a lab bench organised with colored stickers, post-it notes and stacks of invoices. You’ll see a number of these projects — and they’re the ones I find most promising — where the team involved comes with that domain expertise paired with a bit of frustration. With some added support and integration where it makes sense, they really start to make a difference.

Nat Torkington: From my limited experience, most scientists aren’t as comfortable with software and the possibilities of the Internet as they would have to be in order to make the change in practice. Is this right?

Kaitlin Thaney: Yes. Looking beyond tool offerings, skills training to match the technology is still leagues behind where it should be. This could be due, in part, I’m told, to bandwidth or a lack of resources to update curricula at universities (whilst arguably a legitimate concern to some extent, I’m not sure this excuses it), and in part to increased specialisation on a disciplinary level.

As one researcher told me, “you either are good at the wet lab stuff, or the computational analysis,” depending on whether you choose to pursue a traditional life sciences degree or the computational flavoring (e.g., computational neuroscience, biology, etc.). Some are self-taught, learning packages such as R or how to program in Python. Others are left at a loss, and basic concepts necessary for understanding information, let alone discipline-specific research — statistical literacy, data management, visualisation, and analysis — fall by the wayside.
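For a sense of how low the bar is, here is a minimal sketch of the kind of baseline statistical literacy in question, using only Python’s standard library (the measurements are invented for illustration; this is not from the interview):

```python
import statistics

# Hypothetical replicate measurements from a single assay (arbitrary units).
measurements = [4.1, 3.9, 4.3, 4.0, 4.6, 3.8, 4.2]

mean = statistics.mean(measurements)
stdev = statistics.stdev(measurements)     # sample standard deviation
sem = stdev / len(measurements) ** 0.5     # standard error of the mean

print(f"mean = {mean:.2f}, sd = {stdev:.2f}, sem = {sem:.2f}")
```

Nothing here requires a computational degree; the gap being described is that many students never see even this much.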

Resetting the defaults when students first learn how to, say, save their data alongside an experiment could make a tremendous difference in the long term: understanding how to store and mark up their data as they go along in something like figshare, how to tie that into publication, and how that fits into data management plans. A broader introduction to the vast array of tools that exist, earlier in a student’s research career, could also dramatically alter the behavioural chain in how they teach their own students, and the next generation after that, be it in better managing their lab, making their data available, or sourcing materials.

The skills gap is only increasing, and there are groups out there working to address this (e.g., Software Carpentry, rOpenSci, Young Rewired State, Code Club) that we can, and should, learn from, keeping this in mind as we continue developing top-notch tools for researchers and working to increase awareness. Having a better understanding as to how a result was reached or what data from an experiment may truly represent is empowering, and we need to make sure those methods of inquiry and skills aren’t lost.

Nat Torkington: So what’s going to happen next in this transformation of science practice? What should we be keeping our eye on and watching for?

Kaitlin Thaney: Content will continue to move online, not just in a digitised fashion, but meaningfully linked to the necessary components (where feasible) to reproduce that experiment. We’re making headway with getting the content online. Next is adding value to the content and making it useful for the community.

Work environments are also moving more to the cloud — private or public — providing a more distributed approach to sharing, analysing and filtering information.

I also foresee a more concerted effort to reset our defaults to open, from how we teach undergraduates, to the tools we arm them with to conduct their research and even publication. Providing entry points that are set to foster collaboration, sharing of information and better information management will help us establish these practices as the norm, rather than a conflicting way of doing research. And we’re starting to see the beginnings of that.

Nat Torkington: Thanks so much for taking the time to talk with me. Best of luck with your science revolution!

Kaitlin Thaney: Thanks, and I’ll see you at Kiwi Foo Camp one of these years!

October 22 2012

Open Access Week with Panel Discussion on Tuesday in Berlin

Today marks the start of Open Access Week, a week of action held since 2008 that promotes free access to scientific information. By its own account it takes place "everywhere", that is, internationally, and is, incidentally, not to be confused with the Open-Access-Tage held in Vienna at the end of September.

In Berlin there will be a panel discussion on the topic on Tuesday evening, addressing the "opportunities and challenges" of Open Science. Beyond Open Access in the narrower sense, this also covers the permanent accessibility of research data and the use of Web 2.0 in science.

From the programme:

When: Tuesday, 23 October 2012, from 6 p.m.
Where: Auditorium of the Jacob-und-Wilhelm-Grimm-Zentrum of Humboldt-Universität zu Berlin
The event is free of charge; no registration is required.

Welcome by Dr. Andreas Degkwitz (Humboldt-Universität zu Berlin)

Introductory lecture: Prof. Dr. Martin Grötschel (Zuse-Institut Berlin, Technische Universität Berlin)
Moderation: Prof. Dr. Peter Schirmbacher (Humboldt-Universität zu Berlin)

The panelists:

Dr. Christoph Bruch (Helmholtz-Gemeinschaft)
Prof. Dr. Ortwin Dally (Deutsches Archäologisches Institut)
Dr. Andreas Degkwitz (University Library of Humboldt-Universität zu Berlin)
Prof. Dr. Martin Grötschel (Technische Universität Berlin)
Dr. Jeanette Hofmann (Wissenschaftszentrum Berlin für Sozialforschung)
Dr. Angelika Lex (Elsevier)
Dr. Anne Lipp (Deutsche Forschungsgemeinschaft)

The panel is organised by the Open Access Office of the Helmholtz-Gemeinschaft, the Institute for Library and Information Science of HU Berlin, and other Berlin higher-education institutions.

Further events as part of Open Access Week are taking place this week in Dresden and Hannover, among other cities.

September 24 2012

Four short links: 24 September 2012

  1. Open Monograph Press — an open source software platform for managing the editorial workflow required to see monographs, edited volumes, and scholarly editions through internal and external review, editing, cataloguing, production, and publication. OMP will also operate as a press website with catalog, distribution, and sales capacities. (via OKFN)
  2. Sensing Activity in Royal Shakespeare Theatre (NLTK) — sensing activity in the theatre, for graphing. Raw data available. (via Infovore)
  3. Why Journalists Love Reddit (GigaOM) — “Stories appear on Reddit, then half a day later they’re on Buzzfeed and Gawker, then they’re on the Washington Post, The Guardian and the New York Times. It’s a pretty established pattern.”
  4. Relatively Prime: The Toolbox — Kickstarted podcasts on mathematics. (via BoingBoing)

September 06 2012

Four short links: 6 September 2012

  1. ENCODE Project — International project (headed by Ewan Birney of BioPerl fame) doxes the human genome, bigtime. See the Nature piece, and Ed Yong’s explanation of the awesome for more. Not only did they release the data, but also the software, including a custom VM.
  2. 5 Ways You Don’t Realize Movies Are Controlling Your Brain — this! is! awesome!
  3. RC Grasshoppers — not a band name but an Israeli research project, funded by the US Army, to remotely control insects in flight. Instead of building a tiny plane whose dimensions would be measured in centimeters, the researchers are taking advantage of 300 million years of evolution.
  4. enquire.js — small Javascript library for building responsive websites. (via Darren Wood)

August 24 2012

Four short links: 24 August 2012

  1. Speak Like a Pro (iTunes) — practice public speaking, and your phone will rate your performance and give you tips to improve. (via Idealog)
  2. If Hemingway Wrote Javascript — glorious. I swear I marked Andre Breton’s assignments at university. (via BoingBoing)
  3. rOpenSci — open source R packages that provide programmatic access to a variety of scientific data, full text of journal articles, and repositories that provide real-time metrics of scholarly impact.
  4. Keeping Your Site Alive (EFF) — guide to surviving DDOS attacks. (via BoingBoing)

April 08 2010

02mydafsoup-01: a Discussion and Lecture Online Platform, Founded in 2007, Has Been Charging Since March 2010 for Access to New Complete Video Streams

Since March 24th, 2010, there has no longer been a clear dividing line between scientific information, PR, and charging: the newly introduced premium access is neither clearly defined nor transparent in its extent. What is charged for, and for how long? Is there a timeline after which the videos become free? The shift from free access to premium access was made mostly silently; this intransparency is bad style toward an audience largely made up of an international community of netizens, mostly people who tried to support and build up a freely accessible network of good information sources, which, incidentally, was also the platform's PR strategy over the last years. The way the financial reward is now handled shows a lack of information-society thinking and of new ways to organise it with digital technologies while still generating income. Unfortunately it also shows a lack of honesty, which, through systematically built-up intransparency, quietly threatens access to reliable, web-based, quality information.

[To whom it may concern: @sigalon02 @sigalon @sigaloninspired]

oanth - muc - 20100408