July 17 2013

Four short links: 17 July 2013

  1. Hideout — augmented reality books. (via Hacker News)
  2. Patterns and Practices for Open Source Software Success (Stephen Walli) — Successful FOSS projects grow their communities outward to drive contribution to the core project. To build that community, a project needs to develop three onramps for software users, developers, and contributors, and ultimately commercial contributors.
  3. How to Act on LKML — Linus’s tantrums are called out by one of the kernel developers in a clear and positive way.
  4. Beyond the Coming Age of Networked Matter (BoingBoing) — Bruce Sterling’s speculative short story, written for the Institute For The Future. “Stephen Wolfram was right about everything. Wolfram is the greatest physicist since Isaac Newton. Since Plato, even. Our meager, blind physics is just a subset of Wolfram’s new-kind-of-science metaphysics. He deserves fifty Nobels.” “How many people have read that Wolfram book?” I asked him. “I hear that his book is, like, huge, cranky, occult, and it drives readers mad.” “I read the forbidden book,” said Crawferd.

May 31 2013

Four short links: 31 May 2013

  1. Modeling Users’ Activity on Twitter Networks: Validation of Dunbar’s Number (PLoS ONE) — In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar’s number. We find that the data are in agreement with Dunbar’s result; users can entertain a maximum of 100–200 stable relationships. Thus, the ‘economy of attention’ is limited in the online world by cognitive and biological constraints as predicted by Dunbar’s theory. We propose a simple model for users’ behavior that includes finite priority queuing and time resources that reproduces the observed social behavior. (A toy sketch of the finite-priority-queue idea follows this list.)
  2. Mary Meeker’s Internet Trends (Slideshare) — check out slide 24, ~2x month-on-month growth for MyFitnessPal’s number of API calls, which Meeker uses as a proxy for “fitness data on mobile + wearable devices”.
  3. What I Learned as an Oompa Loompa (Elaine Wherry) — working in a chocolate factory, learning the differences and overlaps between a web startup and a more traditional physical-goods business. It’s so much easier to build a sustainable organization around a simple revenue model. There are no tensions between ad partners, distribution sites, engineering, and sales teams. There are fewer points of failure. Instead, everyone is aligned towards a simple goal: make something people want.
  4. Augmented Reality Futures (Quartz) — wrap-up of tech in the works and coming. Instruction is the bit that interests me, scaffolding our lives: While it isn’t on the market yet, Inglobe Technologies just previewed an augmented reality app that tracks and virtually labels the components of a car engine in real time. That would make popping the hood of your car on the side of the road much less scary. The app claims to simplify tasks like checking oil and topping up coolant fluid, even for novice mechanics.
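
The priority-queue model in item 1 is easier to picture with a little code. The sketch below is a deliberately crude illustration of the idea, not the model from the paper: each time step the user has a fixed reply budget, answers the contacts at the front of a priority queue, and answering a contact reinforces that tie while all ties decay. However long the contact list grows, the number of relationships that stay “stable” is capped by the budget (the budget of 150 below is an arbitrary choice, not a figure from the paper).

```typescript
// Toy sketch of "finite priority queuing + limited time": a fixed reply budget
// per step caps how many relationships can stay stable, no matter how many
// contacts exist.

interface Contact {
  id: number;
  strength: number;        // tie strength; doubles as queue priority
  repliesReceived: number;
}

function stableRelationships(numContacts: number, repliesPerStep: number, steps: number): number {
  const contacts: Contact[] = Array.from({ length: numContacts }, (_, id) => ({
    id,
    strength: Math.random(),   // arbitrary initial tie strength
    repliesReceived: 0,
  }));

  for (let t = 0; t < steps; t++) {
    for (const c of contacts) c.strength *= 0.95;                 // ties decay
    const queue = [...contacts].sort((a, b) => b.strength - a.strength);
    for (const c of queue.slice(0, repliesPerStep)) {             // finite budget
      c.strength += 1;                                            // reinforcement
      c.repliesReceived += 1;
    }
  }

  // Call a relationship "stable" if it was answered in at least 10% of steps.
  return contacts.filter(c => c.repliesReceived >= 0.1 * steps).length;
}

for (const n of [100, 1000, 10000]) {
  console.log(`${n} contacts -> ${stableRelationships(n, 150, 300)} stable relationships`);
}
```

Growing the contact list past the budget leaves the stable count flat, which is the qualitative behaviour the paper attributes to limited attention.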

May 09 2013

Four short links: 9 May 2013

  1. On Google’s Ingress Game (ReadWrite Web) — By rolling out Ingress to developers at I/O, Google hopes to show how mobile, location, multi-player and augmented reality functions can be integrated into developer application offerings. In that way, Ingress becomes a kind of “how-to” template to developers looking to create vibrant new offerings for Android games and apps. (via Mike Loukides)
  2. Nanoscribe Micro-3D Printer — in contrast to stereolithography (SLA), the resolution is between 1 and 2 orders of magnitude higher: Feature sizes in the order of 1 µm and less are standard. (via BoingBoing)
  3. Thingpunk — The problem of the persistence of these traditional values is that they prevent us from addressing the most pressing design questions of the digital era: How can we create these forms of beauty and fulfill this promise of authenticity within the large and growing portions of our lives that are lived digitally? Or, conversely, can we learn to move past these older ideas of value, to embrace the transience and changeability offered by the digital as virtues in themselves? Thus far, instead of approaching these (extremely difficult) questions directly, traditional design thinking has led us to avoid them by trying to make our digital things more like physical things (building in artificial scarcity, designing them skeuomorphically, etc.) and by treating the digital as a supplemental add-on to primarily physical devices and experiences (the Internet of Things, digital fabrication).
  4. Kickstarter and NPR — The internet turns everything into public radio. There’s a truth here about audience-supported media and the kinds of money-extraction systems necessary to beat freeloading in a medium that makes money-collection hard and freeloading easy.

March 12 2013

Google Glass: What is the wearer of data glasses allowed to do?

Google's new data glasses can collect data of an entirely new quality, and their wearer contributes to Google's mountains of data. What are the consequences under data protection law?

According to media reports, a new era in the human-machine relationship could begin as early as this year: Google Glass, the internet giant's data glasses, will allow its wearers, for an estimated price of under 1,200 euros (the cost of a developer version of the glasses), to take photos and videos by voice command, upload them to the internet immediately and share them with other users, among other things. Discussions about the effects and dangers of using this device have already begun; see also Netzpolitik.org.

The ultimate cookie

On the one hand, there are concerns that this opens up yet another way for Google to collect data about people and to build an even more detailed profile of the glasses' wearer. The ultimate cookie, then? Reading GPS data from the device, for instance, would be nothing really new. But Google would receive far more data: besides taking photos and videos, it will also be possible to use Google's search engine and translation service, and to write and send messages.

Yes, such data can already be captured with a smartphone today. The difference, however, will probably be one of sheer volume, which Google Glass will increase immensely: usability multiplies when everything that used to require fingers and tedious typing on a smartphone can be done by voice command.

The wearer's responsibility

On the other hand, the question of the wearer's responsibility arises when he takes photos and videos of his surroundings. Completely unnoticed by third parties (aiming a smartphone at them is no longer necessary), personal data could be collected, transferred to Google's servers in the USA and then published by the user, for instance on Google Plus.

Under data protection law, the user can in such cases be regarded as a (jointly) responsible party. An exemption for purely family or personal activities cannot be assumed, at least where images are published openly on the internet. The provisions of the German Federal Data Protection Act on video surveillance (Section 6b BDSG) would probably not apply, since they presuppose observation, i.e. recording video over a longer period of time.

The decisive criterion for the lawfulness of processing personal data would then be the balancing against the legitimate interests of the people affected (Section 28(1) sentence 1 no. 3 BDSG). They will usually have no idea that a photo is being taken; they merely see a person wearing glasses looking in their direction. The content of the data will also matter: what is being photographed? Intimate moments, embarrassing situations?

Ubiquitous data processing

One might also draw a comparison with the controversy around Google Street View. But who would then be responsible for blurring the faces of the people shown? Apart from the data protection implications, other areas of law naturally come into play as well, such as the general right of personality with its protection of privacy, or the right to one's own image. Not to mention possible criminal liability (Section 201a of the German Criminal Code, StGB).

Beyond the many legal questions, the introduction of Google Glass will certainly also feed the broader public debate about the value of privacy.

How far are we prepared to adapt and subordinate our lives to ubiquitous data processing?

Where are the limits, or are they blurring right now?

Carlo Piltz is a legal trainee (Referendar) at the Kammergericht Berlin and wrote his doctoral thesis on “Soziale Netzwerke im Internet – Eine Gefahr für das Persönlichkeitsrecht?” (social networks on the internet as a threat to personality rights). He blogs at “de lege data – Datenschutz, Privacy, Web 2.0”, where this article first appeared. Published with the author's kind permission (the CC licence does not apply here). Photo: Thomas Hawk, CC BY-NC.

March 06 2013

Augmented Reality: a product for dreamers, or soon a reality?

E-commerce experts are divided on augmented reality, the technology of “extended reality”: for some it represents web shopping perfected, with enormous potential to boost sales, for example by positively influencing purchase decisions and by lowering return rates. Others regard this technology, which overlays virtual data onto the real world, as merely another hype with no sales effect worth mentioning.

How worthwhile an investment in augmented reality really is depends first and foremost on how receptive internet users, i.e. potential online shoppers, are to this technical innovation. The share of users who find augmented reality interesting for virtually displaying, trying on and selecting products is quite considerable. And it varies greatly from one product category to another.

Augmented reality for home furnishing still holds great potential

It is above all in home furnishing that many users find augmented reality attractive: every second internet user said they would find it useful, when planning how to furnish a home, to film or photograph rooms and then overlay furnishings onto the image as if they were actually in that environment.

This is evidently most interesting for more expensive purchases such as furniture (80%) and kitchen fittings (69%). Many (62%) also appreciate the benefits of augmented reality for designing walls, for example to test wallpaper patterns and paint colours. It meets with little response, however, when it comes to virtually fitting in consumer electronics: only 17% of those interested consider this useful.

Augmented reality for fashion products interesting to many

For fashion products, by contrast, the circle of those interested in augmented reality is somewhat smaller: here only about 42% of the internet users surveyed signal interest. By far the most (a good two thirds) consider virtually trying on glasses useful. A good half each are open to augmented reality offerings when buying T-shirts, trousers and hair colours. It is a different picture for virtually trying on shoes, lingerie/underwear, jewellery and cosmetics: at most one in five of those interested considers this worthwhile.

The W3B report “Trends im E-Commerce” therefore looks in depth at current e-commerce topics and trends from the perspective of users and customers, supporting the successful, target-group-oriented optimisation and positioning of web shops.

March 04 2013

Four short links: 4 March 2013

  1. Life Inside the Aaron Swartz Investigation — do hard things and risk failure. What else are we on this earth for?
  2. crossfilter — open source (Apache 2) JavaScript library for exploring large multivariate datasets in the browser. Crossfilter supports extremely fast (<30ms) interaction with coordinated views, even with datasets containing a million or more records. (A minimal usage sketch follows this list.)
  3. Steve Mann: My Augmediated Life (IEEE) — Until recently, most people tended to regard me and my work with mild curiosity and bemusement. Nobody really thought much about what this technology might mean for society at large. But increasingly, smartphone owners are using various sorts of augmented-reality apps. And just about all mobile-phone users have helped to make video and audio recording capabilities pervasive. Our laws and culture haven’t even caught up with that. Imagine if hundreds of thousands, maybe millions, of people had video cameras constantly poised on their heads. If that happens, my experiences should take on new relevance.
  4. The Google Glass Feature No-One Is Talking AboutThe most important Google Glass experience is not the user experience – it’s the experience of everyone else. The experience of being a citizen, in public, is about to change.
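
For item 2, here is a minimal sketch of how crossfilter's coordinated filtering is used. It assumes the library is installed from npm (imported here as crossfilter2, a published build of the same API), and the payment records are invented for illustration.

```typescript
// Minimal crossfilter sketch: dimensions are sorted indexes over single fields,
// and filtering one dimension updates groups computed on the other dimensions.
import crossfilter from "crossfilter2";

interface Payment {
  date: Date;
  amount: number;
  type: string;
}

const payments: Payment[] = [
  { date: new Date("2013-03-01"), amount: 90, type: "tab" },
  { date: new Date("2013-03-02"), amount: 200, type: "visa" },
  { date: new Date("2013-03-02"), amount: 40, type: "cash" },
  // ...in practice, hundreds of thousands of records
];

const cf = crossfilter(payments);

const byAmount = cf.dimension(p => p.amount);
const byType = cf.dimension(p => p.type);

// Restrict to payments between 50 and 300; groups on *other* dimensions now
// reflect only the records that pass this filter, which is what keeps several
// charts over the same dataset in sync.
byAmount.filterRange([50, 300]);
console.log(byType.group().all());   // counts per payment type among the filtered records

// The ten largest payments satisfying all current filters.
console.log(byAmount.top(10));
```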

December 07 2012

Four short links: 7 December 2012

  1. AR Drone That Infects Other Drones With Virus Wins DroneGames (IEEE) — how awesome is a contest where a group that taught a drone to behave itself on the end of a leash (constantly taking pictures, performing facial recognition, and posting the resulting images to Twitter in real time) didn’t win.
  2. BitCoin-Central Becomes Legit Bank — After all this patient work and lobbying we’re finally happy and proud to announce that Bitcoin-Central.net becomes today the first Bitcoin exchange operating within the framework of European regulations. Covered by FDIC-equivalent, can have debit or credit cards connected to the BitCoin account, can even get your salary auto-deposited into your BitCoin account.
  3. The Antifragility of the Web (Kevin Marks) — By shielding people from the complexities of the web, by removing the fragility of links, we’re actually making things worse. We’re creating a fragility debt. Suddenly, something changes – money runs out, a pivot is declared, an acqui-hire happens, and the pent-up fragility is resolved in a Black Swan moment.
  4. xcharts (GitHub) — sweet charts in Javascript.

August 28 2012

Four short links: 28 August 2012

  1. Javascript Tips for Non-Specialists (OmniTI) — “hey kid, you’re going to have to write browser Javascript. Read this and you’ll avoid the obvious cowpats.”
  2. Museum Datasets (Seb Chan) — collections metadata generally aren’t of good quality (often materials are indexed at the “box level”, i.e. this item number is a BOX and it contains photos of these things) and aren’t all that useful. The story about the Parisian balcony grille is an excellent reminder that the institution’s collections aren’t a be-all and end-all for researchers.
  3. Hurricane Electric BGP Toolkit — open source tools for diagnosing network problems. (via Nelson Minar)
  4. Evernote Smart Notebook by Moleskine — computer vision to straighten up photographed pages of the notebook, and the app recognizes special stickers placed on the book as highlights and selections. Nifty micro-use of augmented reality.

July 25 2012

Four short links: 25 July 2012

  1. Bank of England Complains About AR Bank Notes — After downloading the free Blippar app on iPhone or Android, customers were able to ‘blipp’ any ten-pound note in circulation by opening the app and holding their phone over the note. An animated Queen, and other members of the Royal Family, then appeared on the screen and voiced opinions on the latest football matters.
  2. Kittydar — open source computer vision library in JavaScript for identifying cat faces. I am not making this up. (via Kyle McDonald)
  3. Quantified Mind — battery of cognitive tests, so you can track performance over time and measure the effect of interventions (coffee, diet, exercise, whatever). (via Sara Winge)
  4. Jellyfish Made From Rat Cells (Nature) — an artificial jellyfish using silicone and muscle cells from a rat’s heart. The synthetic creature, dubbed a medusoid, looks like a flower with eight petals. When placed in an electric field, it pulses and swims exactly like its living counterpart. Very cool, but the bit that caught my eye was: the team built the medusoid as a way of understanding the “fundamental laws of muscular pumps”. It is an engineer’s approach to basic science: prove that you have identified the right principles by building something with them.

April 22 2011

HUMAN+ The Future of Our Species - in pictures

Artists and scientists explore the future of our species in the HUMAN+ exhibition at the Science Gallery, Trinity College Dublin



April 21 2011

See the world in a new light

Dr Patrick Degenaar explains how retinal prosthetics may one day allow humans to see in ultraviolet and infrared, a concept explored in a film unveiled at the HUMAN+ exhibition in Dublin

The purpose of retinal prosthetics is to restore sight to patients who have a degenerative condition called retinitis pigmentosa, which affects one in 3,500 people. In the condition, the retina's light-sensing cells – rods and cones – become inactive and eventually die. Symptoms start with night blindness and worsening tunnel vision, but eventually there is a total loss of sight.

In 1992, research showed that the eye's communication cells – known as retinal ganglion cells – remain intact in patients with retinitis pigmentosa. The discovery opened up the prospect of restoring some form of visual function to these people by controlling the cells' communication patterns.

In the two decades since the research was published, hundreds of millions of pounds have been invested in retinal prosthesis research. Unfortunately, in contrast to the development of cochlear implants – which restore hearing to the deaf – progress has been slow. The highest-resolution prosthesis to date was created by the Retina Implant company based in Tübingen, Germany, whose 1,500-electrode implant has allowed one of their patients, Mika, to distinguish large white characters on a black background.

One of the key challenges has been the fundamental architecture of our visual system. The eye is not simply a camera, but the first stage in a system for understanding the world around us. There are around 50 different types of processing neuron in the retina, and more than 20 types of retinal ganglion cell. So the visual cortex of the brain expects to receive the visual world encoded in a "neural song" of many different voices. Precise coding to reproduce this song is hard to achieve with implanted electrodes and the result is that the patient sees phosphenes – flashing dots of light – rather than what we would normally define as sight.

Optogenetics, an exciting new gene therapy technique, has the potential to bypass many of these problems and last year was hailed as Method of the Year by the journal Nature. Invented in the lab of Ernst Bamberg at the Max Planck Institute in Frankfurt eight years ago, the technique uses gene therapy to sensitise nerve cells to particular colours of light. Intense pulses of this wavelength of light make the photosensitised nerve cells fire. (Neurologists call each firing of a nerve an "action potential" – the currency of information in the nervous system.)

So in optogenetic retinal prosthetics, rather than performing highly complex surgery to implant electrodes into a patient's retina, a solution of a special virus would simply be injected to introduce new genes into the nerve cells. The patient would then wear a headset that records and interprets the visual scene and sends coded pulses of light to the retina. As a single pulse of light can generate a single action potential, the information encoded from the visual scene can be much more in tune with the neural song expected by the visual cortex.
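
A toy sketch may help make "coded pulses of light" concrete. The mapping below is my illustration of simple rate coding under the article's premise that one light pulse can trigger one action potential; it is not the OptoNeuro encoder, and the maximum pulse rate is an assumed number.

```typescript
// Toy rate-coding illustration (not the actual OptoNeuro encoding): brighter
// pixels become denser trains of light pulses, each pulse standing in for
// roughly one action potential in a photosensitised ganglion cell.

const MAX_PULSES_PER_SECOND = 200;   // assumed ceiling, purely illustrative

// Map an 8-bit brightness value (0-255) to a pulse rate in Hz.
function pulseRate(brightness: number): number {
  const clamped = Math.min(Math.max(brightness, 0), 255);
  return (clamped / 255) * MAX_PULSES_PER_SECOND;
}

// Evenly spaced pulse times (in ms) for one pixel over a given time window.
function pulseTrain(brightness: number, windowMs: number): number[] {
  const rate = pulseRate(brightness);
  if (rate === 0) return [];
  const intervalMs = 1000 / rate;
  const times: number[] = [];
  for (let t = 0; t < windowMs; t += intervalMs) times.push(t);
  return times;
}

console.log(pulseTrain(32, 100).length);    // dim pixel: 3 pulses in 100 ms
console.log(pulseTrain(255, 100).length);   // bright pixel: 20 pulses in 100 ms
```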

The OptoNeuro European project I lead at Newcastle University is researching this new approach, and we hope to start human trials towards the middle of this decade.

The first optogenetic retinal prostheses will not deliver perfect vision, so we have teamed up with the London-based design practice Superflux to explore how the user's interaction with this new technology can be made more practical and meaningful in the coming years. The key objective is to maximise the useful sight restored to the patient while also exploring the unique possibilities of this new, modified – even enhanced – form of vision.

In their concept video Song of the Machine (above), Anab Jain, Jon Ardern and Justin Pickard explore the personal and emotional complexities that might arise once this science leaves the lab and begins to touch our daily lives. The title is derived from the idea that in optogenetic retinal prosthetics the body is itself modified to interface with the machine in order to appreciate the neural song.

Even if resolution is low, the prosthesis could allow users to experience the visual world in wavelengths beyond those perceptible to normal-sighted humans. For example the eye absorbs ultraviolet light before it reaches the retina, and nature finds it difficult to make infrared light receptors. Such constraints do not affect modern camera technology.

This "multi-spectral imaging" could be used for purely pragmatic purposes, such as telling at a glance whether an object is too hot to touch. Alternatively, it could create a certain visual poetry by allowing us to experience a flower in all its ultraviolet glory – as seen by honey bees.

By exploring these possibilities in our research, it may be possible to improve the experience of the patients who will eventually wear these prostheses, allowing them to enjoy some of the benefits of the new field of augmented reality.

Dr Patrick Degenaar is an optogenetics researcher at Newcastle University where he leads the OptoNeuro project

Song of the Machine is on show as part of the HUMAN+ exhibition at the Science Gallery, Trinity College Dublin, which runs until 24 June



