
July 17 2013



AutoMySQLBackup with a basic configuration will create daily, weekly, and monthly backups of one or more of your MySQL databases from one or more of your MySQL servers.

Other Features include:
– Email notification of backups
– Backup Compression and Encryption
– Configurable backup rotation
– Incremental database backups

Does anyone know this tool? It looks as simple as `apt-get install automysqlbackup`, and it would replace my homegrown scripts.
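The rotation scheme described above (daily, weekly, and monthly backups, each kept for a configurable window) can be sketched in Python. The bucket boundaries and retention counts below are illustrative defaults, not automysqlbackup's actual configuration:

```python
from datetime import date, timedelta

def classify_backup(d: date) -> str:
    """Bucket a backup by date: monthly on the 1st, weekly on Saturdays,
    daily otherwise (a rough sketch of automysqlbackup-style rotation)."""
    if d.day == 1:
        return "monthly"
    if d.weekday() == 5:          # Saturday
        return "weekly"
    return "daily"

def prune(dates, keep_daily=7, keep_weekly=5, keep_monthly=12):
    """Return the sorted list of backup dates to keep under the policy."""
    keep = {"daily": keep_daily, "weekly": keep_weekly, "monthly": keep_monthly}
    buckets = {"daily": [], "weekly": [], "monthly": []}
    for d in sorted(dates, reverse=True):      # newest first
        buckets[classify_backup(d)].append(d)
    return sorted(d for name, ds in buckets.items() for d in ds[:keep[name]])

# Example: a year of nightly backups pruned down to the rotation policy.
year = [date(2013, 1, 1) + timedelta(days=i) for i in range(365)]
kept = prune(year)
```

Running this keeps 7 dailies, 5 weeklies, and 12 monthlies out of the 365 nightly backups.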



April 23 2013

Four short links: 23 April 2013

  1. Drawscript — Processing for Illustrator. (via BERG London)
  2. Archive Team Warrior — a virtual archiving appliance. You can run it to help with the ArchiveTeam archiving efforts. It will download sites and upload them to our archive. (via Ed Vielmetti)
  3. Retro Vectors — royalty-free and free of charge.
  4. TokutekDB Goes Open Source — a high-performance, transactional storage engine for MySQL and MariaDB. See the announcement.

July 25 2012

Four short links: 26 July 2012

  1. Drones Over Somalia are Hazard to Air Traffic (Washington Post) — In a recently completed report, U.N. officials describe several narrowly averted disasters in which drones crashed into a refu­gee camp, flew dangerously close to a fuel dump and almost collided with a large passenger plane over Mogadishu, the capital. (via Jason Leopold)
  2. Sequel Pro — free and open source Mac app for managing MySQL databases. It’s an update of CocoaMySQL.
  3. Neural Network Improves Accuracy of Least Invasive Breast Cancer Test — nice use of technology to make lives better, for which the creator won the Google Science Fair. Oh yeah, she’s 17. (via Miss Representation)
  4. Free Harder to Find on Amazon — so much for ASINs being permanent and unchangeable. Amazon “updated” the ASINs for a bunch of Project Gutenberg books, which means they’ve lost all the reviews, purchase history, incoming links, and other juice that would have put them at the top of searches for those titles. Incompetence, malice, greed, or a purely innocent mistake? (via Glyn Moody)

June 08 2012

Four short links: 8 June 2012

  1. HAproxy -- high availability proxy, cf Varnish.
  2. Opera Reviews SPDY -- thoughts on the high-performance HTTP++ from a team with experience implementing their own protocols. Section 2 makes a good intro to the features of SPDY if you've not been keeping up.
  3. Jetpants -- Tumblr's automation toolkit for handling monstrously large MySQL database topologies. (via Hacker News)
  4. LeakedIn -- check if your LinkedIn password was leaked. Chris Shiflett had this site up before LinkedIn had publicly admitted the leak.
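The leaked LinkedIn hashes were unsalted SHA-1 digests, so a checker like LeakedIn only needs to hash a candidate password and look it up in the leaked list. A minimal sketch (the `leaked` set here is a made-up stand-in for the real dump):

```python
import hashlib

def sha1_hex(password: str) -> str:
    """Unsalted SHA-1, the scheme the leaked LinkedIn hashes used."""
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

# A tiny stand-in for the leaked digest list (hypothetical sample).
leaked = {sha1_hex("linkedout"), sha1_hex("password")}

def is_compromised(password: str) -> bool:
    """Check a password against the leaked digests without sending it anywhere."""
    return sha1_hex(password) in leaked

print(is_compromised("password"))   # True: its SHA-1 is in the leaked set
```

The lack of salting is exactly what made the real dump so easy to crack, and what makes a lookup service like this possible.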

April 14 2012

MySQL in 2012: Report from Percona Live

The big annual MySQL conference, started by MySQL AB in 2003 and run
by my company O'Reilly for several years, lives on under the able
management of Percona. This
fast-growing company started out doing consulting on MySQL,
particularly in the area of performance, and branched out into
development and many other activities. The principals of this company
wrote the most recent two editions of the popular O'Reilly book High
Performance MySQL.

Percona started offering conferences a couple years ago and decided to
step in when O'Reilly decided not to run the annual MySQL conference
any more. Oracle did not participate in Percona Live, but has
announced its own MySQL
conference for next September.

Percona Live struck me as a success, with about one thousand attendees
and the participation of leading experts from all over the MySQL
world, save for Oracle itself. The big players in the MySQL user
community came out in force: Facebook, HP, Google, Pinterest (the
current darling of the financial crowd), and so on.

The conference followed the pattern laid down by old ones in just
about every way, with the same venue (the Santa Clara Convention
Center, which is near a light-rail but nothing else of interest), the
same food (scrumptious), the standard format of one day of tutorials
and two days of sessions (although with an extra developer day tacked
on, which I will describe later), an expo hall (smaller than before,
but with key participants in the ecosystem), and even community awards
(O'Reilly Media won an award as Corporate Contributor of the Year).
Monty Widenius was back as always with a MariaDB entourage, so it
seemed like old times. The keynotes seemed less well attended than the
ones from previous conferences, but the crowd was persistent and
showed up in impressive numbers for the final events--and I don't
believe it was because everybody thought they might win one of the
door prizes.

Jeremy Zawodny ready to hand out awards.

Two contrasting database deployments

I checked out two well-attended talks by system architects from two
high-traffic sites: Pinterest and craigslist. The radically divergent
paths they took illustrate the range of options open to data centers
nowadays--and the importance of studying these options so a data
center can choose the path appropriate to its mission and needs.

Jeremy Zawodny (co-author of the first edition of High Performance
MySQL) described the design of craigslist's site, which illustrates the model of
software accretion over time and an eager embrace of heterogeneity.
Among their components are:

  • Memcache, lying between the web servers and the MySQL database in
    classic fashion.

  • MySQL to serve live postings, handle abuse reports, provide data for
    the monitoring system, and cover other immediate needs.

  • MongoDB to store almost 3 billion items related to archived (no longer
    live) postings.

  • HAproxy to direct requests to the proper MySQL server in a cluster.

  • Sphinx for text searches, with indexes over all live postings,
    archived postings, and forums.

  • Redis for temporary items such as counters and blobs.

  • An XFS filesystem for images.

  • Other helper functions that Zawodny lumped together as "async" tasks.

Care and feeding of this menagerie becomes a job all in itself.
Although craigslist hires enough developers to assign them to
different areas of expertise, they have also built an object layer
that understands MySQL, cache, Sphinx, and MongoDB. The original purpose
of this layer was to aid in migrating old data from MySQL to MongoDB
(a procedure Zawodny admitted was painful and time-consuming) but it
was extended into a useful framework that most developers can use
every day.

Zawodny praised MySQL's durability and its method of replication. But
he admitted that they used MySQL also because it was present when they
started and they were familiar with it. So adopting the newer entrants
into the data store arena was by no means done haphazardly or to try
out cool new tools. Each one precisely meets particular needs of the site.

For instance, besides being fast and offering built-in sharding,
MongoDB was appealing because they don't have to run ALTER TABLE every
time they add a new field to the database. Old entries coexist happily
with newer ones that have different fields. Zawodny also likes using a
Perl client to interact with a database, and the Perl client provided
by MongoDB is unusually robust because it was developed by 10gen
directly, in contrast to many other datastores where Perl was added by
some random volunteer.
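The point about ALTER TABLE can be made concrete: in a document store, records with different shapes coexist, and readers simply tolerate missing fields instead of requiring a schema migration. Plain dicts stand in for MongoDB documents in this sketch:

```python
# Old and new "documents" with different fields coexisting, the property
# that spares craigslist an ALTER TABLE run for every new field.
postings = [
    {"_id": 1, "title": "couch for sale"},                         # old shape
    {"_id": 2, "title": "free firewood", "neighborhood": "soma"},  # new field
]

# Readers tolerate the missing field rather than demanding a migration:
for p in postings:
    print(p["title"], p.get("neighborhood", "(not recorded)"))
```

The same idea holds in any schemaless store: the cost moves from migration time to read time, where code must handle absent fields.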

The architecture at craigslist was shrewdly chosen to match their
needs. For instance, because most visitors click on the limited set
of current listings, the Memcache layer handles the vast majority of
hits and the MySQL database has a relatively light load.
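The craigslist read path described here is the classic cache-aside pattern. A toy sketch, with a dict standing in for Memcache and a stub function standing in for the MySQL query:

```python
cache = {}            # stands in for Memcache
db_reads = 0          # counts how often we fall through to "MySQL"

def db_fetch(posting_id):
    """Stand-in for a MySQL query; in production this would hit the database."""
    global db_reads
    db_reads += 1
    return {"id": posting_id, "title": f"posting {posting_id}"}

def get_posting(posting_id):
    """Cache-aside read: try the cache first, fall back to the database."""
    if posting_id not in cache:
        cache[posting_id] = db_fetch(posting_id)
    return cache[posting_id]

for _ in range(1000):          # repeated hits on a hot posting
    get_posting(42)
print(db_reads)                # 1: only the first read touched the database
```

With most visitors clicking the same current listings, nearly every read resolves in the cache layer, which is why the MySQL load stays light.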

However, the MySQL deployment is also carefully designed. Clusters are
vertically partitioned in two nested ways. First, different types of
items are stored on separate partitions. Then, within each type, the
nodes are further divided by the type of query:

  • A single master to handle all writes.

  • A group for very fast reads (such as lookups on a primary key)

  • A group for "long reads" taking a few seconds

  • A special node called a "thrash handler" for rare, very complex
    queries.
It's up to the application to indicate what kind of query it is
issuing, and HAproxy interprets this information to direct the query
to the proper set of nodes.
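That routing step can be sketched as follows: the application labels each query, and a router picks a node from the matching pool, round-robin, which is roughly the role HAproxy plays here. The pool names are illustrative, not craigslist's actual configuration:

```python
# Node pools mirroring the partitioning described above
# (names are made up for illustration).
POOLS = {
    "write":  ["master"],
    "fast":   ["fast-1", "fast-2"],
    "long":   ["long-1", "long-2"],
    "thrash": ["thrash-handler"],
}

counters = {pool: 0 for pool in POOLS}

def route(query_kind: str) -> str:
    """Pick a node for a query the application has already classified,
    round-robin within the pool."""
    pool = POOLS[query_kind]
    node = pool[counters[query_kind] % len(pool)]
    counters[query_kind] += 1
    return node

assert route("write") == "master"
assert route("fast") == "fast-1" and route("fast") == "fast-2"
```

Keeping the classification in the application means the proxy layer stays stateless and simple.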

Naturally, redundancy is built in at every stage (three HAproxy
instances used in round robin, two Memcache instances holding the same
data, two data centers for the MongoDB archive, etc.).

It's also interesting what recent developments have been eschewed by
craigslist. They self-host everything and use no virtualization.
Zawodny admits this leads to an inefficient use of hardware, but
avoids the overhead associated with virtualization. For efficiency,
they have switched to SSDs, allowing them to scale down from 20
servers to only 3. They don't use a CDN, finding that with aggressive
caching and good capacity planning they can handle the load
themselves. They send backups and logs to a SAN.

Let's turn now from the teeming environment of craigslist to the
decidedly lean operation of Pinterest, a much younger and smaller
organization. As presented
by Marty Weiner and Yashh Nelapati, when they started web-scale
growth in the Autumn of 2011, they reacted somewhat like craigslist,
but with much less thinking ahead, throwing in all sorts of software
such as Cassandra and MongoDB, and diversifying a bit recklessly.
Finally they came to their senses and went on a design diet. Their
resolution was to focus on MySQL--but the way they made it work is
unique to their data and application.

They decided against using a cluster, afraid that bad application code
could crash everything. Sharding is much simpler and doesn't require
much maintenance. Their advice for implementing MySQL sharding:

  • Make sure you have a stable schema, and don't add features for a
    couple months.

  • Remove all joins and complex queries for a while.

  • Do simple shards first, such as moving a huge table into its own
    database.
They use Pyres, a
Python clone of Resque, to move data into shards.

However, sharding imposes severe constraints that led them to
hand-crafted work-arounds.

Many sites want to leave open the possibility for moving data between
shards. This is useful, for instance, if they shard along some
dimension such as age or country, and they suddenly experience a rush
of new people in their 60s or from China. The implementation of such a
plan requires a good deal of coding, described in the O'Reilly book High
Performance MySQL, including the creation of a service that just
accepts IDs and determines what shard currently contains the ID.

The Pinterest staff decided that an ID service would introduce a single
point of failure, and chose instead to hard-code a shard ID in every ID
assigned to a row. This means they never move data between shards,
although shards can be moved bodily to new nodes. I think this works
for Pinterest because they shard on arbitrary IDs and don't have a
need to rebalance shards.
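Hard-coding a shard into each ID can be sketched as simple bit-packing: reserve a few high bits of a 64-bit ID for the shard number, so any service can recover the shard from the ID alone. The bit split below is an assumption for illustration, not Pinterest's actual layout:

```python
SHARD_BITS = 12        # illustrative split of a 64-bit ID, not Pinterest's real layout

def make_id(shard_id: int, local_id: int) -> int:
    """Pack the shard number into the high bits of every row ID."""
    return (shard_id << (64 - SHARD_BITS)) | local_id

def shard_of(row_id: int) -> int:
    """Recover the shard from the ID itself, with no lookup service."""
    return row_id >> (64 - SHARD_BITS)

pin_id = make_id(shard_id=37, local_id=123456)
assert shard_of(pin_id) == 37
```

The trade-off is exactly as described: routing needs no central service, but a row can never migrate to a different shard without changing its ID.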

Even more interesting is how they avoid joins. Suppose they want to
retrieve all pins associated with a certain board associated with a
certain user. In classical, normalized relational database practice,
they'd have to do a join on the board, pin, and user tables. But
Pinterest maintains extra mapping tables. One table maps users to
boards, while another maps boards to pins. They query the
user-to-board table to get the right board, query the board-to-pin
table to get the right pin, and then do simple queries without joins
on the tables with the real data. In a way, they implement a custom
NoSQL model on top of a relational database.
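A small sketch of that mapping-table pattern, using Python's built-in sqlite3 as a stand-in for MySQL (the query shape is the point, not the engine):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE user_boards (user_id INTEGER, board_id INTEGER);
    CREATE TABLE board_pins  (board_id INTEGER, pin_id INTEGER);
    CREATE TABLE pins        (pin_id INTEGER PRIMARY KEY, caption TEXT);
    INSERT INTO user_boards VALUES (1, 10);
    INSERT INTO board_pins  VALUES (10, 100), (10, 101);
    INSERT INTO pins VALUES (100, 'sunset'), (101, 'recipe');
""")

def pins_for_user(user_id):
    """Three simple lookups against mapping tables instead of one
    three-way join (boards and pin_ids are non-empty in this example)."""
    boards = [b for (b,) in db.execute(
        "SELECT board_id FROM user_boards WHERE user_id = ?", (user_id,))]
    pin_ids = [p for (p,) in db.execute(
        f"SELECT pin_id FROM board_pins WHERE board_id IN ({','.join('?' * len(boards))})",
        boards)]
    return [c for (c,) in db.execute(
        f"SELECT caption FROM pins WHERE pin_id IN ({','.join('?' * len(pin_ids))})",
        pin_ids)]

print(sorted(pins_for_user(1)))   # ['recipe', 'sunset']
```

Each step is a cheap indexed lookup, which is what makes the pattern shard-friendly: every query touches one table, and therefore can be sent to one shard.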

Pinterest does use Memcache and Redis in addition to MySQL. As with
craigslist, they find that most queries can be handled by Memcache.
And the actual images are stored in S3, an interesting choice for a
site that is already enormous.

It seems to me that the data and application design behind Pinterest
would have made it a good candidate for a non-ACID datastore. They
chose to stick with MySQL, but like organizations that use NoSQL
solutions, they relinquished key aspects of the relational way of
doing things. They made calculated trade-offs that worked for their
particular needs.

My take-away from these two fascinating and well-attended talks was
that you must understand your application, its scaling and
performance needs, and its data structure, to know what you can
sacrifice and what solution gives you your sweet spot. craigslist
solved its problem through the very precise application of different
tools, each with particular jobs that fulfilled craigslist's
requirements. Pinterest made its own calculations and found an
entirely different solution depending on some clever hand-coding
instead of off-the-shelf tools.

Current and future MySQL

The conference keynotes surveyed the state of MySQL and some
predictions about where it will go.

Conference co-chair Sarah Novotny at the keynotes.

The world of MySQL is much more complicated than it was a couple years
ago, before Percona got heavily into the work of releasing patches to
InnoDB, before they created entirely new pieces of software, and
before Monty started MariaDB with the express goal of making a better
MySQL than MySQL. You can now choose among Oracle's official MySQL
releases, Percona's supported version, and MariaDB's supported
version. Because these are all open source, a major user such as
Facebook can even apply patches to get the newest features.

Nor are these different versions true forks, because Percona and
MariaDB create their enhancements as patches that they pass back to
Oracle, and Oracle is happy to include many of them in a later
release. I haven't even touched on the commercial ecosystem around
MySQL, which I'll look at later in this article.

In his keynote, Percona founder Peter Zaitsev praised the latest MySQL
release by Oracle. With graceful balance he expressed pleasure that
the features most users need are in the open (community) edition, but
allowed that the proprietary extensions are useful too. In short, he
declared that MySQL is less buggy and has more features than ever.

The former
CEO of MySQL AB, Mårten Mickos, also found that MySQL is
doing well under Oracle's wing. He just chastised Oracle for failing
to work as well as it should with potential partners (by which I
assume he meant Percona and MariaDB). He lauded their community
managers but said the rest of the company should support them more.

Keynote by Mårten Mickos.

Brian Aker presented an OpenStack MySQL service developed by his current
employer, Hewlett-Packard. His keynote retold the story that had led
over the years to his developing Drizzle (a true fork of MySQL
that tries to return it to its lightweight, Web-friendly roots) and
eventually working on cloud computing for HP. He described modularity,
effective use of multiple cores, and cloud deployment as the future of MySQL.

A panel on the second day of the conference brought together high-level
managers from many of the companies that have entered the MySQL space
from a variety of directions in a high-level discussion of the
database engine's future. Like most panels, the conversation ranged
over a variety of topics--NoSQL, modular architecture, cloud
computing--but hit some depth only on the topic of security, which was
not represented very strongly at the conference and was discussed here
at the insistence of Slavik Markovich from McAfee.

Keynote by Brian Aker.

Many of the conference sessions disappointed me, being either very
high level (although presumably useful to people who are really new to
various topics, such as Hadoop or flash memory) or unvarnished
marketing pitches. I may have judged the latter too harshly though,
because a decent number of attendees came, and stayed to the end, and
crowded around the speakers for information.

Two talks, though, were so fast-paced and loaded with detail that I
couldn't possibly keep my typing up with the speaker.

One such talk was the keynote
by Mark Callaghan of Facebook. (Like the other keynotes, it should be
posted online soon.) A smattering of points from it:

  • Percona and MariaDB are adding critical features that make replication
    and InnoDB work better.

  • When a logical backup runs, it is responsible for 50% of IOPS.

  • Defragmenting InnoDB improves compression.

  • Resharding is not worthwhile for a large, busy site (an insight also
    discovered by Pinterest, as I reported earlier).

The other fact-filled talk was by
Yoshinori Matsunobu of Facebook, and concerned how to achieve
NoSQL-like speeds while sticking with MySQL and InnoDB. Much of the
talk discussed an InnoDB memcached plugin, which unfortunately is
still in the "lab" or "pre-alpha" stage. But he also suggested some
other ways to better performance, some involving Memcache and others
more round-about:

  • Coding directly with the storage engine API, which is storage-engine
    specific.
  • Using HandlerSocket, which queues write requests and performs them
    through a single thread, avoiding costly fsync() calls. This can
    achieve 30,000 writes per second, robustly.
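The queuing idea behind HandlerSocket, many producers feeding one writer thread that commits per batch rather than per statement, can be sketched in pure Python. This only illustrates the pattern; the real plugin runs inside MySQL:

```python
import queue
import threading

# Many producers enqueue writes; one writer thread drains the queue and
# commits each drained batch once, instead of syncing per statement.
writes = queue.Queue()
applied = []            # stand-in for rows written to the table
commits = 0             # one per batch, standing in for fsync() calls

def writer():
    global commits
    done = False
    while not done:
        batch = [writes.get()]            # block until at least one item
        while not writes.empty():         # then drain whatever has piled up
            batch.append(writes.get())
        if batch[-1] is None:             # shutdown sentinel, always enqueued last
            done = True
            batch.pop()
        if batch:
            applied.extend(batch)
            commits += 1                  # a single commit covers the batch

t = threading.Thread(target=writer)
t.start()
for i in range(10000):
    writes.put(("user", i))
writes.put(None)
t.join()
print(f"{len(applied)} writes in {commits} commits")
```

The number of commits varies with timing, but it is always far smaller than the number of writes, which is where the fsync() savings come from.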

Matsunobu claimed that many optimizations are available within MySQL
because a lot of data can fit in main memory. For instance, if you
have 10 million users and store 400 bytes per user, the entire user
table fits in about 4 GB. Matsunobu's tests have shown that most CPU time
in MySQL is spent in functions that are not essential for processing
data, such as opening and closing a table. Each statement opens a
separate connection, which in turn requires opening and closing the
table again. Furthermore, a lot of data is sent over the wire besides
the specific fields requested by the client. The solutions in the talk
evade all this overhead.

The commercial ecosystem

Both as vendors and as sponsors, a number of companies have always
lent another dimension to the MySQL conference. Some of these really
have nothing to do with MySQL, but offer drop-in replacements for it.
Others really find a niche for MySQL users. Here are a few that I
happened to talk to:

  • Clustrix provides a very
    different architecture for relational data. They handle sharding
    automatically, permitting such success stories as the massive scaling
    up of the social media site Massive Media NV without extra
    administrative work. Clustrix also claims to be more efficient by
    breaking queries into fragments (such as the WHERE clauses of joins)
    and executing them on different nodes, passing around only the data
    produced by each clause.

  • Akiban also offers faster
    execution through a radically different organization of data. They
flatten the tables of a normalized database into a single
    data structure: for instance, a customer and his orders may be located
    sequentially in memory. This seems to me an import of the document
    store model into the relational model. Creating, in effect, an object
    that maps pretty closely to the objects used in the application
    program, Akiban allows common queries to be executed very quickly, and
    could be deployed as an adjunct to a MySQL database.

  • Tokutek produced a drop-in
    replacement for InnoDB. The founders developed a new data structure
    called a fractal tree as a faster alternative to the B-tree structures
    normally used for indexes. The existence of Tokutek vindicates both
    the open source distribution of MySQL and its unique modular design,
    because these allowed Tokutek's founders to do what they do
    best--create a new storage engine--without needing to create a whole
    database engine with the related tools and interfaces it would
    require.
  • Nimbus Data Systems creates a
    flash-based hardware appliance that can serve as a NAS or SAN to
    support MySQL. They support a large number of standard data transfer
    protocols, such as InfiniBand, and provide such optimizations as
    caching writes in DRAM and making sure they write complete 64KB blocks
    to flash, thus speeding up transfers as well as preserving the life of
    the flash.

Post-conference events

A low-key developer's day followed Percona Live on Friday. I talked to
people in the Drizzle and
Sphinx tracks.

Because Drizzle is a relatively young project, the talks were aimed mostly at
developers interested in contributing. I heard talks about their
kewpie test framework and about build and release conventions. But in
keeping with its goal to make database use easy and light-weight, the
project has added some cool features.

Thanks to a JSON interface
and a built-in web server, Drizzle now presents you with a
Web interface for entering SQL commands. The Web interface translates
Drizzle's output to simple HTML tables for display, but you can also
capture the JSON directly, making programmatic access to Drizzle
easier. A developer explained to me that you can also store JSON
directly in Drizzle; it is simply stored as a single text column and
the JSON fields can be queried directly. This reminded me of an XQuery
interface added to some database years ago. There too, the XML was
simply stored as a text field and a new interface was added to run the
XQuery selects.
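Python's bundled sqlite3 can illustrate the same shape, assuming a build with the JSON1 functions: the document lives in an ordinary text column, and a JSON function reaches into its fields at query time, much like the Drizzle feature described above:

```python
import json
import sqlite3

# The document is stored as plain text in a single column; json_extract
# pulls fields out of it at query time (requires SQLite's JSON1 functions,
# which ship enabled in most modern builds).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
db.execute("INSERT INTO docs (body) VALUES (?)",
           (json.dumps({"engine": "Drizzle", "interface": "web"}),))
row = db.execute("SELECT json_extract(body, '$.engine') FROM docs").fetchone()
print(row[0])   # Drizzle
```

As with the XQuery precedent the author mentions, the storage layer stays a dumb text field; only the query interface knows the text is structured.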

Sphinx, in contrast to Drizzle, is a mature product with commercial
support and (as mentioned earlier in the article) production
deployments at places such as craigslist, as well as an O'Reilly
book. I understood better, after attending today's sessions, what
makes Sphinx appealing. Its quality is unusually high, due to the use
of sophisticated ranking algorithms from the research literature. The
team is looking at recent research to incorporate even better
algorithms. It is also fast and scales well. Finally, integration with
MySQL is very clean, so it's easy to issue queries to Sphinx and pick
up results.

Recent enhancements include an add-on called fSphinx
to make faceted searches faster (through caching) and easier, and
access to Bayesian Sets to find "items similar to this one." In Sphinx
itself, the team is working to add high availability, include a new
morphology (stemming, etc.) engine that handles German, improve
compression, and make other enhancements.

The day ended with a reception and copious glasses of Monty Widenius's
notorious licorice-flavored vodka, an ending that distinguishes the
MySQL conference from others for all time.

December 20 2011

Four short links: 20 December 2011

  1. How Twitter Stores 250M Tweets a Day Using MySQL (High Scalability) -- notes from a talk at the MySQL conference on how Twitter built a high-volume MySQL store.
  2. How The Atlantic Got Profitable With Digital First (Mashable) -- Lauf says his team has focused on putting together premium advertising experiences that span print, digital, events and (increasingly) mobile.
  3. Data Mining Without Prejudice -- an attempt to measure fit without pre-favouring one type of curve over another.
  4. It Is No Longer OK Not To Know How Congress Works (Clay Johnson) -- looking for a specific innovation to try and change the way Washington works by the time Congress votes on SOPA is about as foolish as Steve Jobs trying to diet his way out of having pancreatic cancer.

August 03 2011

Developer Week in Review: Lion drops pre-installed MySQL

A busy week at Casa Turner, as the infamous Home Renovations of Doom wrap up, I finish the final chapters of "Developing Enterprise iOS Applications" (buy a copy for all your friends, it's a real page turner!), pack for two weeks of vacation with the family in California (Palm Springs in August, 120 degrees, woohoo!), and celebrate both a birthday and an anniversary.

But never fear, WIR fans, I'll continue to supply the news, even as my MacBook melts in the sun and the buzzards start to circle overhead.

The law of unintended consequences

If you decide to install Lion Server, you may notice something missing from the included software: MySQL. Previous releases of OS X Server offered pre-installed MySQL command-line and GUI tools, but they are AWOL from Lion. Instead, the geek-loved but less widely used Postgres database is installed.

It seems pretty obvious to the casual observer why Apple would make this move. With Oracle suing Google over Java, and Oracle's open source philosophy in doubt, I know I wouldn't want to stake my bottom line on an Oracle package bundled with my premier operating system. Apple could have used one of the non-Oracle forks of MySQL, but it appears they decided to skirt the issue entirely by going with Postgres, which has a clear history of non-litigiousness.

Meanwhile, Oracle had better be asking themselves if they can afford to play the games they've been playing without alienating their market base.

South Korea fines Apple 3 million won, which works out to ...

Apple has been hit with a penalty from the South Korean government as a result of the iPhone location-tracking story that broke earlier this year. Now, Apple may have more money than the U.S. Treasury sitting in petty cash right now, but it will be difficult for them to recover from such a significant hit to their bottom line: a whopping 3 million won, which works out to a staggering ... um ... $2,830. Never mind.


Java 7 and the risks of X.0 software

Java 7 was recently released to the world with great fanfare and to-do. This week, we got a reminder why using an X.0 version of software is a risky endeavor. It turns out that the optimized compiler is really a pessimized compiler, and that programs compiled with it stand a chance of crashing. Even better, there's a chance they'll just go off and do the wrong thing.

Java 7 seems to be breaking new ground in non-deterministic programming, which will be very helpful for physics researchers working with the Heisenberg uncertainty principle. What could be more appropriate for simulating the random behavior of particles than a randomly behaving compiler?

Got news?

Please send tips and leads here.


April 15 2011

Wrap-up of 2011 MySQL Conference

Two themes: mix your relational database with less formal solutions and move to the cloud. Those were the messages coming from O'Reilly's MySQL conference this week. Naturally it included many other talks of a more immediate practical nature: data warehousing and business intelligence, performance (both in MySQL configuration and in the environment, which includes the changes caused by replacing disks with Flash), how to scale up, and new features in both MySQL and its children. But everyone seemed to agree that MySQL does not stand alone.

The world of databases has changed both in scale and in use. As Baron Schwartz said in his broad-vision keynote, databases are starting to need to handle petabytes. And he criticized open source database options as having poorer performance than proprietary ones. As for use, databases struggle to meet two types of requirements: requests from business users for expeditious reports on new relationships, and data mining that traverses relatively unstructured data such as friend relationships, comments on web pages, and network traffic.

Some speakers introduced NoSQL with a bit of sarcasm, as if they had to provide an interface to HBase or MongoDB as a check-off item. At the other extreme, in his keynote, Brian Aker summed up his philosophy about Drizzle by saying, "We are not an island unto ourselves; we are infrastructure."

Judging from the informal audience polls, most of the audience had not explored NoSQL or the cloud yet. Most of the speakers about these technologies offered a mix of basic introductory material and useful practical information to meet the needs of their audience, who came, listened, and asked questions. I heard more give and take during the talks about traditional MySQL topics, because the audience seemed well versed in them.

The analysts and experts are telling us we can save money and improve scalability using EC2-style cloud solutions, and adopt new techniques to achieve the familiar goals of reliability and fast response time. I think a more subtle challenge of the cloud was barely mentioned: it encourages a kind of replication that fuzzes the notion of consistency and runs against the ideal of a unique instance for each item of data. Of course, everyone uses replication for production relational databases anyway, both to avert disaster and for load balancing, so the ideal has been blurred for a long time. As we explore the potential of cloud systems as content delivery networks, they blur the single-instance ideal. Sarah Novotny, while warning about the risks of replication, gave a talk about some practical considerations in making it work, such as tolerating inconsistency for data about sessions.

What about NoSQL solutions, which have co-existed with relational databases for decades? Everybody knows about key-value stores, and Memcache has always partnered with MySQL to serve data quickly. I had a talk with Roger Magoulas, an O'Reilly researcher, about some of the things you sacrifice if you use a NoSQL solution instead of a relational database and why that might be OK.

Redundancy and consistency

Instead of storing an attribute such as "supplier" or "model number" in a separate table, most NoSQL solutions make it a part of the record for each individual member of the database. The increased disk space or RAM required becomes irrelevant in an age when those resources are so cheap and abundant. What's more significant is that a programmer can store any supplier or model number she wants, instead of having to select from a fixed set enforced by foreign key constraints. This can introduce inconsistencies and errors, but practical database experts have known for a long time that perfect accuracy is a chimera (see the work of Jeff Jonas) and modern data analytics work around the noise. When you're looking for statistical trends, whether in ocean samples or customer preferences, you don't care whether 800 of your 8 million records have corrupt data in the field you're aggregating.
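A toy illustration of that tolerance: with the supplier and price stored inline in every record, a handful of corrupt rows are simply skipped and barely affect the aggregate:

```python
# Denormalized records with the attribute stored inline; a few corrupt
# rows don't move the statistical result, which is the trade-off
# described above.
records = [{"supplier": "acme", "price": 10.0} for _ in range(9990)]
records += [{"supplier": "acme", "price": None} for _ in range(10)]  # corrupt

valid = [r["price"] for r in records if isinstance(r["price"], float)]
print(len(valid), sum(valid) / len(valid))   # 9990 rows still average 10.0
```

Ten bad rows out of ten thousand change nothing about the trend, which is why analytics workloads can shrug off the consistency that foreign keys would otherwise enforce.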

Decentralizing decisions and control

A relational database potentially gives the DBA most of the power: it is the DBA that creates the schema, defines stored procedures and triggers, and detects and repairs inconsistencies. Modern databases such as MySQL have already blurred the formerly rigid boundaries between DBA and application programmer. In the early days, the programmer had to do a lot of the work that was reserved for DBAs in traditional databases. NoSQL clearly undermines the control freaks even more. As I've already said, enforcing consistency is not as important nowadays as it once seemed, and modern programming languages offer other ways (such as enums and sets) to prevent errors.


I think this is still the big advantage of relational databases. Their complicated schemas and join semantics allow data mining and extended uses that evolve over the years. Many NoSQL databases are designed around the particular needs of an organization at a particular time, and require records to be ordered in the way you want to access them. And I think this is why, as discussed in some of the sessions at this conference, many people start with their raw data in some NoSQL data store and leave the results of their processing in a relational database.

The mood this week was business-like. I've attended conferences held by emerging communities that crackle with the excitement of building something new that no one can quite anticipate; the MySQL conference wasn't like that. The attendees have a job to do and an ROI to prove; they wanted to learn whatever would help them do that.

But the relational model still reflects most of the data handling needs we have, so MySQL will stick around. This may actually be the best environment it has ever enjoyed. Oracle still devotes a crack programming team (which includes several O'Reilly authors) to meeting corporate needs through performance improvements and simple tools. Monty Program has forked off MariaDB and Percona has popularized XtraDB, all the while contributing new features under the GPL that any implementation can use. Drizzle strips MySQL down to a core while making old goals such as multi-master replication feasible. A host of companies in layered applications such as business intelligence cluster around MySQL.

MySQL spawns its own alternative access modes, such as the HandlerSocket plugin that returns data quickly to simple queries while leaving the full relational power of the database in place for other uses. And vendors continue to find intriguing alternatives such as the Xeround cloud service that automates fail-over, scaling, and sharding while preserving MySQL semantics. I don't think any DBA's skills will become obsolete.

April 05 2011

Brian Aker explains Memcached

Memcached is one of the technologies that holds the modern Internet together, but do you know what it actually does? Brian Aker has certainly earned the title of Memcached guru, and below he offers a peek under the hood. He'll also provide a deeper dive into Memcached in a tutorial at the upcoming 2011 MySQL Conference.

What problem is Memcached meant to solve?

Brian Aker: In an operation like a database or an application server, it's very expensive to build an object or get a result set. And because of that overhead cost, what you really want to be able to do is to cache that result set. So, for instance, suppose you have 10 clients that are going to all be making the same request, and that request is fairly expensive. By placing the results inside of Memcached, all you are doing is requesting the answer and not actually requesting the complete build of an object all over again, whatever that object may be.
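The pattern Aker describes is usually called cache-aside. A minimal sketch, with a plain dict standing in for a real Memcached client (real clients such as python-memcached expose very similar get/set calls, plus an expiry time):

```python
cache = {}  # stand-in for a Memcached client

def expensive_build(key):
    # Imagine a costly query or object construction here.
    return {"id": key, "data": "result for %s" % key}

def get_result(key):
    hit = cache.get(key)
    if hit is not None:
        return hit            # cheap path: the answer is already cached
    result = expensive_build(key)
    cache[key] = result       # a real client would also pass an expiry
    return result

first = get_result(42)   # builds the object and caches it
second = get_result(42)  # served straight from the cache
```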

What are the first steps someone should take to integrate Memcached into their applications?

Brian Aker: The first thing to do is to think about how you develop the application in the first place. If you can think of the application as being event-driven, you're way ahead of the pack when you go and design applications. So, for instance, if you were going to design something like Twitter, what you would first ask is not how do I access the database but how does my application react to a given request? That really gets into the core of how to make a difference when designing with any of these types of applications.

People will ask me, "Should I build everything with Memcached the first day?" And my usual answer is, "No, don't." There's an awful lot of applications out there where you'll never actually need to use anything like Memcached at all. And really, you should spend time designing and working on your actual application. That's where you tend to add value. But do it in the same way so that you can eventually scale it, which really means thinking about how your interfaces are structured. If my interface is set up in a way that I can make a request and eventually insert my caching layer right there when I need it, then it's a complete win.
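One way to read that advice: route all data access through one narrow interface from day one, so a cache can slot in later without touching any callers. A hedged sketch (class and key names are invented):

```python
class Store:
    """Day-one implementation: goes straight to the backing store."""
    def __init__(self, backend):
        self.backend = backend

    def get(self, key):
        return self.backend[key]

class CachedStore(Store):
    """Dropped in later: same interface, now with a caching layer."""
    def __init__(self, backend):
        super().__init__(backend)
        self.cache = {}

    def get(self, key):
        if key not in self.cache:
            self.cache[key] = self.backend[key]
        return self.cache[key]

backend = {"user:1": "alice"}
store = CachedStore(backend)  # callers never change; only the wiring does
```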

O'Reilly MySQL Conference & Expo, being held April 11-14, will spread the latest and best knowledge on MySQL and related technologies to the global open source community.

Save 25% on registration with the code MYS11RAD

Where does Memcached live in a corporate infrastructure?

Brian Aker: Usually, in smaller shops, people will put Memcached directly on the web server itself. The advantage then is that the local look-up cost overhead is really quite low. As they grow, those shops will then move it off of that local web server and out to dedicated hardware. In a lot of cases, where we really see it shine is when people start using it and spreading it across multiple servers.

Memcached has modern interfaces to pretty much any programming language. We have C, C++ bindings. We have Java bindings. We have Ruby, Perl, Python. You name it.

As far as the server goes, traditionally, we see it mostly run on Linux. Obviously it will run on OS X. There is a Windows port available, though it may not necessarily be the latest and greatest. But essentially whatever platform you're on, it's there. I even have a very torn-down one that will run on Arduino. We can get it down to pretty small devices.

What is the most common Memcached mistake?

Brian Aker: People will think of Memcached as a database — they'll think that data actually persists in it. That's incorrect. Trying to build it so that it persists is not really a good plan. There are a couple of Memcached startups that have built persistent products, so you can look at one of those, but really, that's not the point to it.

A question I'll often hear is, "How do I dump the data back out of Memcached?" If you're asking that question it usually means you're not thinking about Memcached correctly.


March 31 2011

Improving healthcare in Zambia with CouchDB

A new healthcare project in Zambia is trying to integrate supervisors, clinics, and community healthcare workers (CHWs) into a system that can improve patient service and provide more data about the effectiveness of care. Because of the technical challenges in an extreme rural setting, unique solutions are required. According to Cory Zue, chief technology officer of Dimagi, CouchDB went a long way toward keeping a consistent set of records under extreme circumstances. The full story will be laid out in Zue's talk at the upcoming MySQL conference, but here's a sneak peek.

You're involved with a rural healthcare project in Africa. Can you talk a bit about it, and how CouchDB is being used?

Cory Zue: We chose to use CouchDB for very specific reasons for our project, which had to do with it being very good at replicating itself. The project is an effort to deploy health record and data collection systems to extremely remote, rural clinics in Zambia. Working in that environment, we're facing a lot of really challenging technical limitations. If you've only worked in America or in Europe, you don't necessarily run across these types of issues.

We've got computers at clinics that are maintaining patient records. That data needs to sync to cell phones and to a central server, but around 35% of the clinics don't have power, so we're installing solar panels. So one limitation is that the system has to work on low resources.

None of these clinics have Internet out of the box, so most of the time our only Internet connection is through a GSM modem that connects over the local cell network. It's very hard to move data in that environment, and you can't do anything that relies on an always-on Internet connection with a web app that is always accessing data remotely.

CouchDB was a really good option for us because we could install a Couch database at each clinic site, and then that way all the clinic operations would be local. There would be no Internet use in terms of going out and getting the patient records, or entering data at the clinic site. Couch has a replication engine that lets you synchronize databases — both pull replication and push replication — so we have a star network of databases with one central server in the middle and all of these satellite clinic servers that are connecting through that cell network whenever they're able to get on, and sending the data back and forth. That way we're able to get data in and out of these really remote, rural areas without having to write our own synchronization protocols and network stack.
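CouchDB's replication is driven by small JSON documents naming a source and a target, so the star topology Zue describes amounts to a push document and a pull document per clinic. A sketch (host names here are invented; each document would be POSTed to the local CouchDB's `/_replicate` endpoint):

```python
import json

# Hypothetical central server; each clinic runs its own local CouchDB.
CENTRAL = "https://central.example.org/clinic_records"

def replication_docs(local_db):
    """Build the push and pull replication requests for one clinic."""
    return [
        {"source": local_db, "target": CENTRAL, "continuous": True},  # push out
        {"source": CENTRAL, "target": local_db, "continuous": True},  # pull in
    ]

docs = replication_docs("http://localhost:5984/clinic_records")
payloads = [json.dumps(d) for d in docs]  # bodies for POST /_replicate
```

When the GSM link drops, replication simply resumes from where it left off the next time the clinic gets online.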


It could be argued that the same money could have been spent on things like medicine and staff. What makes this project cost effective?

Cory Zue: That's something that we are constantly thinking about in my organization. In this case, the goal of the entire project is to prove the benefit. This is actually part of a five-year research study that's looking at improving primary care through better oversight and through community health worker integration with the health system.

What we're looking at are two arms: One is that when the data gets to a central place, people like district supervisors can have performance indicator reports that give them a sense for how well the clinic is doing, how well they're following protocols in terms of things such as taking vitals at every visit. They get a report and then they can come and talk to the clinic about what they're doing right and what they're doing wrong.

The other arm that is more interesting, and I think the arm that has more of an impact, is integrating the community health workers with the primary care system. In many of these places, the community health worker is the first line of defense between a person getting sick and what they do about it. The system creates cases or follow ups, from visits to the clinic, that tell the community health worker: "This person had a problem. You should go follow up with them in five days and make sure they're okay."

By sending data through the cell network and then back down to applications that are running on the phones at the community health centers, community health workers are able to find out about complications and people they need to check on. The entire workflow of patient tracing, from clinic to community and then back to the clinic, can be much more complete.

How has the project been going so far?

Cory Zue: This project kicked off in September and it will be going on through the course of the next five-plus years. Hopefully we'll find that the system was a smashing success and it will continue to live on or evolve past that time.

Anecdotally, the thing that we've heard so far is about the visibility of the data in terms of, for the people who oversee these clinics, just having any idea of what's happening on the ground. We haven't gotten a lot of feedback from the community health workers yet, and so as we continue to grow out we're really looking to focus on getting testimonials from the actual CHWs and the patients that are affected.

This interview was edited and condensed.

March 30 2011

Developer Week in Review

This is your Developer Week in Review, I'm Casey Kasem. Our first letter comes from a software developer in New England who writes, "Dear Casey. My wife just got accepted into the Experimental Psych doctoral program at UNH, and I'd like you to play something appropriate for the occasion." Well, going out especially for you, here's "I'll be Proofreading Your Papers for the Next Five Years, 'Cause I'll Never Split (Our Infinitive)" (Seriously, congratulations Bonnie!)

And you thought that Justin Bieber tickets were hard to score ...

What's the matter, pal? You say you had your heart set on going to Google I/O, but the tickets sold out in 59 minutes? Well, cheer up, because tickets went on sale this week for the Apple WWDC, and — oh, wait, those sold out in 8 hours...

If you heard the sound of keys frantically typing on Monday, it was developers all across America e-mailing their managers for authorization to buy the $1,600 Golden Tickets to Steve's Magic Chocolate Factory. Unfortunately, for those with glacial purchasing infrastructures, it took less than half a day for the doors to open and shut on this year's WWDC, to be held as widely expected on the week of June 6th at the beautiful (if you squint a lot) Moscone Center in the City by the Bay.

I have little sympathy for those griping that they didn't have time to get approval from their corporate overlords, since we've known for nearly half-a-year when WWDC was likely to be held and how much it was probably going to be. I got approval last December to go. Remember, a lack of planning on your part does not constitute a mistake on Apple's part.

The good news is that video of all the WWDC sessions gets posted to the developer's portal soon after the conference ends, so you can get a lot of the value of being there, without being there. In the meantime, you can start getting the purchasing wheels turning for the Microsoft PDC, which is evidently going to be in Seattle in the fall.

Google adds another legend to their menagerie

It seems like Google collects another famous name in programming every month. There must be a mail-order club for it. This week, they added James Gosling, the father of Java.

There's all sorts of interesting dynamics at work here. For one, given the current fireworks about the Android JVM, adding one of the patent holders for Java to the Google side of the fight can't help but stir up the mud even more, since he'll now essentially be suing himself. It also is another sign of the leakage of talent from Oracle to everywhere else since the database giant acquired Sun's portfolio of products, although Gosling left Oracle several months ago.

With the acquisition of Gosling, Google is now just 3 geniuses away from total world domination.

"And I think it would be ironic if we were all made of iron!"

Alanis Morissette may not know what irony is, but MySQL.com provided a fine example this week, as it fell victim to a — wait for it — SQL injection attack. In addition to user and staff credentials for that website (and its worldwide clones), reports say that Sun.com (RIP) credentials were also raided.

As I sit in my office (well, my cube ...), with one of my favorite XKCD strips framed on the wall next to me, I can't help but marvel that in this day and age, people are still neglecting to sanitize their database inputs. And a database company has absolutely no excuse! Following the news that credit card data from a chain of Boston restaurants (including one I ate at) was stolen last year, and the company didn't find it necessary to notify anyone that their information might be in the wind, it's been just a fine and dandy week for the computer security industry.
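The standard fix has been around far longer than this breach: parameterized queries, which keep user input out of the SQL text entirely. A sketch using Python's built-in sqlite3 module (the same placeholder idea applies to any MySQL driver, though the placeholder syntax varies):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user(name):
    # The driver binds `name` as data, never as SQL, so a classic payload
    # like "' OR '1'='1" is just a strange username, not an injection.
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (name,))
    return cur.fetchall()

safe = find_user("alice")
attack = find_user("' OR '1'='1")  # matches nothing
```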

Got news?

If you have a request for that special someone, or some juicy news, please send tips or leads here.


March 23 2011

Developer Week in Review

Netflix went down over three hours ago, and everyone is on edge here. My son just started reciting the script to "Monty Python and the Holy Grail" in an attempt to keep our courage up. This may be the last thing I ever write, so — Oh, never mind, it's back up again ... Crisis averted, and on to this week's developer news.

We have an App Store Appstore for that!

Amazon this week unleashed their own Appstore for Android devices. Apple took umbrage at the use of the (evidently trademarked) term "App Store" and fired a salvo of lawyers across Amazon's bow. Amazon responded by essentially saying "Hey, app store is a generic term." To which Apple responded "So is windows, and Microsoft gets to protect that ..."

Given that Apple gave Amazon multiple warnings not to use the App Store tag, it's unclear why Amazon decided to be the one application store to challenge Apple for use of it. And unlike nebulous patent claims, trademark claims are pretty straightforward. Amazon is going to have a hard time arguing that their use of the term "Appstore" is sufficiently different from Apple's App Store, and that there couldn't be any confusion. But however things fall out in the end, the only winners, of course, will be the lawyers.

A moment of silence for Sun.com, please

An old and venerable friend is passing away, and we should all take a moment to reflect on the long and fruitful life that it led. Oracle has announced that they will be decommissioning Sun.com, the 12th oldest domain name on the Internet. For those of us whose fingers can type "sun.com" automatically, it will be a jarring change.

I first encountered Sun way back in 1985 when Xerox AI Systems decided to port their Interlisp workstations to run on Sun hardware. The Sun 4/110 on my desktop was (to me at the time) the coolest thing on the planet. For a while, it seemed like Sun (along with Cisco) was the Internet. Now the last major vestige of Sun, their domain, will be sold off like cattle. Excuse me, I think I have something in my eye ...


Bake for 20 minutes, and Drizzle liberally with SQL

One of the side-effects of Oracle acquiring Sun is that MySQL was thrown into somewhat of a turmoil. Forked versions seemed to be cropping up everywhere. Now the first of the baby-MySQLs is getting on the bus and heading off to school, as Drizzle has announced their general availability release.

For individuals and companies looking for an alternative to Oracle's MySQL offering, Drizzle offers an API-compatible replacement that aims to be a leaner, meaner database optimized for high-performance operations. For those wanting a more traditional MySQL experience without the Oracle baggage, MariaDB is still out there, of course.

Got news?

If you have some juicy news, please send tips or leads here.


December 20 2010

Strata Gems: Turn MySQL into blazing fast NoSQL

We're publishing a new Strata Gem each day all the way through to December 24. Yesterday's Gem: What your inbox knows.

The trend for NoSQL stores such as memcache for fast key-value storage should give us pause for thought: what have regular database vendors been doing all this time? An important new project, HandlerSocket, seeks to leverage MySQL's raw speed for key-value storage.

NoSQL databases offer fast key-value storage for use in backing web applications, but years of work on regular relational databases has hardly ignored performance. The main performance hit with regular databases is in interpreting queries.

HandlerSocket is a MySQL server plugin that interfaces directly with the InnoDB storage engine. Yoshinori Matsunobu, one of HandlerSocket's creators at Japanese internet and gaming company DeNA, reports over 750,000 queries per second on commodity server hardware, compared with 420,000 using memcache and 105,000 using regular SQL access to MySQL. Furthermore, since the underlying InnoDB storage is used, HandlerSocket offers a NoSQL-type interface that doesn't have to trade away ACID compliance.

With the additional benefits of being able to use the mature MySQL tool ecosystem for monitoring, replication and administration, HandlerSocket presents a compelling case for using a single database system. As the HandlerSocket protocol can be used on the same database and tables used for regular SQL access, the problems of inconsistency and replication created by multiple tiers of databases can be mitigated.
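HandlerSocket gets its speed by replacing SQL parsing with a terse, tab-separated line protocol spoken over a plain socket. A rough sketch of building such requests (based on the published protocol; consult the HandlerSocket documentation for the authoritative grammar, and note the database, table, and column names here are invented):

```python
def open_index(index_id, db, table, index, columns):
    # 'P' opens a named index on a table and binds it to index_id
    # for later read/write requests on the same connection.
    return "\t".join(["P", str(index_id), db, table, index,
                      ",".join(columns)]) + "\n"

def find(index_id, op, values, limit=1, offset=0):
    # Reads rows through the previously opened index; no SQL is parsed.
    parts = ([str(index_id), op, str(len(values))] + list(values)
             + [str(limit), str(offset)])
    return "\t".join(parts) + "\n"

open_req = open_index(0, "test", "user", "PRIMARY", ["user_id", "name"])
find_req = find(0, "=", ["42"])  # roughly: SELECT ... WHERE user_id = 42
```

Stripping query interpretation down to string splitting is essentially the whole performance story.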

HandlerSocket has now been integrated into Percona's XtraDB, an enhanced version of the InnoDB storage engine for MySQL. You can also compile and install HandlerSocket yourself alongside MySQL.

December 16 2010

Strata Gems: Who needs disks anyway?

We're publishing a new Strata Gem each day all the way through to December 24. Yesterday's Gem: Kinect democratizes augmented reality.

Today's databases are designed for the spinning platter of the hard disk. They take into account that the slowest part of reading data is seeking: physically getting the read head to the part of the disk it needs to be in. But the emergence of cost effective solid state drives (SSD) is changing all those assumptions.

Over the course of 2010, systems designers have been realizing the benefits of using SSDs in data centers, with major IT vendors and companies adopting them. Drivers for SSD adoption include lower power consumption and greater physical robustness. The robustness is a key factor when creating container-based modular data centers.

That still leaves the problem of software optimized for spinning disks. Enter RethinkDB, a project to create a storage engine for the MySQL database that is optimized for SSDs.

As well as taking advantage of the extra speed SSDs can offer, RethinkDB also majors on consistency, achieved by using append-only writes. Additionally, they are writing their storage engine with modern web application access patterns in mind: many concurrent reads and few writes.
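The append-only idea buys consistency cheaply: a write never overwrites existing data, so a crash mid-write can only damage the tail, which recovery truncates. A toy illustration of the principle (not RethinkDB's actual on-disk format):

```python
records = []  # stands in for the append-only region of a file

def append(value):
    # Never overwrite in place: each write lands after all previous ones,
    # so earlier data cannot be corrupted by a crash mid-write.
    records.append(value)
    return len(records) - 1  # the offset acts as the record's address

def latest(offsets):
    # The current version of a key is simply the one at the largest offset.
    return records[max(offsets)]

o1 = append({"k": "a", "v": 1})
o2 = append({"k": "a", "v": 2})  # an update is a new record; the old one stays
current = latest([o1, o2])
```

The same property is what makes append-only writes friendly to SSDs, which prefer sequential writes over in-place updates.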

The smartest aspect of what RethinkDB are doing, however, is creating their product as a MySQL storage engine, minimizing barriers to adoption. Currently in rapid development, you can obtain binary only downloads of RethinkDB from their web site. Definitely a project to watch as it matures over the course of the next year.

October 21 2010

Four short links: 21 October 2010

  1. Using MySQL as NoSQL -- 750,000+ qps on a commodity MySQL/InnoDB 5.1 server from remote web clients.
  2. Making an SLR Camera from Scratch -- amazing piece of hardware devotion.
  3. Mac App Store Guidelines -- Apple announce an app store for the Macintosh, similar to its app store for iPhones and iPads. "Mac App" no longer means generic "program"; it has a new and specific meaning: a program that must be installed through the App Store and which has limited functionality (only one can run at a time, it's full-screen, etc.). The list of guidelines for what kinds of programs you can't sell through the App Store is interesting. Many have good reasons to be, but "It creates a store inside itself for selling or distributing other software (i.e., an audio plug-in store in an audio app)" is pure greed. Some are afeared that the next step is to make the App Store the only way to install apps on a Mac, a move that would drive me away. It would be a sad day for Mac-lovers if Microsoft were to be the more open solution than Apple. cf the Owner's Manifesto.
  4. Privacy Aspects of Data Mining -- CFP for an IEEE workshop in December. (via jschneider on Twitter)

September 16 2010

Four short links: 16 September 2010

  1. jsTerm -- ANSI-capable telnet terminal built in HTML5 with JavaScript, WebSocket, and Node.js. (via waxpancake on Twitter)
  2. MySQL EXPLAINer -- visualize the output of the MySQL EXPLAIN command. (via eonarts on Twitter)
  3. Google Code University -- updated with new classes, including C++ and Android app development.
  4. Cloudtop Applications (Anil Dash) -- Anil calling "trend" on multiplatform native apps with cloud storage. Another layer in the Web 2.0 story Tim's been telling for years, with some interesting observations from Anil, such as: Cloudtop apps seem to use completely proprietary APIs, and nobody seems overly troubled by the fact they have purpose-built interfaces.

July 01 2010

Four short links: 1 July 2010

  1. Conflict Minerals and Blood Tech (Joey Devilla) -- electronic components have a human and environmental cost. I remember Saul Griffith asking me, "do you want to kill gorillas or dolphins?" for one component. Now we can add child militias and horrific rape to the list. (via Simon Willison)
  2. Meteor -- an open source HTTP server that serves streaming data feeds (for apps that need Comet-style persistent connections). (via gianouts on Delicious)
  3. Hobby King RC Store -- online source for remote control goodness, as recommended by Dan Shapiro at Foo.
  4. RethinkDB -- MySQL storage engine optimised for SSD drives. See also TechCrunch article.

April 08 2010

Brian Aker on post-Oracle MySQL

Brian Aker parted ways with the mainstream MySQL release, and with Sun Microsystems, when Sun was acquired by Oracle. These days, Aker is working on Drizzle, one of several MySQL offshoot projects. In time for next week's MySQL Conference & Expo, Aker discussed a number of topics with us, including Oracle's motivations for buying Sun and the rise of NoSQL.

The key to the Sun acquisition? Hardware:

Brian Aker: I have my opinions, and they're based on what I see happening in the market. IBM has been moving their P Series systems into datacenter after datacenter, replacing Sun-based hardware. I believe that Oracle saw this and asked themselves "What is the next thing that IBM is going to do?" That's easy. IBM is going to start pushing DB2 and the rest of their software stack into those environments. Now whether or not they'll be successful, I don't know. I suspect once Oracle reflected on their own need for hardware to scale up on, they saw a need to dive into the hardware business. I'm betting that they looked at Apple's margins on hardware, and saw potential in doing the same with Sun's hardware business. I'm sure everything else Sun owned looked nice and scrumptious, but Oracle bought Sun for the hardware.

The relationship between Oracle and the MySQL Community:

BA: I think Oracle is still figuring things out as far as what they've acquired and who they've got. All of the interfacing I've done with them so far has been pretty friendly. In the world of Drizzle, we still make use of the InnoDB plugin, though we are transitioning to the embedded version. Everything there has gone along swimmingly. In the MySQL ecosystem you have MariaDB and the other distributions. They're doing the same things that Ubuntu did for Debian, which is that they're taking something that's there and creating a different sort of product around it. Essentially though, it's still exactly the same product. I think some patches are flowing from MariaDB back into MySQL, or at least I've seen some notice of that. So for the moment it looks like everything's as friendly as it is going to be.

Is NoSQL a fad or the next big thing?

BA: There are the folks who say "just go use gdbm or Berkeley DB." What they don't fundamentally understand is that when you get into a certain data size, you're just going to be dealing with multiple computers. You can't scale up infinitely. Those answers come from an immaturity of understanding that when you get to a certain data size, everything's not going to fit on a single computer. When everything doesn't fit onto a computer, you have to be able to migrate data to multiple nodes. You need some sort of scaling solution there.
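The "migrate data to multiple nodes" problem Aker points at is what sharding addresses. The simplest possible sketch (illustrative only; production systems prefer consistent hashing so nodes can be added without remapping most keys):

```python
import hashlib

NODES = ["db0", "db1", "db2"]  # hypothetical database servers

def node_for(key: str) -> str:
    # A stable hash gives every client the same key-to-node mapping,
    # so no central lookup service is needed.
    digest = hashlib.md5(key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

placement = {k: node_for(k) for k in ("user:1", "user:2", "user:3")}
```

The catch, and the reason consistent hashing exists, is that changing `len(NODES)` in this naive version reshuffles nearly every key.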

With Cassandra, and similar solutions, the only issues that come up are when they don't fit the data's usage pattern. Like for instance with data analytics. There is also still the "I need these predicates across a relational entity." That's the part where the key-value systems obviously fail. They have no knowledge of a relationship between two given items. So what happens then? Well, you can end up doing MapReduce. That's great if you've got an awful lot of computers and you don't really care about when the answer is going to be found. MapReduce works as a solution when your queries are operating over a lot of data; Google sizes of data. Few companies have Google-sized datasets though. The average sites you see, they're 10-20 gigs of data. Moving to a MapReduce solution for 20 gigs of data, or even for a terabyte or two of data, makes no sense. Using MapReduce with NoSQL solutions for small sites? This happens because people don't understand how to pick the right tools.

MySQL and location data:

BA: SQL goes very well with temporal data. SQL does very well with range data. I would say that SQL works very poorly today with location-based data. Is it the best thing out there, though? Probably. I'm still waiting for someone to really spend some time thinking about the location data problem, and come up with a true location store. I don't believe that SQL databases are the solution for tomorrow's location-based data. Location services are going to require something a lot better than what we have today. Because all we have today is a set of cobbled together hacks.

MySQL's future:

BA: There hasn't been a roadmap for MySQL for some time. Even before Sun acquired MySQL, it was languishing, and Sun's handling of MySQL just further eroded the canonical MySQL tree. I'm waiting to see what Oracle announces at the MySQL Conference. I expect Oracle to scrap the current 5.5 plan and come up with a viable roadmap. It won't be very innovative, but I am betting it will be a stable plan that users can look at.

I see a lot of excitement about multiple versions of MySQL. I'm hoping to see this push innovation as the different distributions differentiate themselves. I believe that the different MySQL distributions will all become forks eventually.

In the Drizzle world, the excitement is in the different sorts of plugins that have been written, and the opportunity for more. There has been a bunch of work around the replication system, and how it integrates with other systems. We have plugins now that allow Drizzle to replicate into things like RabbitMQ, Cassandra, Gearman, Voldemort, Memcached and other database systems. Having a replication system that was designed from day one to be pluggable is a game changer for some enterprises. Drizzle's future? Everything is open source, and we will see where the community wants to take it.

I would like to see more focus on data bus architectures, i.e. geographical replication. In the past, replication was a lot about how to scale out. That's dead and gone. Anybody who's doing scale-out with replication is creating a future headache for themselves. What I'd like to actually see is more attention to how we pass data between datacenters. I would also like to see more work done on shared-nothing storage systems. There's been a few attempts at that with MySQL, but thus far, the attempts have been failures. The reasons for this? Poor code quality and difficulty of use. I believe we'll see new shared-nothing solutions coming out that will work better than anything that's been written so far.

March 23 2010

Joe Stump on data, APIs, and why location is up for grabs

I recently had a long conversation with Joe Stump, CTO of SimpleGeo, about location, geodata, and the NoSQL movement. Stump, who was formerly lead architect at Digg, had a lot to say. Highlights are posted below. You can find a transcript of the full interview here.

Competition in the geodata industry:

I personally haven't seen anybody that has come out and said, "We're actively indexing millions of points of data. We're also offering storage and we're giving tools to leverage that." I've seen a lot of fragmentation. Where SimpleGeo fits is, I really think, at the crossroads or the nexus of a lot of people that are trying to figure out this space. So ESRI is a perfect example. They have a lot of data. Their stack is enormous. They answer everything from logistics down to POI things, but they haven't figured out the whole cloud, web, infrastructure, turn-key approach. They definitely haven't had to worry about real time. How do you index every single tweet and every single Twitter photo without blowing up? With the data providers, there's been a couple of people that are coming out with APIs and stuff.

I think largely, things are up for grabs. I think one of the issues that I see is as people come out with their location APIs here, like NAVTEQ is coming out with an API, as a developer, in order to do location queries and whatnot, especially on the mobile device, I don't want to have to do five different queries, right? Those round trips could add up a lot when you're on high latency slow networks. So while I think that there's a lot of people that are orbiting the space and I know that there's a lot of people that are going to be coming out with location and geo offerings, a lot of people are still figuring out and trying to work their way into how they're going to use location and how they're going to expose it.

How SimpleGeo stores location data:


The way that we've gone about doing that is we actually have two clusters of databases. We've essentially taken Cassandra, the NoSQL data store that Facebook made, and we've built geospatial features into it. The way that we do that is we actually have an index cluster and then we have a records cluster. The records cluster is very mundane; it's used for getting the actual data. The index cluster, however, we have tackled in a number of different ways. But the general idea is that those difficult computations that you're talking about are done on write rather than read. So we precompute a number of scenarios and add those to different indexes. And then based on the query, we use the most appropriate precomputed index to answer those questions.

We do that in a number of different ways. We do that through very careful key construction based on a number of different algorithms that are developed publicly and a couple that we developed internally. One of the big problems with partitioning location data is that location data has natural density problems. If you partition based on UTM or zip code or country or whatever, servers that are handling New York City are going to be overloaded. And servers that are handling South Dakota are going to be bored out of their minds. We basically have answered that in a couple of different layers. And then we also have some in-process stuff that we do before serving up the request.
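The "very careful key construction" Stump mentions can be illustrated with a Z-order (Morton) key: interleaving the bits of the latitude and longitude grid cells gives nearby points numerically close keys, so a coarse proximity query becomes a key-range scan. This is a stand-in for illustration only, not SimpleGeo's actual (partly internal) algorithm, and the `bits=16` grid resolution is an arbitrary assumption.

```python
def morton_key(lat, lon, bits=16):
    """Interleave lat/lon grid cells into a single sortable key."""
    # Map coordinates onto a 2^bits x 2^bits grid, clamping the edges.
    y = min(int((lat + 90.0) / 180.0 * (1 << bits)), (1 << bits) - 1)
    x = min(int((lon + 180.0) / 360.0 * (1 << bits)), (1 << bits) - 1)
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)      # even bits: longitude
        key |= ((y >> i) & 1) << (2 * i + 1)  # odd bits: latitude
    return key
```

Because high-order bits are shared by spatially close points, points in the same neighborhood sort next to each other in the index, which is what makes a range scan stand in for a proximity search. Note that this scheme alone does not fix the density problem Stump describes; that takes the extra partitioning layers he alludes to.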

The virtues of keeping your data API simple:

We started out with what we considered to be the most basic widely-needed use case for developers, which is simply "my user is here; tell me about points of data that are within a certain radius of where my user is sitting." And we've been slowly growing indexes from there.

Our most basic index is that my user is at this lat/long, tell me about any point that I've told you about previously, within one kilometer. That's a very basic query. We do some precomputing on the write to help with those types of queries. The next level up is that my user was here a week ago and would like to see stuff that was produced or put into the system a week ago. So we've, again, done some precomputing on the indexes to do temporal spatial queries.
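The records/index split and the precompute-on-write radius query can be modeled in a few lines. This is a minimal in-memory sketch, not SimpleGeo's API: writes populate both a record store and per-cell index buckets, so the read path only scans the handful of cells that can intersect the radius. The cell size and the 111 km-per-degree-of-latitude approximation are assumptions for the sketch.

```python
import math
from collections import defaultdict

CELL_DEG = 0.01  # grid cell size in degrees (~1.1 km of latitude; an assumption)

records = {}              # "records cluster": point id -> (lat, lon, payload)
index = defaultdict(set)  # "index cluster": grid cell -> point ids

def cell(lat, lon):
    return (int(lat // CELL_DEG), int(lon // CELL_DEG))

def write(point_id, lat, lon, payload):
    records[point_id] = (lat, lon, payload)
    index[cell(lat, lon)].add(point_id)  # the expensive work happens on write

def haversine_km(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def nearby(lat, lon, radius_km=1.0):
    # Read path: scan only the index cells that can intersect the radius.
    span = math.ceil(radius_km / 111.0 / CELL_DEG)
    cy, cx = cell(lat, lon)
    hits = []
    for dy in range(-span, span + 1):
        for dx in range(-span, span + 1):
            for pid in index.get((cy + dy, cx + dx), ()):
                plat, plon, _ = records[pid]
                if haversine_km(lat, lon, plat, plon) <= radius_km:
                    hits.append(pid)
    return hits
```

Extending this to the temporal-spatial case Stump describes would mean adding the write timestamp to the index key, which is again work done once on write rather than on every read.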

A lot of users, once they see a storage thing in the cloud, they inevitably want to be able to do very open-ended abstract queries like "show me all pieces of data that are in this zip code where height is this and age is greater than this, ordered by some other thing." We push back on that. Basically, we're not a generic database in the cloud. We're there specifically to help you manage your geodata.

Why NoSQL is gaining in popularity:

I think that this is in direct response to the relational database tool chain failing, and failing catastrophically, at performing in large-scale, real-time environments. The simple fact is that a relational database that's spread across 50 servers is not an easy thing to build or manage. There's always some sort of app logic you have to build on top of it. And you have to manage the data and whatnot.

If you go into almost any high performance shop that's working at a pretty decent scale, you'll find that they avoid doing foreign key constraints because they'll slow down writes. They're partitioning their data across multiple servers, which drastically increases their overhead in managing data. So I think really why you're seeing an uptick in NoSQL is because people have tried that tool chain; it's not evolving. It basically falls over in real-time environments.
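The "app logic you have to build on top" typically starts with a routing function like the hypothetical sketch below: the application hashes a key and picks one of N MySQL servers, and every query, migration, and failover plan then has to know this rule. Server names are invented for the example.

```python
import hashlib

# A hypothetical five-server fleet.
SHARDS = ["mysql-db0", "mysql-db1", "mysql-db2", "mysql-db3", "mysql-db4"]

def shard_for(user_id: str) -> str:
    # The application, not the database, decides where each row lives.
    h = int(hashlib.md5(user_id.encode()).hexdigest(), 16)
    return SHARDS[h % len(SHARDS)]
```

The trouble Stump describes follows directly: modulo-style routing means that adding a sixth server, or evacuating an overloaded one, changes where almost every key maps, so you end up writing migration tooling around the routing function as well.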

The role of social networking in the demise of SQL:

I bet if you track the popularity of social, you'll almost identically track the popularity of NoSQL. Web 1.0 was extremely easy to scale because there was a finite amount of content, highly cacheable, highly static. There was just this rush to put all of the brick and mortar stuff online. So scaling, for instance, an online catalog is not that difficult. You've got maybe 50,000 products. Put it in memcached. MySQL can probably handle those queries. You're doing maybe an order every second or something. It's high value and not a lot of volume. And then, of course, during Web 2.0, we had this bright idea to hand content creation over to the masses. If you draw a diagram and one circle is users and one circle is images, scaling out a whole bunch of users and scaling out a whole bunch of pictures is not too difficult, because you run into the same thing where I need to do a primary key look-up and then I need to cache in memcached because people don't edit their user's data that often. And once they upload a photo, they pretty much never change that.

The problem comes when you intersect those social objects. The intersection in that Venn diagram, which is a join in SQL, falls over pretty quickly. You don't need a very big dataset. Even for a fairly small website, MySQL tends to fall over pretty quickly on those big joins. And most of the NoSQL stuff that people are using, they've found that if you're just doing a primary key look up 99 percent of the time on some table, you don't need to use MySQL and worry about managing that, when you can stuff it into Cassandra or some other NoSQL DB because it's basically just a key value store.
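The same pattern in miniature: instead of joining a users table to a photos table at read time, the "join" is precomputed into a per-user list at write time, so reads are pure key look-ups. The table and field names here are invented for illustration.

```python
# Hypothetical key-value "tables", as they might look in a store like Cassandra.
users = {"u1": {"name": "alice"}}
photos = {"p1": {"owner": "u1", "url": "http://example.com/p1.jpg"}}
user_photos = {"u1": ["p1"]}  # the precomputed "join", maintained at write time

def add_photo(user_id, photo_id, url):
    photos[photo_id] = {"owner": user_id, "url": url}
    user_photos.setdefault(user_id, []).append(photo_id)  # fan-out on write

def photos_for(user_id):
    # Read path: primary-key look-ups only, no relational join.
    return [photos[pid] for pid in user_photos.get(user_id, [])]
```

The trade is explicit: writes do more work and the data is denormalized, but the read path never needs to intersect two large tables.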

How NoSQL simplifies data administration:

Essentially, there are a lot of people out there that are "using MySQL," but they're using it in a very, very NoSQL manner. Like at Digg, for instance, joins were verboten: no foreign key constraints, just primary-key look-ups. If you had to do ranges, keep them highly optimized and basically do the joins in memory. And it was really amazing. For instance, we rewrote comments about a year-and-a-half ago, and we switched from doing the sorting in MySQL to doing it in PHP. We saw a 4,000 percent increase in performance on that operation.
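The Digg comments change amounts to replacing an ORDER BY with an in-memory sort in the application tier. A toy equivalent, with invented data (Digg did this in PHP; Python is used here for consistency with the other sketches):

```python
# Rows fetched from the database by primary key, unsorted.
comments = [
    {"id": 1, "score": 12},
    {"id": 2, "score": 40},
    {"id": 3, "score": 7},
]

def top_comments(rows, limit=2):
    # The equivalent of ORDER BY score DESC LIMIT 2, done in application memory.
    return sorted(rows, key=lambda c: c["score"], reverse=True)[:limit]
```

The win comes from keeping the database doing only what it is cheapest at (key look-ups) and moving per-request computation onto app servers, which scale horizontally far more easily.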

There's just so many things where you have to basically use MySQL as a NoSQL store. What people have found is, rather than manage these cumbersome MySQL clusters, with NoSQL they get those basics down. With MySQL, if I wanted to spread data across five servers, I'd have to create an application layer that knows that every user that starts with "J" goes to this server; every user that starts with "V" goes over here. Whereas with Cassandra, I just give it token ranges and it manages all of that stuff internally. And with MySQL, I'd have to handle something like, "Oh, crap. What if that server goes down?" Well, now I'm going to have three copies across those five servers. So I build more application logic. And then I build application logic to migrate things. Like what happens if I have three Facebook users that use up a disproportionate amount of resources? Well, I've got to move this whale off this MySQL server because the load's really high. And if I moved him off of it, it would be nothing. And then you have to make management tools for migrating data and whatnot.

Cassandra has all of that stuff built in. All of the annoying things that we had at the beginning, NoSQL has gotten really, really right. Coming from having been in a shop that has partitioned MySQL, Cassandra is a godsend.
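The token-range mechanism Stump credits Cassandra with can be sketched as a consistent-hashing ring: each node takes a token, and a key belongs to the first node clockwise from the key's token. This is a simplified model of the idea, not Cassandra's implementation (though its RandomPartitioner did hash keys with MD5 in a similar spirit).

```python
import bisect
import hashlib

def token(key: str) -> int:
    """Hash a key onto the ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        # Each node takes one token; together the tokens partition the keyspace.
        self.tokens = sorted((token(n), n) for n in nodes)

    def owner(self, key: str) -> str:
        # A key belongs to the first node clockwise from its token, wrapping around.
        toks = [t for t, _ in self.tokens]
        i = bisect.bisect_right(toks, token(key)) % len(toks)
        return self.tokens[i][1]
```

The payoff over modulo sharding is exactly the one from the interview: adding or removing a node only moves the keys in the adjacent token range, instead of reshuffling nearly everything, and the database manages that movement itself.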
