
New submitter rescrv writes "Key-value stores (like Cassandra, Redis and DynamoDB) have been replacing traditional databases in many demanding web applications (e.g. Twitter, Google, Facebook, LinkedIn, and others). But for the most part, the differences between existing NoSQL systems come down to the choice of well-studied implementation techniques; in particular, they all provide a similar API that achieves high performance and scalability by limiting applications to simple operations like GET and PUT.
HyperDex, a new key-value store developed at Cornell, stands out in the NoSQL spectrum with its unique design. HyperDex employs a multi-dimensional hash function to enable efficient search operations — that is, objects may be retrieved without using the key (PDF) under which they are stored. Other systems employ indexing techniques to enable search, or enumerate all objects in the system. In contrast, HyperDex's design enables applications to retrieve search results directly from servers in the system. The results are impressive. Preliminary benchmark results on the project website show that HyperDex provides significant performance improvements over Cassandra and MongoDB. With its unique design, and impressive performance, it seems fitting to ask: Is HyperDex the start of NoSQL 2.0?"

NoSQL is a terrible misnomer, in that the difference is far more than just "doesn't use SQL", and there are NoSQL systems that do actually support SQL. It's really just referring to data storage systems that aren't based on relations. That change in paradigm has its advantages (speed (in some cases), scalability, and flexibility) and disadvantages (speed (in some cases), lack of consistency, less restriction on bad programming). Of course, each NoSQL system tries to mitigate the disadvantages, and each RDBMS tries to prove itself better than all of NoSQL's advantages. It's a big fun party involving lots of mud-slinging.

Most NoSQL systems I've worked with are distributed hash tables, in a basic sense. Each value has a key, and that key determines where it's stored on a cluster. Values are not tied to any other values, so things like "foreign-key relations" are silly in a discussion of NoSQL. Rather, the algorithm to retrieve the data does all of the processing to connect data, using massive parallelization across a cluster to handle huge amounts of data at once.
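That key-to-node mapping can be sketched with a toy hash-partitioning scheme; the node names and the modulo scheme here are invented for illustration, and real systems use consistent hashing so nodes can join and leave cheaply:

```python
import hashlib

NODES = ["node-a", "node-b", "node-c"]  # hypothetical cluster members

def node_for_key(key: str) -> str:
    """Hash the key and map the digest onto one of the cluster nodes."""
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# Every client computes the same mapping, so no central lookup table
# is needed to find where a value lives.
assert node_for_key("user:42") == node_for_key("user:42")
```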

With a traditional RDBMS, the application must fit its data to the schema completely before any data can be stored. This, of course, means that all data in the database can be assumed to be complete. You won't find references that don't exist, which makes queries straightforward.

With NoSQL, the database is treated as a more flexible bucket. Data is dumped in with a key, with little concern for fitting the design of the application's model. This, of course, means a bit more planning at design time, but the data can be arranged to better fit whatever it actually represents. Some details are present, and some aren't, but that's okay. The retrieval algorithm (typically a MapReduce program) should check for the existence of whatever data it needs, and handle errors accordingly. Those MapReduce programs are far more complicated than a simple SQL query, but the database's backend is conceptually simpler as an abstract key/value store. Key/value stores have been around for decades, and studied extensively. They can be made more fault-tolerant and scalable than RDBMS shards, but lack the support for large set-based comparisons.
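That "check for the existence of whatever data it needs" style can be illustrated in plain Python, outside any particular MapReduce framework; the record fields below are invented for the example:

```python
records = [
    {"key": "u1", "name": "Alice", "email": "alice@example.com"},
    {"key": "u2", "name": "Bob"},                 # no email: still a valid record
    {"key": "u3", "email": "carol@example.com"},  # no name: also fine
]

def map_emails(record):
    """Map step: emit an email only if the record actually has one."""
    if "email" in record:
        yield record["email"]

def reduce_emails(mapped):
    """Reduce step: collect the emitted values."""
    return sorted(mapped)

emails = reduce_emails(e for r in records for e in map_emails(r))
print(emails)  # ['alice@example.com', 'carol@example.com']
```

The map step tolerates incomplete records instead of assuming a rigid schema; that tolerance is exactly what an RDBMS column constraint would have enforced up front.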

The comparison to the BASIC-vs-C battles is appropriate. Both BASIC and C serve their purposes well (education and system programming, respectively), but neither should be used where the other is better suited. NoSQL and RDBMSs also both have their places.

Each has its advantages. Travelling by buggy might be speedier, but transporting a dozen people by van might be more efficient (different forms of faster). This is because a van has greater scalability than a buggy. Also, a van is more flexible in that it can transport a variety of cargo/passengers. The high speeds in a buggy going off road can also be a disadvantage, compared to the relative safety of driving on roads following the many traffic rules.

Perhaps I'm just in an old state-of-mind, but what good is data without relations? I don't mean that as a gripe about the system, just, how would one ever pull members of a group, or messages belonging to a user, etc? I guess I don't understand how that's more efficient.

I guess he's saying that with NoSQL the relations are done at application level rather than database level. You still have the equivalent of schema and queries, but they are managed by the code, not the DB engine.

Not all data needs to be related. Look at the history of databases: relational databases are fairly new actually, as they only came around in the 1970s. Before that were hierarchical databases, such as IBM's IMS, which was used on the Apollo program to track all the millions of parts used in that project. Wikipedia has a nice article about it here: http://en.wikipedia.org/wiki/IBM_Information_Management_System [wikipedia.org]. Even though it was first used in the '60s, it's still in widespread use now, and every time you us

A hierarchy IS a relationship. In a hierarchical database, child segments and parent segments were the main kind of relationship used.

All relational databases did was allow the relationships to be more freely defined.

Further to that, a key / value pair is also a relationship, in that the key symbolically represents the data. That's why it is correct to call them NoSQL databases: They forgo the complexity of a general query language. In doing so, they also lose the ability to inherently store anything except the most basic relationship: the key / value lookup.

I've just gotten my NoSQL feet wet by playing around for a weekend with Python + MongoDB. I am pretty used to SQL, and generally had the same thinking you (and most other SQL people) have.

But many people liked it, so I figured I should at least have a look at it. I made a small webapp for tracking my movies, with queries to IMDb and with users. I was surprised to see that most of the problems I anticipated weren't problems at all, and things mostly just worked naturally. For a quick get-started intro to Python + MongoDB: Part 1 [pythonisito.com] and Part 2 [pythonisito.com]. If you've got the spare time and some interest, poking around with it is a great little weekend project.

Anyway, back to your question. MongoDB stores data in a format very similar to JSON (technically BSON, a JSON superset), if you're familiar with that: unordered key->value mappings and ordered lists. The Python driver translates the data to and from native Python dict/list structures. I started with three fields: filename, added and imdb. The imdb field was more or less the raw data from IMDb (JSON format, decoded to Python natives and encoded to MongoDB's BSON format again).

Later on I added an option for users to mark movies as favorites and seen (by adding two new fields to the movie documents, "seenby" and "favoriteof", both lists; these were added to a movie entry the first time someone marked one as seen or favorite). To add a new user I just did movie["seenby"].append(user_id) and movies.save(movie).

When I wanted to query the DB, I created a data structure of what I wanted, and sent that to the server. The server would then return all documents that matched that example structure. So, to find the entry for file "/bla/test.mp4" I would do movies.find( {'filepath': '/bla/test.mp4'} ).

For finding by the IMDb Title value: {'imdb.Title': '300'}. For finding all favorites by a user: {"favoriteof": user_id} (yes, it handles the list of users as you'd expect, and finds all entries whose "favoriteof" list contains that user; it also of course skips all entries without that field).

MongoDB also supports some special keywords for searching. Let's say I have a list of 3 users, and want all movies that any of them have favorited. {"favoriteof" : {"$in": users} } does that; for movies that all of them have as a favorite, {"favoriteof" : {"$all": users} }. Sorting is done using sort( field_and_direction_list ).
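The `$in`/`$all` semantics can be sketched without a running MongoDB instance; the movie documents and helper names below are invented for illustration:

```python
movies = [
    {"title": "300",  "favoriteof": [1, 2]},
    {"title": "Heat", "favoriteof": [2]},
    {"title": "Alien"},  # nobody has favorited this one yet
]

def match_in(docs, field, values):
    """$in: any of `values` appears in the document's list field."""
    return [d for d in docs if any(v in d.get(field, []) for v in values)]

def match_all(docs, field, values):
    """$all: every one of `values` appears in the document's list field."""
    return [d for d in docs if all(v in d.get(field, []) for v in values)]

users = [1, 2]
print([d["title"] for d in match_in(movies, "favoriteof", users)])   # ['300', 'Heat']
print([d["title"] for d in match_all(movies, "favoriteof", users)])  # ['300']
```

Note how documents missing the field entirely are simply skipped, matching the behavior described above.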

You have a full list of modifiers here [mongodb.org]. All of these can of course be combined to quickly and easily create powerful queries. And you of course have options for indexes. You might notice that you do lose something compared to normal SQL here: if you wanted both movie and user info, you'd have to make two queries (well, from what I've understood), so highly relational data is not a good fit for this. Also, you don't have the type constraints any more.

In the app I also wanted to list all movie genres (I did one pass of preprocessing on the IMDb data, splitting the comma-separated genres string into a list of genres) and the number of times each genre was used. This led me to mapreduce, which was the thing I both anticipated and feared most. Well, I kinda chickened out, since the pymongo doc had an excellent example [mongodb.org] which was exactly what I wanted to do, but I did get a look at it at least :) And it was fast enough not to make a noticeable dent in load time for a few hundred movie entries.
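That genre-count task maps naturally onto a map/reduce shape; a plain-Python equivalent (with invented sample documents) might look like:

```python
from collections import Counter

movies = [
    {"title": "300",   "imdb": {"Genre": "Action, Drama"}},
    {"title": "Heat",  "imdb": {"Genre": "Action, Crime"}},
    {"title": "Alien", "imdb": {"Genre": "Horror"}},
]

def map_genres(movie):
    """Map: split the comma-separated genre string into individual genres."""
    for genre in movie["imdb"]["Genre"].split(","):
        yield genre.strip()

# Reduce: count occurrences of each emitted genre.
genre_counts = Counter(g for m in movies for g in map_genres(m))
print(genre_counts["Action"])  # 2
```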

*Cough* well, that was a long post.. I hope it helped you at least a bit in answering your question, and maybe inspire you to take a closer look at it when you get some spare time. I've only used it over a weekend, so I've probably just scratched the surface, and I probably have missed some neat features or horrible gotchas here and

There's another piece to the definition. The traditional RDBMS (Oracle, DB2, SQL Server, MySQL, PostgreSQL) is designed to give 100% consistent results. All other design goals are sacrificed so that two people asking the DB the same question at the same time will get the same answer, and no one can make a modification and someone else gets an answer that is not 100% up to date. NoSQL trades consistency for flexibility/simpler scalability.

There's another piece to the definition. The traditional RDBMS (Oracle, DB2, SQL Server, MySQL, PostgreSQL) is designed to give 100% consistent results. All other design goals are sacrificed so that two people asking the DB the same question at the same time will get the same answer, and no one can make a modification and someone else gets an answer that is not 100% up to date.

This is incorrect. Oracle and MySQL use MVCC for all reads by default. SQL Server is the only one in your list that blocks readers for data where write locks have been issued, unless snapshot isolation or uncommitted reads are enabled for the query. Oracle does not even offer a serialized-reads option.

If one person authorizes $500 on your credit card at 1:00 and consumes your limit and someone else tries to authorize $300 at 1:00:10 and it goes through because the DBMS isn't giving consistent answers, that's a problem.

Changes are consistent... answers are NOT.

NoSQL trades consistency for flexibility/simpler scalability.

You can make consistency tradeoffs with most RDBMS systems as well.

Want to store terabytes of big LOBs and use your DB as a transactional filesystem? It can be done, but it won't be pretty.

SQL = Structured Query Language. NoSQL = key/value stores. With SQL, you have a query, the database parses, plans, and executes it. With NoSQL, you have a key (string, number, etc). The database hashes it and finds the previously stored value.

So that's good at finding the record if you already know the key, but there's no help in finding a record if you don't know the key, or getting a count of records with the same attribute attached... SQL for the win.

So that's good at finding the record if you already know the key, but there's no help in finding a record if you don't know the key, or getting a count of records with the same attribute attached... SQL for the win.

This isn't totally true. In MongoDB, for example, you don't even really have to think about the "primary key" for every document. Many times I don't know it or even care to. If you wants to look up customers in by name, you'd index the last_name and first_name fields and then do your query

So it's basically key/value where the value is a serialized freeform array? So then, if there is no structural integrity at the DB level, it has to be implemented in the application logic? Doesn't that merely displace the performance bottleneck from the DB to the application?

Perhaps I'm not getting the point, but I'd much rather have DB-enforced structural integrity, than have to write all those checks and balances myself for every single app. Computing time is cheap. Development time, not so much.

So it's basically key/value where the value is a serialized freeform array? So then, if there is no structural integrity at the DB level, it has to be implemented in the application logic? Doesn't that merely displace the performance bottleneck from the DB to the application?

By not imposing any structure on what is being stored, performance is very, very good. And yeah, it's up to the application to put and get what it needs properly. The real win is that if your data isn't "relational" meaning tha

At scale, sure. I am not Google. The biggest clusters I manage consist of maybe a dozen machines. If I have the choice between spending a month trying to optimize throughput, or adding an extra node, I'll do the latter because:

- 150+ hours of dev costs about the same as a decked-out DB/web server
- I'd rather spend that month on billable work, or getting ahead of the game
- YAY, more toys!

Despite that, I can see how there's a tipping point, dependent on the relation between traffic and revenue. I'm not exa

This isn't totally true. In MongoDB, for example, you don't even really have to think about the "primary key" for every document. Many times I don't know it or even care to. If you wants to look up customers in by name, you'd index the last_name and first_name fields and then do your query like so:
db.users.find({last_name : 'Cluster', first_name : 'Lost'})

An excellent example.

I think of the NoSQL world as "get a document/piece of data by an indexed data column". It works very well for that. SQL is better for "correlate and compute summation on these data with these sets of conditions".

Can somebody explain how this NoSQL stuff works? It's a database without SQL, so what replaces it? Is this just the difference between BASIC and C being expanded?

The basic distinguishing feature of RDBMS systems is that they are data-centric, so to speak. The DB engine takes care of the metadata, access methods, query plan preparation ("SQL compilation") and optimization, transactions, backups, etc. Additionally, the RDBMS can (or is at least supposed to, if it's a proper RDBMS, see Codd's original work) allow you to separate the design of the physical layout of the data from the conceptual design of the data model ("this column is going to be accessed quite a lo

NoSQL 1.0 is usually not much more than a hash-accessed flat-table database. GDBM, QDBM and BerkeleyDB are all hash-accessed flat-table databases. The refinements mentioned as being added to NoSQL databases (such as searchable indexes) are simply sequential indexes that associate some indexed parameters with the hash value.

NoSQL generally works by you pushing an item into the database and getting one or more hash values back. You want the item back, you give the database the hash values and you get the item. Object-oriented and object-based NoSQL both work by allowing objects to point to other objects. This gives you inheritance. (Basically you have a hash value that points to another record, where the structure of that other record is fixed rather than chosen at run-time via a join statement.)
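The "push an item in, get a hash back" pattern reads roughly like this toy content-addressed store (the class and method names are invented for illustration):

```python
import hashlib

class HashStore:
    """Toy content-addressed store: put() returns the key used by get()."""

    def __init__(self):
        self._table = {}

    def put(self, value: bytes) -> str:
        # The key IS the hash of the content, so identical values dedupe.
        key = hashlib.sha256(value).hexdigest()
        self._table[key] = value
        return key

    def get(self, key: str) -> bytes:
        return self._table[key]

store = HashStore()
key = store.put(b"hello world")
print(store.get(key) == b"hello world")  # True
```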

Basically, database theory describes all the various forms of database you can have: flat-file, hierarchical, network, relational, object/relational, semi-structured, associative, entity-attribute-value, transactional and star (aka data warehouse). A description of some of these can be found here [unixspace.com].

This describes how the data is actually laid out, but does NOT necessarily describe how the data is accessed.

Database theory also describes the following underlying methods of accessing data: sequential, indexed, hash. Any combination of these is permitted, so you can have an index that points into sections of a database that are then searched sequentially for example. Or you can have indexes that point to other indexes that in turn point to a hash value. And so on.

SQL is just a meta-language that allows you to apply a restricted form of set theory on the underlying access methods. There were arguments at the time SQL appeared that it should allow all of set theory - and those arguments still go on, with some SQL alternatives using actual set theory notation as opposed to SQL notation.

NoSQL, in some cases, is just direct access to hash tables for directly accessing items. In other cases, it's a lightweight abstraction layer.

In the example advertised in the summary, an object is referenced through a set of indexes. If you have a partial set of indexes, you reference multiple objects but they will be related in some way. There is nothing X.0 about it, it's just a NoSQL database that uses a network database topology rather than a flat-file topology. It is nothing new.

I recognize that marketspeak is what sells things, that calling the systems by what they actually are would not be nearly as impressive to managers. Managers do not, as a rule, read Slashdot. Geeks and Nerds read Slashdot. Geeks and Nerds know Database Theory. (Well, if they don't, they damn well should -- either that, or they can use Google to look the terms up.) The two additions to database theory in the past 30 years have been the Object-Relational and Object-Oriented models.

1) No, only some map the model direct to disk. Oh, and except on one or two very primitive databases, views aren't mapped into a physical form on disk. MySQL only does so when you tell it to use a storage engine that allows you to map the model that way AND you configure it to. In the example you gave, you could use 1 table and 1 view (or, indeed, 1 table and 1 table-returning function) for almost every relational database out there. More s

That's half of the point of NoSQL, or at least Mongo. The point is to have very large data sets that can be accessed quickly and reliably (but not necessarily consistently). Mongo does that in two ways: by simplifying the data store significantly and by providing fast and easy replication and sharding. It's usually as simple as designating which group the server belongs to and then letting mongo take care of the rest.

Dead on. And I'm currently building an ecommerce site on openldap. It's way better than it used to be. In particular, I'd never use it in the past because slurpd stank. Now that that's gone their replication is fast and solid. And yeah, NoSQL is basically a poor reimplementation of well tuned LDAP.

Dead on. And I'm currently building an ecommerce site on openldap. It's way better than it used to be. In particular, I'd never use it in the past because slurpd stank. Now that that's gone their replication is fast and solid. And yeah, NoSQL is basically a poor reimplementation of well tuned LDAP.

OpenLDAP is not the directory server you seek. Switch to 389 you will.

Although I don't know how well OpenLDAP handled replication -- the 'many servers' part...

OpenLDAP handled replication in two different ways. Older OpenLDAP servers used a separate daemon ("slurpd") to handle replication. IME, it worked pretty well. New OpenLDAP servers...well, it's pretty much just voodoo*, but it seems to work, too <shrug>

*Okay, it's not really voodoo, but I haven't spent the time to figure it all out yet. I believe it's more a network of peers than the older master/slave server configuration, but I don't completely understand all the details of how the

is the best introduction to this subject I've seen. Until someone can explain the pros of hyperdex with a funny video featuring cute animals I'm sticking with technology that's been tested more thoroughly.

The hashing system is pretty neat. The idea that you could get at records without their specific key via search criterion is astounding.

In the future more advanced hashing systems will allow NoSQL databases to extract a set of records all containing a similar subset of data without keys at all!

Of course we'd need a name for the sections that are matching. Perhaps "Columns", yeah, then each result returned could be called a "Row", makes sense. I bet you could then create even more complex matching patterns for multiple "Columns" against each record in the data-set. If only there was a language to describe the query we're sending to the servers... Oh! Server Query Language!

And we'd still be able to have the cluster support, scalability, lax schema, and MapReduce algorithms NoSQL currently provides, right? Sometimes those aspects are vital to the application design, and key to the system's overall performance.

[citation needed], and preferably one that actually covers NoSQL as it's intended for use.

Last time I checked thoroughly (2009), most RDBMSs (MS SQL Server included) could scale across an arbitrarily-large cluster, but for every doubling of the cluster's power, the costs would be around 300% to 400%. When you get to the point of needing billions of rows per table (and yes, there are applications out there that need that, even at relatively small startups), those outpacing costs become prohibitive.

The lax schema isn't about not knowing what you're doing, but about acknowledging that you won't know everything about the data you'll receive. Back when I did server programming, the mantra was "be strict in what you provide, and lax in what you accept". This is that principle applied to databases. Maybe the website you're crawling doesn't have a title, or its address is obviously dynamic. Maybe the medical record's patient has seven different insurance providers. Maybe the passport holder legally doesn't have a surname. When you design a schema for a strict database like an RDBMS, you make certain assumptions about the data you'll get. Those assumptions lead to performance increases if they're accurate, and failure if they're wrong.

MapReduce is the key to performance without assumptions, at lower cost. By moving processing to the data, and replicating the data to multiple nodes, network transfer is reduced greatly. The MapReduce programs are designed to operate on any amount of data they are presented with, so each node in the cluster contributes its available resources, and since the data is spread evenly, most "queries" will be partially processed by every node. Contrast that with RDBMS sharding, where certain servers handle certain shards, and the massive parallelism of the cluster isn't used. Some servers will sit idle while others do all of the work. Note that the parallelism applies generally, to all MapReduce algorithms. This means that you do not need to make as many assumptions about your queries ahead of time, like expecting to only look up a customer by name or phone number (and therefore indexing those).
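The "every node processes its own shard" idea can be sketched with a partitioned word count in plain Python; the shards and data are invented, and a real system would run each map task on a separate machine rather than in one process:

```python
from collections import Counter

# Data spread evenly across three "nodes" (here, just three lists).
shards = [
    ["apple", "pear"],
    ["apple", "plum"],
    ["pear", "apple"],
]

def map_shard(shard):
    """Each node counts only its local shard; no data moves until reduce."""
    return Counter(shard)

def reduce_counts(partials):
    """Merge the per-node partial counts into the final result."""
    total = Counter()
    for p in partials:
        total.update(p)
    return total

result = reduce_counts(map_shard(s) for s in shards)
print(result["apple"])  # 3
```

Because the shards are processed independently, adding a node splits the map work further instead of leaving some servers idle, which is the contrast with shard-per-server RDBMS setups drawn above.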

NoSQL isn't just "not using SQL". It's a different storage paradigm, which comes with its own advantages and disadvantages.

It seems to me that laxness and strictness of schemas is very much like static or dynamic typing. With static typing, certain classes of errors simply cannot happen, but you can design yourself into a corner. With dynamic typing, smooshing things around is a bit easier, but you can get runtime errors if you don't design it properly. Personally, I prefer static typing.

Google? Yeah, they clearly didn't have a clue what they're doing when they invented MapReduce.

Facetiousness aside, the highly structured storage with tables and columns that a Codd style relational database provides is a better fit for most problems than most of the key/value pair (KVP) databases out there. There is too much R&D invested in that technology to just ignore. Using KVP puts more work on the programmer to organize the data. Storing serialized tuples in JSON or XML or whatever is en vogue

Some NoSQL "db"s support 256-bit keys, and everyone knows filesystems can only support 8.3 filenames, so at 8 characters of 7-bit ASCII that's only something like 56 bits. If only Microsoft had a filesystem supporting longer filenames... maybe next decade.

(note I'm intentionally avoiding the idea of a 256 directory deep filesystem, each directory containing a subdirectory 0 or 1, because that is just... illness)

Previously, I was developing MMO backend software that used MySQL for data storage. The fit to the model was completely inappropriate; there were just no applications of the relational model, since we were just checking in and out large blobs of data, not actually performing read/update transactions. Storing records (persistent game entities) as files in a directory would have worked far better than forcing that stuff into a relational DB. But customers know that Databases are what professionals use, so we did it anyway. Clients can buy it, realise they need the flat files and turn them on after benchmarking; we get the sale, they get a good product in the end, win-win, but a bit of wasted effort.

Now NoSQL is what professionals use, relational DBs can be used for what they are good at and NoSQL gives us marketing hype for doing certain things in the right way that could have been done using filesystems all along. I couldn't be happier. Furthermore we get this nice application level distributed data store with map-reduce stuff built in if we can be bothered using it.

Here's what most geeks don't get about marketing: it's not just about being smarter than the other guy, you've got to be smarter than him and make him give you his money. Money is good, it buys freedom and power and if branding makes sure that you have more of this freedom and power than the fool who falls for it, then the world will be a better place.

Well, no, I'm arguing your point as best I can, this stuff is too murky to "prove" anything concretely, but you're welcome.

A technical discussion is not at all biased by marketing. What's most efficient is most efficient, what is most stable is most stable, what can be implemented the fastest can be implemented the fastest, no matter what the marketing concerns regarding who wants to buy it. But still, the "best" solution involves many factors; the technical factors are extremely important, but you've still g

This is a type of index, not a type of database. See locality-sensitive hashing. [wikipedia.org] It's an efficient way to find keys which are "near" the search key in some sense.

Such a mechanism could be provided in a key/value store or an SQL database. It's even possible to do it on top of an SQL database. [compgeom.com] It's more powerful in a database that can do joins, because you can ask questions with several approximate keys.
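One common flavor of locality-sensitive hashing uses random hyperplanes: each plane contributes one bit (which side of the plane the vector falls on), so similar vectors tend to share signature bits. A minimal sketch, with made-up dimensions and vectors:

```python
import random

random.seed(0)

DIM, BITS = 4, 8
# Random hyperplanes; vectors on the same side hash to the same bit.
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def lsh_signature(vec):
    """One bit per hyperplane: the sign of the dot product."""
    return tuple(
        int(sum(p * v for p, v in zip(plane, vec)) >= 0)
        for plane in planes
    )

a = [1.0, 0.9, 0.0, 0.1]
b = [1.0, 1.0, 0.1, 0.0]    # close to a
c = [-1.0, 0.0, 0.9, -1.0]  # far from a

# Nearby vectors typically agree on more signature bits than distant ones,
# so bucketing by signature finds approximate neighbors cheaply.
same_ab = sum(x == y for x, y in zip(lsh_signature(a), lsh_signature(b)))
same_ac = sum(x == y for x, y in zip(lsh_signature(a), lsh_signature(c)))
print(same_ab, same_ac)
```

Since only the sign of the projection matters, positively scaling a vector leaves its signature unchanged; that is why this scheme approximates cosine (angular) similarity rather than Euclidean distance.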

This is an area of active research. Many machine-learning algorithms are scaled up by locality-sensitive hashing, so they can work on big data.

I'm kind of confused about your reply, but to clarify -- if you want fast random access, XML is a terrible format. If you want to transfer data between two systems, XML can be excellent. The example of Oracle being able to return XML data just confirms what I'm saying -- the data is stored in Oracle's binary format, and *transferred to you* as XML.

You seem to be thinking I'm claiming there's something wrong with XML, but all I'm saying is that XML files are not designed to be databases.

But they don't store the data as XML. They usually decompose it down to Infoset, and then store that in some relational fashion with indexes and stuff; and reconstitute XML when returning results of a query.

They usually decompose it down to Infoset, and then store that in some relational fashion with indexes and stuff; and reconstitute XML when returning results of a query.

All of that processing and reconstitution really destroys the nutritional value, and excess compression contributes to high 0x80 levels. You really should be mindful about the data that you are putting into your program.

I don't know about Oracle, but in my experience XML databases built on top of RDBMSes (I'm looking at you Microsoft) suck. XML data is often highly unstructured, and at least in the case of SQL Server, tries to force unstructured XML into a structure and ends up doing it poorly.

Although the coordinator is logically centralized, we've got a version in the works that uses Paxos (a consensus algorithm) to distribute the coordinator as well.
For more information check out http://openreplica.org/ [openreplica.org]