Non-relational database notes

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)


Key-value stores

When you can fit your application into a key-value model, there are fairly simple and fast implementations (often memory-cached, disk-backed) that give you a basic database able to store a lot of information, without needing an external database server.

If you have something in an RDBMS where you are actually mostly retrieving by primary key, and doing few or no joins (which may include things like simple data logging), you could use one of these instead.

Disk-based

Various file-based stores (see e.g. File database notes) are effectively disk-based key-value stores.

Used more than you may think; Berkeley DB is used in many embedded setups.


Since they are often indexed, and cached in some way, you may easily think of them as fastish disk-backed hashmaps.
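
For a concrete feel of that hashmap-like interface, a minimal sketch using Python's built-in dbm module (the filename is made up for illustration):

import dbm

# 'c' opens the file, creating it if it doesn't exist yet
with dbm.open('example.db', 'c') as db:
    db['user:42'] = 'some serialized value'  # keys and values are stored as bytes
    print(db['user:42'])                     # b'some serialized value'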


There has been some revival in this area; see e.g. Tokyo Cabinet (basically a faster reimplementation of the dbm idea, similar to BDB in a few ways), with some existing extensions (e.g. Tokyo Tyrant to make it networked, Tokyo Dystopia to add full-text search).


Networked

Networked key-value stores may be caches or stores.

That is, some are purely there for scaling and memory caching, and do no disk backing, because disk backing usually introduces a lot of expectations that can only be met by a specific design and some tradeoffs on your part.

Networked memcaches (see e.g. memcached below) are an almost direct extension of the basic key-value store.

NoSQL

See also: http://en.wikipedia.org/wiki/NoSQL

Specific software notes

memcached notes


memcached is an in-memory key-value cache (LRU-style).

It is a network daemon, and can be used in a distributed way by configuring clients to do so, which means that many clients can share this cache (and avoid platform-specific IPC mechanisms).


There is no access control; firewalls should be enough.


It works purely in memory, is written to never block, and is modelled/optimized in a few other ways to make it always respond quickly.

The cache throws away data based on expiration timeouts, as well as LRU (Least Recently Used) logic so that it cannot overflow, and generally stays full with the items most likely to be requested again.

Its most common use is probably caching data that was complex and/or high-latency to generate but is not very volatile (under some set of variables and within some time, controlled by you), so that you can easily store and re-fetch this data (...and manage invalidation).
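
For example, the usual get-or-generate pattern, sketched here with the pymemcache Python client (one of several clients; expensive_query and the key scheme are made up for illustration):

from pymemcache.client.base import Client

client = Client(('127.0.0.1', 11211))

def expensive_query(report_id):
    return 'report %d contents' % report_id  # stand-in for the slow generation

def get_report(report_id):
    key = 'report:%d' % report_id
    value = client.get(key)                # None on a cache miss
    if value is None:
        value = expensive_query(report_id)
        client.set(key, value, expire=300) # consider it stale after 5 minutes
    return value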


It is not:

  • redundant. You are probably looking for a distributed filesystem if you are expecting that. (you can look at memcachedb and MogileFS, and there are many others)
  • a document store. Keys are limited to 250 characters and values to 1MB. (again: look at distributed filesystems)
  • a transparent database proxy. If your data has to persist, you should consider any data dependencies, and any entries that should be invalidated when the database is updated. There are of course other ways of solving this (e.g. periodical memcache state dumps into the database, adding code that makes the database slave to the memcache state, and others)


Originally developed for LiveJournal (by Danga Interactive) and released under a BSD-style license.


Daemon options

The main command line options:

-d            daemon
-m 2048       take 2048MB of memory
-l 10.0.0.40  bind to this IP 
-p 11211      ...and this port

The default amount of memory (64MB) is fairly little for large-scale use; you will want to give this a chunk of memory that serves your needs.

On 32-bit machines, you usually cannot give a single process more than 2GB or 3GB of memory (see 4GB of memory on a 32-bit machine).

However, this is not really a problem, as you can start several separate daemons on the same host and have memcached clients distribute data among them (this is a client/protocol feature).
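
A sketch of that, using pymemcache's HashClient (the ports are illustrative); the client hashes each key to one of the daemons:

from pymemcache.client.hash import HashClient

# e.g. three daemons on one host, started with -p 11211 / -p 11212 / -p 11213
client = HashClient([
    ('127.0.0.1', 11211),
    ('127.0.0.1', 11212),
    ('127.0.0.1', 11213),
])

client.set('some_key', 'some_value')  # goes to whichever daemon the key hashes to
print(client.get('some_key'))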

Some client, server, and interaction details

Clients can choose to distribute objects among various daemons/computers (based on the hash; you may also get control of the hash so that you can ensure related objects are stored on the same server).

A single server is mostly just a simple and fast hashmap. The client adds another layer of hashing, in that it chooses the server to store an object on based on its hash and the available servers. For this reason, for optimum cache hits, all clients should use the same client list, and use the same hashing method. It helps to have each be the same client implementation (also because each may serialize its own type of data; you may want to set up separate pools for separate languages, or split it by fake-namespacing the keys).

Note that when you change the server list, the client may choose to invalidate all data. You should probably also update all clients at the same time, so that they agree on the same server mapping.

Client libraries are robust to a server dropping out in that they will react as if all cache lookups are misses, causing your app to fall back to always generating things the slow way. You can bring up memcached at the same address, which will then start caching again.
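
For example, pymemcache can be told to treat fetch errors as misses, so the same fallback path covers a downed server (render_page is made up for illustration):

from pymemcache.client.base import Client

# ignore_exc=True: errors on fetches come back as cache misses (None)
client = Client(('127.0.0.1', 11211), ignore_exc=True)

def render_page(page_id):
    return '<html>page %s</html>' % page_id  # stand-in for slow generation

value = client.get('page:front')  # None on a miss, or if the server is down
if value is None:
    value = render_page('front')  # fall back to generating it the slow way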


Client interfaces

There are APIs/clients for most major languages, and you can implement your own by reading the protocol.


Exactly what the interface takes and returns varies. It may be dictionaries, persisted objects, bytestrings, etc.


Basic usage notes

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

Retrieval:

  • get: note that you can ask for multiple entries in a single request
  • gets: fetches more information, including the value needed for the cas operation (see below)

Storage commands are:

  • set: creates entry if necessary
  • replace: like set, but only stores if the key already existed
  • add: like set, but only stores if the key was not already present
  • append: append to current cached value (only if it already existed)
  • prepend: prepend to current cached value (only if it already existed)
  • incr and decr increment and decrement a 64-bit integer. Entry must already exist (so e.g. set(key,'0') first). Interprets a non-integer value as 0.


  • cas: 'check-and-set': store, but only if no one else has updated it
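
Roughly, via the pymemcache client (method names vary a little between clients):

from pymemcache.client.base import Client

client = Client(('127.0.0.1', 11211))

client.set('greeting', 'hi')         # creates or overwrites
client.add('greeting', 'hello')      # does nothing: key already exists
client.replace('greeting', 'hello')  # stores: key exists
client.append('greeting', ' there')  # value is now 'hello there'

client.set('hits', '0')              # incr/decr need an existing entry
client.incr('hits', 1)               # -> 1

value, cas_token = client.gets('greeting')      # value plus its CAS token
client.cas('greeting', 'hello all', cas_token)  # stores only if unchanged since the gets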

You should write your interaction to minimize the number of round trips (i.e. the number of commands) (verify)

get_multi: fetches many keys at once; avoids the latency overhead of doing multiple requests.

flush_all: Clears the cache. Useful when developing, since you cannot list the cache's items in order to clear them individually.


The expiration time handed to storage commands is interpreted either as:

  • if <2592000 (which is 30 days in seconds): a delta time from the current server time
  • if larger than that: Unix (seconds-since-epoch) time
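
So, with a client that passes that number through (pymemcache's expire parameter shown; the threshold is the protocol's, not the client's):

import time
from pymemcache.client.base import Client

client = Client(('127.0.0.1', 11211))

client.set('a', '1', expire=60)                     # < 2592000, so: 60 seconds from now
client.set('b', '2', expire=int(time.time()) + 60)  # >= 2592000, so: absolute unix time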

Designing access models; Tricks

Rules of thumb:

  • Tackle the most obvious cases first. Usage probably follows 90-10 patterns. You can leave this to the LRU behaviour of the cache, but in some cases you can keep a bulk of rarely-reused data from entering the cache at all.
  • Aside from the obvious networking and management costs, also consider serialization (marshalling) costs.


Things to consider:

  • Your setup may count on touching cache elements, but a badly designed setup may mean a lot of touches per page view (or other overall product), which bottlenecks your access to memcached (there are a few different ways to reduce touches)
  • It can help to layer your cache a little more. For example, fragments of pages may be constant, and could be cached. Some of this can also be cached in whatever end process you have, to lighten the load on memcached for things that have fairly simple/obvious/static use cases.


  • You may want to use cacti/munin/some other logging/graphing on certain stats while you are developing, to see long-term patterns; you may also spot some obvious mistakes this way, in, say, the relative numbers of gets and sets.
  • You can't really control the treatment of subsets of elements. That is, you can't say that certain elements should always be removed first. When you are using memcached for small-scale app caching rather than for application scaling, it may be useful to run multiple daemons, one per cache, to get separate treatment per cache (this does waste some memory, but also means you can easily set a memory limit for each such namespace; contrast with the key-prefixing sketch below)
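
The key 'fake-namespacing' mentioned earlier is just prefixing, and lets distinct uses share one daemon without key collisions (though without per-namespace memory limits); the prefix scheme here is made up for illustration:

from pymemcache.client.base import Client

client = Client(('127.0.0.1', 11211))

def ns_key(namespace, key):
    return '%s:%s' % (namespace, key)  # e.g. 'pagefrag:front/sidebar'

client.set(ns_key('pagefrag', 'front/sidebar'), '<div>...</div>', expire=60)
client.set(ns_key('session', 'user42'), 'serialized-session-data', expire=3600)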



CouchDB notes

MongoDB notes

redis notes