SQLite notes
===On concurrency===

Say the writers: "SQLite is not designed to replace Oracle. It is designed to replace fopen()"

That said, it does about as much as it can with what the filesystem provides, which is pretty great on a single device - but keep in mind there are limits, and that filesystems vary. In particular, some network filesystems do not do enough.
====Concurrency from distinct tasks====

'''Multiple threads'''

There is usually nothing to be gained from distinct threads accessing the same database, so maybe avoid doing this.

Yet if you insist: SQLite can be compiled to be thread-safe, and usually is. {{comment|(the reason you can compile ''without'' is that removing this safety is ''slightly'' faster for cases you know will always be single-threaded)}}

You can query that compile option from the database at runtime if you want to be sure.

[https://www.sqlite.org/threadsafe.html This page] points out there is a further distinction between multi-threaded (where threads ''must not'' share a connection) and serialized (where that appears to be fine).

Some libraries make their own decisions on top of this - e.g. python3's sqlite3 module will complain that "SQLite objects created in a thread can only be used in that same thread", though that check seems overzealous.
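Querying that at runtime might look like the following Python sketch ({{inlinecode|PRAGMA compile_options}} lists build-time options, which should include the THREADSAFE setting):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# PRAGMA compile_options lists the options this SQLite library was built with;
# THREADSAFE=0/1/2 means single-thread / serialized / multi-thread respectively
opts = [row[0] for row in conn.execute("PRAGMA compile_options")]
print([o for o in opts if o.startswith("THREADSAFE")])

# Python's sqlite3 module also exposes its own DB-API 2.0 threadsafety level
print(sqlite3.threadsafety)
```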
'''Multiple processes'''

Multiple processes can access the same database.

Assume that:
* multiple processes can read concurrently
* writes are not concurrent with other writes
* writes are not concurrent with reads

It is somewhat up to clients to error out, or wait and/or time out{{verify}}

Things work a little better with a non-default journaling method (WAL, see the next section), but it comes with more requirements.
Limitations:
* locking (and the journaling that relies on it) relies on some filesystem semantics
: assume this will not work properly over some network filesystems (e.g. NFS, some SMB, a.k.a. windows network mounts).
* You should ''not'' carry an open connection through a [[fork]]
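How lock contention surfaces to clients can be sketched with two connections standing in for two processes (the {{inlinecode|timeout}} parameter is how long SQLite's busy handler waits before erroring out; the table is made up):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "demo.db")

a = sqlite3.connect(path, timeout=0.1)  # wait at most 0.1s on a lock
b = sqlite3.connect(path, timeout=0.1)
a.execute("CREATE TABLE kv (k TEXT, v TEXT)")
a.commit()

a.execute("BEGIN IMMEDIATE")            # take the write lock and hold it
a.execute("INSERT INTO kv VALUES ('a', '1')")

try:
    b.execute("BEGIN IMMEDIATE")        # second writer: waits, then times out
    print("acquired")
except sqlite3.OperationalError as e:
    print(e)                            # "database is locked"

a.commit()
```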
====More on concurrent operations====

{{stub}}

<!--
Safety from multiple processes ''relates'' to how proper atomic commits are done,
namely in that it's largely details in the rollback journal, or WAL.

Note, before we dig into underlying details, that you can always unintentionally mess up your own ability to have concurrency, e.g. by
* keeping transactions open (easier to do with e.g. autocommit off)
* counting on garbage collection to close a connection, rather than an explicit close()
* database middleware doing their own thing to make things worse
:: this can make a lot of sense in a "make everything act the same" way, but can still cost you concurrency

Comparison:
* in the absence of writing, both equally allow lots of concurrent reading
* WAL allows writes to collect while reading is still going on; the rollback journal does not
* WAL is more about concurrency than speed
: there are only some cases where more concurrency means more speed
The details of the two journaling styles also dictate what operations can be concurrent.

The default is using a '''rollback journal'''.
: This
:: copies the original version of what we are altering into a separate file
:: we alter the database file itself directly
:: and close=commit=remove that separate file.
: after a crash, the separate file is replayed to fix the database file
: rollback of an ongoing transaction is similar
: which means
:: you can have many concurrent readers
:: ...but reading is exclusive with writing

The alternative is '''WAL''' style journaling
: It
:: collects writes into a separate file (from multiple transactions)
:: the database file is as yet untouched (so separate from ongoing reads)
:: only writes into the database file at checkpoint time.
: which means
:: reads often don't block writes (whereas with a rollback journal they easily do)
:: writes don't block reads
:: writes ''do'' still block other writes (there's only one WAL)
:: checkpointing (=writing WAL contents into the database) ''can'' be largely concurrent with reading, ''but'' there are more hold-up edge cases.
:: writes are a little faster, because they happen once, not twice as with rollback journaling
:: reads can be slower, because each reader must sometimes check whether the content they need is in the WAL
:: you can balance it towards somewhat faster reads, or writes, by controlling the WAL size

If you're familiar with MVCC: this is ''not'' MVCC, but it's halfway there.
As such, read-heavy workloads are probably more concurrent.
-->
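Switching between the journaling modes is a one-line pragma; a small Python sketch (the file path is throwaway):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# the default for a file-backed database is a rollback journal ('delete' mode)
print(conn.execute("PRAGMA journal_mode").fetchone())       # ('delete',)

# switch to WAL; this setting persists in the database file itself
print(conn.execute("PRAGMA journal_mode=WAL").fetchone())   # ('wal',)

# force a checkpoint (writing WAL contents back into the main database file)
print(conn.execute("PRAGMA wal_checkpoint(TRUNCATE)").fetchone())
conn.close()
```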
<!--
Say, in a nonsense benchmark I can have 10 concurrent at 30 requests/sec each, no issue
...but in real code I can get it to break with two processes and a low transaction rate
-->

<!--
https://fly.io/blog/sqlite-internals-rollback-journal/
-->
See also:
* https://www.sqlite.org/wal.html
===CLI admin===

<!--
Note that the CLI is intentionally simple, so some output formatting takes fiddling.

Sometimes even just getting a column name to be readable (in {{inlinecode|.mode column}}) requires trickery, because
 .width auto
just looks at XX, so sometimes forcing things like:
 .width 10 10 30 10 auto
 sqlite> SELECT * FROM sqlite_master;
works a little better.

https://www.sqlite.org/cli.html#changing_output_formats
Vacuum database:
 VACUUM;

List tables:
 .tables
 -- or
 SELECT name FROM sqlite_schema;  -- since 3.33; before that:
 SELECT name FROM sqlite_master;
List indexes:
 .indexes
 -- or
 PRAGMA index_list('tablename');
Describe table:
 PRAGMA table_info('TABLENAME');
(you might like {{inlinecode|.headers on}} and {{inlinecode|.mode column}})

In SQL form:
 .schema tablename
Get table size (in bytes?):
 SELECT SUM(pgsize) FROM dbstat WHERE name='TABLENAME';

Which you may prefer in a "give me the largest" form like:
 SELECT name, SUM(pgsize)/1024 AS table_size FROM dbstat GROUP BY name ORDER BY table_size DESC;
(note: requires SQLITE_ENABLE_DBSTAT_VTAB)
 .schema
shows the full schema; {{inlinecode|.fullschema}} also adds things like the stat_ tables (the intent seems to be debugging, in that it lets you recreate the query plan).

{{inlinecode|.databases}} - open databases in the current connection, where main is your data, and temp seems to be a tablespace for temporary tables.

{{inlinecode|.open}} opens another database file (after closing the current one).

{{inlinecode|.read}} and {{inlinecode|.import}} are for importing data from SQL and CSV files.

{{inlinecode|.dump}} puts the current database into one UTF string, usually used from the prompt something like:
 sqlite3 example.db .dump | gzip -c > example_db.sql.gz
As the docs mention, this is pure SQL, so you ''could'' stream it to another database engine.
'''dbstat'''

dbstat is a virtual table, meaning it calculates things on the fly.
It can be useful to calculate things like the size of indexes, though.

Beware that this means that e.g.
 SELECT DISTINCT name FROM dbstat;
is not something you want, because it touches a ''lot'' of data (and you can get that list elsewhere, see e.g. sqlite_master, or PRAGMA index_list for table-specific indexes).

Similarly, the difference in cost between something like:
 SELECT SUM(pgsize) FROM dbstat WHERE name='sqlite_autoindex_kv_1';
and
 SELECT name, SUM(pgsize) AS size FROM dbstat WHERE name LIKE 'sqlite_autoindex_%' GROUP BY name;
can be larger than you think.
-->
===OPTIMIZE, ANALYZE, etc===

<!--
'''OPTIMIZE'''

Roughly speaking,
the query planner counts cases where (on a given connection) it would have been useful to have (any, or newer) ANALYZE results at hand;
optimize then does an ANALYZE only where that bookkeeping suggests it would pay off.

As such, it can be quite useful to do an OPTIMIZE before you close a connection -- if the table never changes this is usually a no-op,
and the times at which it ''does'' do work are probably good for subsequent access.

ANALYZE (by default) does a full scan of every index, which ''can'' be slow for larger data.
You can limit the amount of work any one ANALYZE does via {{inlinecode|PRAGMA analysis_limit}}.
You should consider this an ''approximate'' ANALYZE (it may still be accurate if there was little work to be done, but don't count on this).

https://www.sqlite.org/lang_analyze.html
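In code, that pattern might look like the following Python sketch (the table, index, and limit value are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER PRIMARY KEY, b TEXT)")
conn.execute("CREATE INDEX t_b ON t(b)")
conn.executemany("INSERT INTO t(b) VALUES (?)", [(str(i),) for i in range(500)])

# cap how much work any one ANALYZE does (making it approximate)
conn.execute("PRAGMA analysis_limit=400")
# analyze only where the planner's bookkeeping suggests it would help;
# typically run just before closing a long-lived connection
conn.execute("PRAGMA optimize")

# an explicit ANALYZE always (re)creates sqlite_stat1
conn.execute("ANALYZE")
print(conn.execute(
    "SELECT name FROM sqlite_master WHERE name='sqlite_stat1'").fetchone())
```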
sqlite_stat1
: useful mainly to the query planner (contains information about tables and indices)
: allowed to be ''read'' (but not altered) by applications, which is sometimes useful
: https://www.sqlite.org/fileformat2.html#stat1tab

sqlite_stat4
: can be useful to the query planner (contains information about keys within an index, or the primary key of a WITHOUT ROWID table)
: won't be updated by a partial ANALYZE
: https://www.sqlite.org/fileformat2.html#stat4tab

sqlite_stat2
: only a thing between 3.6.18 and 3.7.8; basically replaced by 3 and 4

sqlite_stat3
: can be useful to the query planner (contains information about the keys in an index)
: 4 reads 3, but does not write it
: you can consider 3 deprecated if you use 4{{verify}}
 ANALYZE kv;
 SELECT * FROM sqlite_stat1;
 kv|sqlite_autoindex_kv_1|433 1

 sqlite> SELECT count(*) FROM kv;
 433

BUT note that
* that 433 is the size of the index, which we only know is useful when we e.g. know it comes from a PRIMARY KEY / UNIQUE, NOT NULL constraint
* before an ANALYZE, sqlite_stat1 will not exist

Apparently SQLite can only use one index for each table in a query,
so EXPLAIN can be more important than it is elsewhere,
to tell whether it's choosing the right one -- and in some cases you can push it towards a better index choice.

https://stackoverflow.com/questions/12947214/sqlite-analyze-breaks-indexes
List all indexes:
* SELECT name FROM sqlite_master WHERE type='index';
* or, on the prompt, .indexes

SQLite automatically creates internal indexes for UNIQUE and PRIMARY KEY constraints, [https://sqlite.org/fileformat2.html#intschema]
: which will be called something like sqlite_autoindex_table_''number''

There are also per-query temporary indexes created by the query optimizer [https://sqlite.org/optoverview.html#autoindex] when it estimates that creating that index in RAM is faster.
-->
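The auto-index naming, and which index the planner picks, can be inspected with EXPLAIN QUERY PLAN; a small Python sketch (the table name is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# a TEXT PRIMARY KEY gets an automatic internal index, sqlite_autoindex_kv_1
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")

# EXPLAIN QUERY PLAN reports which (single) index SQLite chose for each table
for row in conn.execute("EXPLAIN QUERY PLAN SELECT v FROM kv WHERE k = 'a'"):
    print(row)   # the detail column mentions sqlite_autoindex_kv_1
```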
===Errors===
Latest revision as of 17:53, 25 March 2024
For some introduction, see Some_databases,_sorted_by_those_types#SQLite
On text coding
On SQLite typing
This page mentions a lot of what you want to know.
For contrast: where most RDBMSes are statically typed (converts data to the column's type on storage, fails if it can't)...
...SQLite is more like dynamic typing in that SQLite decides the type from the value you give it, and only somewhat cares about the according column's type.
A little more precisely, the schema defines a type affinity, not a rigid type.
- 'type affinity' here meaning 'the value may still be of any type, but we prefer this one if that works'
Declared SQL types map to one of the affinities INTEGER, REAL, NUMERIC, TEXT, or BLOB (NULL is a storage class, not an affinity)
- e.g. INTEGER is split into seven specific-sized things -- on disk, anyway; in memory all are loaded into int64(verify)
- TEXT is stored as UTF-8, UTF-16LE, or UTF-16BE (according to the database encoding) (verify)
For example(verify)
- if you store into a column with NUMERIC affinity, it will do something like
- if the value is not well formed as an integer or real, it's stored as TEXT
- if it's well formed as a real number, it's stored as REAL (which seems to be a float64, so if there was more precision in the text, that is lost)
- if it's well formed as an integer, it's stored as INTEGER (on disk, using the smallest integer encoding that will hold it)
- if it's an integer larger than an int64 can store, it will try REAL
- hex notation is stored as text
Assume that the schema is not used for conversions going out
AFAICT, what you get out is whatever type that got stored, and it is the logic on insert that is interesting.
...at least, that's what dynamically typed languages tend to do with it. Statically typed languages (including SQLite's own API) may get you to convert it, and/or make you test for it(verify).
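A quick way to see this in action is SQLite's typeof() function; a Python sketch (the column and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n NUMERIC)")  # a column with NUMERIC affinity
for value in ("12", "12.5", "abc", 9223372036854775807):
    conn.execute("INSERT INTO t VALUES (?)", (value,))

# typeof() reports the storage class actually chosen for each stored value
for row in conn.execute("SELECT n, typeof(n) FROM t"):
    print(row)
# (12, 'integer')   - well formed integer text, converted
# (12.5, 'real')    - well formed real text, converted
# ('abc', 'text')   - not numeric, kept as TEXT
# (9223372036854775807, 'integer')
```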