This page describes cases where bugs in SQLite have led to database corruption.

Database corruption is usually caused by things other than SQLite bugs - things over which SQLite has no control. For other sources of corruption in SQLite database files, see http://www.sqlite.org/lockingv3.html#how_to_corrupt and ticket #2224.


Fixed in Version 3.4.0 (2007-06-15):

In versions 3.3.0 through 3.3.17, if an SQLITE_BUSY error occurs within an explicit transaction and that transaction is not rolled back, the database file can be corrupted. See CorruptionFollowingBusyError for additional information.
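
As a minimal sketch only, the following C fragment shows the safe pattern with the SQLite C API: if a statement inside an explicit transaction returns SQLITE_BUSY, roll the transaction back rather than leaving it open and continuing. The database file name and table name are hypothetical.

    /* Sketch: roll back after SQLITE_BUSY inside an explicit transaction.
    ** The file "test.db" and table "t1" are illustrative only. */
    #include <sqlite3.h>

    int update_with_busy_handling(void){
      sqlite3 *db;
      int rc;
      if( sqlite3_open("test.db", &db)!=SQLITE_OK ) return 1;

      sqlite3_exec(db, "BEGIN", 0, 0, 0);
      rc = sqlite3_exec(db, "UPDATE t1 SET x=x+1", 0, 0, 0);
      if( rc==SQLITE_BUSY ){
        /* Another connection holds a conflicting lock.  Do not leave the
        ** transaction open: roll it back and retry later.  In versions
        ** 3.3.0 through 3.3.17, continuing without a rollback could
        ** corrupt the database file. */
        sqlite3_exec(db, "ROLLBACK", 0, 0, 0);
      }else if( rc==SQLITE_OK ){
        sqlite3_exec(db, "COMMIT", 0, 0, 0);
      }
      sqlite3_close(db);
      return rc;
    }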

Fixed in Version 3.3.9 (2007-01-04):

If a process is writing to the database file but exits abruptly without finishing its write (perhaps because of a power failure or an OS crash), it leaves behind a "hot journal". Subsequent processes that try to read the database will see this hot journal and use it to restore the database to a consistent state.

If two processes see the hot journal at the same instant, they will both race to roll back the hot journal. Only one of these two processes will succeed. The other will back off. Unfortunately, the process that backs off might end up with an inconsistent cache in versions of SQLite prior to 3.3.9. If that process later goes on to make additional changes to the database, it might cause database corruption. The chances of corruption are increased if auto_vacuum is enabled.

The race condition in which two processes are both trying to roll back the same hot journal is very tight and is exceedingly unlikely to occur on single-processor systems. And, the problem only occurs if there are hot journals that need to be rolled back. So we consider this problem to be obscure. Nevertheless, we recommend that all users upgrade to SQLite version 3.3.9 or later just to be safe.

Fixed in Version 3.2.8 and 2.8.17 (2005-12-19):

If a large INSERT or UPDATE makes many changes to the database (more than will fit in the in-memory cache) and then fails due to a uniqueness constraint, and that INSERT or UPDATE was part of a larger transaction enclosed within BEGIN...COMMIT that later commits successfully, the database might be corrupted.
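
The failure pattern can be sketched with the C API as follows. The table name and statements are hypothetical; rolling back on a constraint failure avoids the issue on every version.

    /* Sketch of the pattern described above: a large UPDATE aborts on a
    ** uniqueness constraint inside an enclosing transaction that later
    ** commits.  The schema and statements are illustrative only. */
    #include <sqlite3.h>

    void risky_pattern(sqlite3 *db){
      sqlite3_exec(db, "BEGIN", 0, 0, 0);

      /* Assume this UPDATE touches more pages than fit in the in-memory
      ** cache and then fails with a uniqueness constraint violation
      ** partway through. */
      int rc = sqlite3_exec(db, "UPDATE big SET key=key+1", 0, 0, 0);

      /* Committing the enclosing transaction anyway is the pattern that
      ** could corrupt the database before 3.2.8 and 2.8.17. */
      if( rc==SQLITE_CONSTRAINT ){
        sqlite3_exec(db, "ROLLBACK", 0, 0, 0);
      }else{
        sqlite3_exec(db, "COMMIT", 0, 0, 0);
      }
    }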

Fixed in Version 3.2.6 (2005-09-17):

If you have a database larger than 1GiB and you try to VACUUM that database but the VACUUM fails (perhaps because it ran out of temporary disk space) and has to roll back, the rollback can cause database corruption.

Fixed in Version 3.1.6 (2005-03-16):

Database corruption might occur when inserting records into a table that has approximately 125 columns. Inserts always work when the table has more columns than this. If some of the fields being inserted are large strings or BLOBs, then corruption might occur with fewer columns. For example, if two fields are large strings with thousands of characters, then the insert might cause corruption on a table with 123 columns instead of 125.

This problem is exceedingly unlikely to appear in a well-normalized schema where tables have no more than a dozen or so columns.

Fixed in Version 3.1.4 (2005-03-10):

If autovacuum is turned on and you attempt a CREATE UNIQUE INDEX, but the index creation fails because some preexisting data is not unique, the database can be left in an inconsistent state. The problem only occurred if autovacuum was enabled; autovacuum was a relatively new feature at the time.
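
The scenario can be sketched as follows; all names are hypothetical, and the PRAGMA must be issued on a fresh database before any tables exist for auto_vacuum to take effect.

    /* Sketch of the pattern: CREATE UNIQUE INDEX failing on duplicate
    ** data while auto_vacuum is enabled.  All names are illustrative. */
    #include <sqlite3.h>

    void unique_index_failure(sqlite3 *db){
      /* auto_vacuum must be set before the first table is created. */
      sqlite3_exec(db, "PRAGMA auto_vacuum=1", 0, 0, 0);
      sqlite3_exec(db, "CREATE TABLE t(a)", 0, 0, 0);
      sqlite3_exec(db, "INSERT INTO t VALUES(1)", 0, 0, 0);
      sqlite3_exec(db, "INSERT INTO t VALUES(1)", 0, 0, 0);

      /* This fails because the preexisting data is not unique.  Before
      ** 3.1.4 the failed index creation could leave an auto_vacuum
      ** database in an inconsistent state. */
      sqlite3_exec(db, "CREATE UNIQUE INDEX i1 ON t(a)", 0, 0, 0);
    }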

Fixed in Version 3.1.2 and 2.8.16 (2005-02-15):

If the same database is opened twice (for example by two different programs) and one program does a VACUUM and the other program makes some simple change to the database, the second program might not realize that the VACUUM had occurred and reorganized the database and might therefore cause database corruption.
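
A sketch of the pattern, with hypothetical file and table names, is shown below; on 3.1.2, 2.8.16, and later the second connection notices the reorganization and proceeds safely.

    /* Sketch of the pattern: two independent connections to the same
    ** database file, one running VACUUM and the other then writing.
    ** The file name and table are illustrative only. */
    #include <sqlite3.h>

    void vacuum_then_write(void){
      sqlite3 *a, *b;
      sqlite3_open("shared.db", &a);
      sqlite3_open("shared.db", &b);

      /* Connection "a" reorganizes the entire file. */
      sqlite3_exec(a, "VACUUM", 0, 0, 0);

      /* Connection "b" then makes a simple change.  Before 3.1.2 and
      ** 2.8.16 it might not notice that the file had been reorganized
      ** and could corrupt it. */
      sqlite3_exec(b, "INSERT INTO t VALUES(1)", 0, 0, 0);

      sqlite3_close(a);
      sqlite3_close(b);
    }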

Fixed in Version 3.0.4 (2004-08-08):

This bug involves a single database connection (the same thread in the same process) trying to do two things at once. If the connection is in the middle of a query against an indexed table and, at the same time, deletes a row from that table, and if the query does not use the index, then the deletion will remove the row from the index but not from the main table. This results in an index that is missing an entry. The problem can be fixed by dropping and recreating the index.
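
The pattern and the repair can be sketched as follows; the table, column, and index names are hypothetical.

    /* Sketch of the pattern: deleting rows from a table while a query
    ** against the same table is still running on the same connection,
    ** followed by the repair for an index that is missing entries.
    ** Table, column, and index names are illustrative only. */
    #include <sqlite3.h>

    void delete_during_query(sqlite3 *db){
      sqlite3_stmt *q;
      sqlite3_prepare_v2(db, "SELECT rowid FROM t WHERE b>10", -1, &q, 0);
      while( sqlite3_step(q)==SQLITE_ROW ){
        /* Deleting from the same table while the query is stepping is
        ** the pattern that, before 3.0.4, could remove the row from the
        ** index but not from the table when the query did not use the
        ** index. */
        char *sql = sqlite3_mprintf("DELETE FROM t WHERE rowid=%lld",
                                    sqlite3_column_int64(q, 0));
        sqlite3_exec(db, sql, 0, 0, 0);
        sqlite3_free(sql);
      }
      sqlite3_finalize(q);

      /* Repair for an index that is missing entries: drop and recreate
      ** it (REINDEX rebuilds it from the table contents as well). */
      sqlite3_exec(db, "DROP INDEX i1", 0, 0, 0);
      sqlite3_exec(db, "CREATE INDEX i1 ON t(a)", 0, 0, 0);
    }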