Database corruption is usually caused by things other than SQLite bugs - things over which SQLite has no control. For other sources of corruption in SQLite database files, see http://www.sqlite.org/lockingv3.html#how_to_corrupt and ticket #2224.
Fixed in Version 3.5.4 (2007-12-14):
Any DELETE or UPDATE operation that uses side-effects to delete additional rows of the same table that is the subject of the DELETE or UPDATE might cause database corruption. The issue was first identified in ticket #2832, but it is much older and affects all versions of SQLite at least back through version 3.3.13 (the earliest version we have checked).
A "delete side-effect" in the previous paragraph means a deletion that occurs as a result of an OR REPLACE clause or due to a trigger. For example:
CREATE TABLE ex1(a INTEGER PRIMARY KEY, b);
INSERT INTO ex1 VALUES(1,2);
INSERT INTO ex1 VALUES(2,3);
CREATE TRIGGER ex1_tr1 AFTER UPDATE ON ex1 BEGIN
  DELETE FROM ex1 WHERE a=old.b;
END;
UPDATE ex1 SET b=b+1;
In the example above, the first cycle of the UPDATE causes the trigger to fire and delete the second row of the ex1 table. When the second cycle of the UPDATE loop runs, it attempts to process that second row. SQLite recognizes that the row has already been deleted and aborts the second cycle, but it was failing to clean up after itself properly, which could lead to database corruption on subsequent cycles of the loop.
The simple example above would not cause database corruption. In fact, the usual result of this bug is a segmentation fault. Getting the system to corrupt the database rather than segfault is very difficult. Nevertheless, it is possible, as ticket #2832 demonstrated.
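The OR REPLACE case can be illustrated the same way. In this hypothetical variant (the ex2 table is ours, not taken from the ticket), the conflict resolution deletes a preexisting row of the same table that the UPDATE is traversing:

CREATE TABLE ex2(a INTEGER PRIMARY KEY, b);
INSERT INTO ex2 VALUES(1,10);
INSERT INTO ex2 VALUES(2,20);
-- The first cycle rewrites the row a=1 as a=2, which collides with the
-- preexisting second row; OR REPLACE deletes that row as a side-effect.
UPDATE OR REPLACE ex2 SET a=a+1;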
Fixed in Version 3.5.1 (2007-10-04):
If an SQLITE_FULL error occurs within an explicit transaction and the transaction is not rolled back, the database file can be corrupted. See ticket #2686. This is essentially the same bug that was fixed in version 3.4.0 below except that this time it is the SQLITE_FULL error instead of SQLITE_BUSY.
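The workaround on affected versions is to roll back as soon as the error is seen instead of continuing with the transaction. A minimal sketch of the pattern (the table and values are hypothetical; the same advice applies to the SQLITE_BUSY variant described under version 3.4.0 below):

BEGIN;
INSERT INTO t1 VALUES(1);  -- suppose this statement fails with SQLITE_FULL
ROLLBACK;                  -- roll back at once; do not run more statements or COMMIT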
Fixed in Version 3.4.2 (2007-08-13):
Setting the soft heap limit to an impractically small value that causes the pager to continuously spill its cache as each page is unpinned, then creating a new table in auto-vacuum mode and doing a ROLLBACK, can result in database corruption. See ticket #2565. This bug is so unbelievably obscure and requires such a bizarre database configuration that we do not believe it could ever occur in the wild. We discovered this problem during stress testing of the soft heap limit mechanism.
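For reference, the configuration in question looks roughly like the sketch below. (At the time the limit could only be set through the sqlite3_soft_heap_limit() C call; PRAGMA soft_heap_limit, used here for brevity, is a later addition, and the table name is hypothetical.)

PRAGMA auto_vacuum=1;         -- must be set before the database is populated
PRAGMA soft_heap_limit=1000;  -- impractically small: forces constant cache spills
BEGIN;
CREATE TABLE t2(x);
ROLLBACK;                     -- on affected versions, this could corrupt the database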
Fixed in Version 3.4.1 (2007-07-20):
A feature was introduced in version 3.3.14 that avoids flushing the database page cache if nothing in the database has changed. This feature depends on a change counter in the header of the database file being incremented whenever the file is modified.
Unfortunately, a bug caused the VACUUM command to reset the change counter back to zero rather than increment it. So if two connections A and B both have the database open and cached, the change counter happens to be at zero, and one connection changes the database and then vacuums, the other connection will not see the change. It will keep using its obsolete cache, which might lead to corruption.
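Written out as a hypothetical timeline (the table is illustrative; comments mark which connection issues each statement):

-- Both connections open the database while the change counter is zero.
-- connection A:
UPDATE t1 SET x=x+1;   -- the change counter increments from 0 to 1
VACUUM;                -- bug: the counter is reset to 0 instead of incremented
-- connection B, still holding its old cache:
SELECT x FROM t1;      -- the counter still reads 0, so B trusts its stale
                       -- cache; a later write by B could corrupt the file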
This is an unlikely scenario. We have never actually observed it to occur in the wild. We discovered the problem during a code analysis prompted by ticket #2518.
Fixed in Version 3.4.0 (2007-06-15):
Two unrelated bugs, both of which could cause database corruption, were fixed in version 3.4.0:
In versions 3.3.0 through 3.3.17, if an SQLITE_BUSY error occurs within an explicit transaction and that transaction is not rolled back, the database file can be corrupted. See CorruptionFollowingBusyError for additional information.
The second problem (ticket #2418) has existed since version 3.1.0, but the conditions needed to trigger it are difficult to state concisely. The problem only arises when auto-vacuum is enabled and a malloc() failure and rollback occur following a CREATE TABLE or CREATE INDEX statement that appears in the same transaction as a large UPDATE statement that overflows the pager cache. The way to avoid hitting this bug is to not do CREATE TABLE or CREATE INDEX within a transaction, especially a transaction that has previously run a large UPDATE.
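A hypothetical sketch of the risky shape versus the suggested workaround (table, column, and index names are illustrative):

-- Risky shape on affected versions when auto-vacuum is enabled:
BEGIN;
UPDATE big_tab SET payload=payload||payload;  -- large UPDATE that overflows the pager cache
CREATE INDEX big_idx ON big_tab(payload);     -- DDL later in the same transaction
ROLLBACK;  -- a malloc() failure that forces a rollback here could corrupt the database

-- Workaround: keep the DDL out of the transaction that does the big UPDATE.
BEGIN;
UPDATE big_tab SET payload=payload||payload;
COMMIT;
CREATE INDEX big_idx ON big_tab(payload);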
Fixed in Version 3.3.9 (2007-01-04):
If a process is writing to the database file but exits abruptly without finishing its write (perhaps because of a power failure or an OS crash) it leaves behind a "hot journal". Subsequent processes which try to read the database will see this hot journal and use it to restore the database to a consistent state.
If two processes see the hot journal at the same instant, they will both race to roll back the hot journal. Only one of the two processes will succeed. The other will back off. Unfortunately, in versions of SQLite prior to 3.3.9, the process that backs off might end up with an inconsistent cache. If that process later goes on to make additional changes to the database, it might cause database corruption. The chances of corruption are increased if auto_vacuum is enabled.
The race condition in which two processes both try to roll back the same hot journal is very tight and is exceedingly unlikely to occur on single-processor systems. And the problem only occurs if there are hot journals that need to be rolled back. So we consider this problem to be obscure. Nevertheless, we recommend that all users upgrade to SQLite version 3.3.9 or later just to be safe.
Fixed in Versions 3.2.8 and 2.8.17 (2005-12-19):
If a large INSERT or UPDATE makes many changes to the database (more than will fit in the in-memory cache) and then fails due to a uniqueness constraint, and that INSERT or UPDATE was part of a larger transaction enclosed within BEGIN...COMMIT, and the larger transaction later commits successfully, the database might be corrupted.
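A hypothetical sketch of the failing shape (the tables and values are illustrative):

BEGIN;
UPDATE big_tab SET id=0;          -- large UPDATE that spills the in-memory cache
                                  -- and then fails with a UNIQUE constraint error
INSERT INTO log_tab VALUES('x');  -- the enclosing transaction carries on
COMMIT;                           -- on affected versions, this successful COMMIT
                                  -- could leave the database corrupted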
Fixed in Version 3.2.6 (2005-09-17):
If you have a database larger than 1GiB and you try to VACUUM that database, but the VACUUM fails (perhaps because it ran out of temporary disk space) and has to roll back, the rollback can cause database corruption.
Fixed in Version 3.1.6 (2005-03-16):
Database corruption might occur when inserting records into a table that has approximately 125 columns. Inserts always work when the table has more columns than this. If some of the fields being inserted are large strings or BLOBs, then corruption might occur with fewer columns. For example, if two of the fields are large strings with thousands of characters, then the insert might cause corruption on a table with 123 columns instead of 125.
This problem is exceedingly unlikely to appear in a well-normalized schema where tables have no more than a dozen or so columns.
Fixed in Version 3.1.4 (2005-03-10):
If autovacuum is turned on and you attempt a CREATE UNIQUE INDEX, but the index creation fails because some preexisting data is not unique, then the database can be left in an inconsistent state. The problem only occurred if autovacuum was enabled, and autovacuum was a relatively new feature at the time.
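The failing sequence is easy to state in SQL. A minimal sketch, assuming a fresh database (auto_vacuum must be set before the first table is created; the names are hypothetical):

PRAGMA auto_vacuum=1;
CREATE TABLE t3(x);
INSERT INTO t3 VALUES(1);
INSERT INTO t3 VALUES(1);        -- preexisting data is not unique
CREATE UNIQUE INDEX i3 ON t3(x); -- fails; on affected versions the database
                                 -- could be left in an inconsistent state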
Fixed in Versions 3.1.2 and 2.8.16 (2005-02-15):
If the same database is opened twice (for example, by two different programs) and one program does a VACUUM while the other then makes some simple change to the database, the second program might not realize that the VACUUM has occurred and reorganized the database, and might therefore cause database corruption.
Fixed in Version 3.0.4 (2004-08-08):
Suppose the same database connection (the same thread in the same process) tries to do two things at once: it is in the middle of a query against an indexed table, and at the same time it tries to delete a row from that table. If the query does not use the index, the deletion will remove the row from the index but not from the main table. This results in an index that is missing an entry. The problem can be fixed by dropping and recreating the index.
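If an index has been damaged this way, it can be rebuilt from the table, which remains intact. A sketch with hypothetical names:

DROP INDEX ix1;
CREATE INDEX ix1 ON t4(col);
-- or, where available, rebuild the index in place instead:
-- REINDEX ix1;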