Small. Fast. Reliable.
Choose any three.
The default compilation uses -O6, which doesn't exist with gcc (it amounts to -O0); using -O3 or -Os improves the performance of SQLite.

Michel Weinachter 2004-10-6

This is wrong; gcc -O6 is equivalent to -O3 [1]. --Tomasz Wegrzanowski


Benchmarks already run show that SQLite has outstanding performance, even compared with MySQL, which has been the consistent speed demon of choice for web applications. However, these benchmarks have the caveat that they were run on single connections to the databases in question and no attempt was made to see how well SQLite or the other RDBMSs handled concurrent connections. This is a big deal with web apps. From experience I can testify that MySQL already performs very well with hundreds of concurrent connections. Does anyone have experience with SQLite in this regard?

--Carl Youngblood

SQLite is the speed demon of choice for systems that don't make heavy use of concurrent connections and complicated database features. If you need a lot of concurrent connections, SQLite probably isn't the best choice. But if you need a lot of speed and a simple setup for a single connection, SQLite is the best thing I've seen.

-- dec

So do concurrent reads suffer somehow, or is it just waiting for writes? -- rb

Just writes. -- drh


Transactions and performance

When doing lots of updates or inserts on a table, it is a good idea to contain them within a transaction:

  begin;
  insert into tbl values (...);
  insert into tbl values (...);
  insert into tbl values (...);
  ....
  commit;

This will make SQLite write all the data to the disk in one go, vastly increasing performance.

However, if you are writing to a temporary table, transactions have less effect, because writes to a temporary table are not flushed to disk after each statement anyway.
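For example, a minimal sketch of what writing to a temporary table looks like (the table name temp_results is only for illustration):

  create temporary table temp_results (x);
  insert into temp_results values (...);
  insert into temp_results values (...);
  ....

Because the temporary table lives in a separate temporary file that is not synced after each statement, wrapping these inserts in begin/commit still helps, just not by as large a factor as for an ordinary table.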

I did a timed test with inserting 1000 records into a table in various ways to compare performance:

So, performance is still vastly quicker with a transaction when writing to a temporary table. However, transactions have the drawback of locking the entire database file for the duration of the transaction, even though only a temporary file is being written to. In multithreaded applications it may therefore be worth putting up with the lower performance to avoid this database locking behaviour.

--Paul Smith


D. Richard Hipp on 2003-09-05:

I tried a similar experiment on RedHat 7.3 using SQLite 2.8.6 and got numbers like this:

So writing to the main database is about 100x slower without a transaction. Writing to a temporary table is about 4x slower. I had expected writing to a temporary table to be the same speed regardless of whether or not it is within a transaction. The 4x slowdown is puzzling. I will be looking into this....


E. Russel Harvey on 2004-09-25: Does the file locking caused by a transaction prevent not only writing but also reading by other connections, which may be from a thread of the same process in which SQLite is running?


Database size and performance

Charles Thayer on 2005-02-17:

I tried some small tests to see if sqlite3 would do a decent job for a project. (files attached as test-query.sh and test-insert.sh)

I found that inserts into a single table with a few indices scaled well. At first I was able to do 72 inserts/sec, and that number went down linearly for a while but then stopped degrading. After 8 million rows (1.6GB), I was still inserting at about 34 rows/sec.
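A minimal sketch of the kind of table this refers to (the column names and indices here are made up; the real statements are in the attached scripts):

  create table sometable (
    id integer primary key,
    somecolumn text,
    othercolumn text
  );
  create index sometable_idx1 on sometable(somecolumn);
  create index sometable_idx2 on sometable(othercolumn);

  insert into sometable (somecolumn, othercolumn) values ('a', 'b');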

With the same schema, I did a select test which does a "select count(*)" and then "select count(*),somecolumn from sometable group by somecolumn". (There was an index on somecolumn). In this case I found that the queries took around 2 seconds per 100,000 rows, and scaled linearly with db size.
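Written out against the hypothetical schema sketched above, those two queries are:

  select count(*) from sometable;
  select count(*), somecolumn from sometable group by somecolumn;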

Environment:

Comment by Oliver Leu: Please post your shell scripts. Are you creating a process for every record you insert into the db? This would slow down the process of inserting records extremely.


Database size and performance on a CD

Vis Naicker on 2007-05-104:

I struggled to get good performance out of a CD with 200K files. It reads the blobs pretty well, but for the same recordset where I kept the data separate from the blobs it was extremely slow, taking a minute or more until the CD was cached by Windows during the query. Caching to the hard drive before querying, on the other hand, was pretty good.

It is quite embarrassing at the client's end, where sometimes one database seems to hang the machine while another takes 3 seconds or so. I think my solution will have to be to merge the blobs with the field data; there the performance is admirable - it performs as well as the same database stored on a hard disk or the network. One hint I have used is to change the cache from the default 2K to 16K.
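A minimal sketch of that cache hint, assuming it refers to the cache_size pragma (which is measured in pages, with a default of 2000):

  PRAGMA cache_size = 16000;  -- assumed reading of "16K from the default 2K"

Whether 16K was meant as 16000 or 16384 pages is not stated above.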