Small. Fast. Reliable.
Choose any three.
The script used to run these tests is a modified version of the one used here. My modified version is attached to this page. Please note that I'm not a TCL programmer and my mods are just poor hacks that were needed to make the tests run on all databases. If you see any errors in there that would influence the outcome of the tests, please post your findings here.

All databases were installed with default settings. In the case of SQLite, I used the binaries available from this site.

Tests were run on a 1.6GHz Sempron with 1GB of RAM and a 7200rpm SATA disk, running Windows 2000 + SP4 with all updates applied.

The original test script was changed to allow slight modifications of the generated SQL so that all db engines could understand it. For MySQL I had to change the CREATE TABLE statements to be able to use the InnoDB and MyISAM storage engines, and to turn off autocommit mode for the InnoDB engine (even though the docs claim that BEGIN should do that automatically, that doesn't seem to be the case). For Firebird I had to change BEGIN to SET TRANSACTION.
Other than that, all databases get exactly the same SQL to execute.
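For illustration, the per-engine tweaks described above look roughly like this (syntax recalled from the respective manuals, so treat it as a sketch rather than exact script output):

```sql
-- MySQL: pick the storage engine in CREATE TABLE, and turn autocommit
-- off explicitly for InnoDB (BEGIN alone did not seem to do it):
CREATE TABLE t1(a INTEGER, b INTEGER, c VARCHAR(100)) ENGINE=InnoDB;
SET autocommit=0;

-- Firebird: transactions start with SET TRANSACTION instead of BEGIN:
SET TRANSACTION;
INSERT INTO t1 VALUES(1,13153,'thirteen thousand one hundred fifty three');
COMMIT;
```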

All SELECTs are run 3 times in a row and the average time is displayed in the results.
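A minimal sketch of that repeat-and-average measurement (in Python with its built-in sqlite3 module, purely for illustration; the real harness is the attached TCL script):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t2(a INTEGER, b INTEGER, c VARCHAR(100))")
conn.executemany("INSERT INTO t2 VALUES(?,?,?)",
                 [(i, (i * 7919) % 100000, "row %d" % i) for i in range(1000)])

def timed_avg(sql, runs=3):
    """Run the statement `runs` times and return the average wall-clock time."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        conn.execute(sql).fetchall()   # fetch so the query actually runs
        total += time.perf_counter() - start
    return total / runs

avg = timed_avg("SELECT count(*), avg(b) FROM t2 WHERE b>=0 AND b<1000")
```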

I don't know how to flush disk buffers on Windows. The script used the sync command available on Linux to do that. If you know how to do something similar on Windows, please post here. You might check out NTSync from here. On Windows you could use Sysinternals sync.
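As far as I know there is no portable whole-disk sync call, but a harness can at least flush its own files; in Python, for example, os.fsync() maps to FlushFileBuffers() on Windows and fsync() on Unix (a sketch of the idea, not the TCL script's actual mechanism):

```python
import os
import tempfile

# Write some data and push it through the OS write cache to the disk.
path = os.path.join(tempfile.mkdtemp(), "test.db")
with open(path, "wb") as f:
    f.write(b"some database pages")
    f.flush()             # empty Python's userspace buffer
    os.fsync(f.fileno())  # ask the OS to flush this file's buffers to disk
```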

If you want an explanation next to the numbers, I'd recommend reading through the original tests. Most of what's said there applies here too.

I suspect that what the servers are doing behind the scenes in test 6 is something like this:

  CREATE INDEX t1_b ON t1(b);
  CREATE INDEX t2_b ON t2(b);
  SELECT t1.a FROM t1 INNER JOIN t2 ON t1.b=t2.b;
  DROP INDEX t1_b;
  DROP INDEX t2_b;

The above queries, when run on the same db as the query from test 6, completed in 0.5 seconds instead of 14+ seconds. It was actually even a tiny bit faster than the time from test 7, but this could be attributed to varying runtime conditions.
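The rewrite returns exactly the same rows as the plain join, which is easy to verify with SQLite itself (table sizes here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1(a INTEGER, b INTEGER, c VARCHAR(100))")
conn.execute("CREATE TABLE t2(a INTEGER, b INTEGER, c VARCHAR(100))")
conn.executemany("INSERT INTO t1 VALUES(?,?,?)",
                 [(i, i % 50, "x") for i in range(200)])
conn.executemany("INSERT INTO t2 VALUES(?,?,?)",
                 [(i, i % 50, "y") for i in range(200)])

join_sql = "SELECT t1.a FROM t1 INNER JOIN t2 ON t1.b=t2.b"
plain = sorted(conn.execute(join_sql).fetchall())

# The suspected server-side strategy: index, join, drop the indexes.
conn.execute("CREATE INDEX t1_b ON t1(b)")
conn.execute("CREATE INDEX t2_b ON t2(b)")
indexed = sorted(conn.execute(join_sql).fetchall())
conn.execute("DROP INDEX t1_b")
conn.execute("DROP INDEX t2_b")

assert plain == indexed  # same result set either way
```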

I have no idea what's wrong with Postgres in test 8. If you do, and you think the SQL being fed to it makes that particular test unfair to Postgres, please speak up.

A Speak UP: Postgres is designed to run on a shoebox in its default install. It will always perform badly when used in the default configuration for test suites like this. Postgres only has what you call sync mode, to ensure ACID compliance. Also, Postgres decides how to execute a query based on the data stored in its stats database, and that data is only updated by running the ANALYZE command. Other databases may track those statistics differently. In the non-transactional insert test, it is difficult to see how a 7200 RPM drive can do 1000 I/O operations in 0.7 seconds. The sync should force each transaction to disk, and here each INSERT is its own transaction.

Reply: The conf file for Postgres has been tuned a bit since the tests were run. I'm attaching conf files for all databases that have one, so anyone can review them and suggest changes to make all databases use approximately the same hardware resources and behave the same in terms of ACID compliance for the sync tests. Once some input on all of the databases has been gathered, I'll rerun the tests.
Is it common for Postgres in real-world scenarios to run ANALYZE before each query? If so, I'll update the test script to run ANALYZE before each test.
Test 1 is very strange, I agree. A hypothetical perfect database would take more than 8 seconds to complete that test if it used only one fsync per transaction.
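That 8-second floor follows from rotational latency, assuming each synchronous commit has to wait roughly one full revolution of the platter; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the "more than 8 seconds" claim above.
# Assumption: each fsync waits roughly one full disk revolution.
rpm = 7200
revolutions_per_second = rpm / 60.0   # 120 revolutions per second
transactions = 1000                   # test 1: one fsync per INSERT
minimum_seconds = transactions / revolutions_per_second
print(minimum_seconds)                # roughly 8.3 seconds
```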

Another speak UP :-) : Analyze tables after bulk inserts, or after a few percent or more of the rows have been updated, etc. The stats generated will allow the planner to choose the most effective query plan for SELECTs. A classic example is a bulk-loaded table with a few million records; if never previously analyzed, the planner might think there are only 100 rows, and thus choose a sequential scan of the table over the index scan that might be far more appropriate given the actual table size. Also be aware that PostgreSQL shines under high levels of concurrent load. Consider expanding the tests so that you have tens of connections both reading and writing data in multiple transactions simultaneously, then expand that to hundreds or thousands of connections on more hefty hardware.
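The same analyze-after-bulk-load pattern can be demonstrated with SQLite, which also has an ANALYZE command feeding its planner (shown purely as illustration; this is not part of the benchmark):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big(a INTEGER, b INTEGER)")
conn.execute("CREATE INDEX big_b ON big(b)")
conn.executemany("INSERT INTO big VALUES(?,?)",
                 [(i, i) for i in range(10000)])

# Refresh the planner statistics right after the bulk load.
conn.execute("ANALYZE")

# SQLite records the gathered statistics in the sqlite_stat1 table.
stats = conn.execute("SELECT tbl, idx, stat FROM sqlite_stat1").fetchall()
```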

Reply: As for ANALYZE, I'll see what I can do to add it to the test script after significant table changes. That should be OK, I guess. I plan to include the time cost of ANALYZE as part of those tests for Postgres. I consider this to be fair since no other db gets any special treatment like this, and AFAICT ANALYZE is very fast.
No doubt Postgres would shine in high-concurrency tests, but you're unlikely to be interested in SQLite if you need hundreds of simultaneous writers/readers. So, for now, I have no intention of running such tests.

A few notes about the results:

Comment: You can turn off syncing (i.e. nosync) in PostgreSQL by setting 'fsync = false' in postgresql.conf. This isn't a setting to be enabled lightly, of course, but it might make for a more sensible comparison to the other databases where you're turning sync on/off.

Additionally, it is reasonably common to run ANALYZE frequently on tables which are changing a lot. PostgreSQL in general thinks of a table as 'in use': rows are inserted, updated, and deleted during the lifetime of the table while the general 'running size' and statistics stay reasonably constant. The upshot is that you might not run ANALYZE before every query, because the overall statistics don't change that much, but you might run 'vacuum analyze' nightly to mark old rows as replaceable and update the statistics. For a test like this it does make sense to analyze after changes to the tables are made. If tables are reused (and rows are updated or deleted), then vacuum and/or vacuum full should be run.

Lack of running ANALYZE is probably the problem with Test 8: Postgres isn't using the index because it's going off the 'default' statistics, which indicate there are only 100 rows or some such, and in a case like that a sequential scan is faster. An additional nicety might be to re-run the Postgres tests with 'explain analyze' for each query and provide that output as well, to verify the query plan being used.
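For reference, the nosync setting described above is a one-line change in postgresql.conf (parameter name as given in the comment; worth double-checking against the 8.1 docs):

```
# postgresql.conf fragment -- "nosync" mode; trades crash safety for speed
fsync = false
```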

Reply: The nosync tests were run simply to give better insight to SQLite developers and users. MySQL was thrown into the mix there because, AFAIK, it's a very common setup for MySQL to be run like that. Other than that, I'm mostly interested in sync performance, since that's the most common mode of operation. Except maybe for MySQL; I'm not sure if this has changed in the last 3 years or so.

Comment: I pulled down your postgresql.conf and noticed you turned row-level statistics on. This will have a negative impact on all tests which have a lot of queries. Why did you do this?
On another note, I would suggest running prepared-statement versions of some of the tests on the databases that support them.
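To illustrate what the prepared-statement suggestion means in practice, here is a hypothetical sketch using Python's sqlite3 module, where '?' placeholders let one compiled statement be executed many times; server databases expose the same idea through their own PREPARE/EXECUTE APIs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1(a INTEGER, b INTEGER, c VARCHAR(100))")

# One parse/plan, many executions, instead of re-parsing 1000 INSERT strings.
rows = [(i, (i * 31) % 100000, "value %d" % i) for i in range(1, 1001)]
conn.executemany("INSERT INTO t1 VALUES(?,?,?)", rows)

count = conn.execute("SELECT count(*) FROM t1").fetchone()[0]
```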

Reply: I don't think I changed it. It probably installed like that by default. The only changes I made are the ones suggested on the sqlite mailing list. Here they are for reference:

  shared_buffers=10000
  effective_cache_size=100000
  work_mem=10000
  vacuum_cost_delay=50
  autovacuum=on
  autovacuum_vacuum_scale_factor=0.2

Comment on Reply: The default installation of Postgres 8.x definitely (I just checked) has

  #stats_row_level = false

in its default config, so you should disable the row-stats collector and rerun the tests.

On to the tests now.

Test 1: 1000 INSERTs

CREATE TABLE t1(a INTEGER, b INTEGER, c VARCHAR(100));
INSERT INTO t1 VALUES(1,13153,'thirteen thousand one hundred fifty three');
INSERT INTO t1 VALUES(2,75560,'seventy five thousand five hundred sixty');
... 995 lines omitted
INSERT INTO t1 VALUES(998,66289,'sixty six thousand two hundred eighty nine');
INSERT INTO t1 VALUES(999,24322,'twenty four thousand three hundred twenty two');
INSERT INTO t1 VALUES(1000,94142,'ninety four thousand one hundred forty two');
SQLite 3.3.3 (sync):   3.823
SQLite 3.3.3 (nosync):   1.668
SQLite 2.8.17 (sync):   4.245
SQLite 2.8.17 (nosync):   1.743
PostgreSQL 8.1.2:   4.922
MySQL 5.0.18 (sync):   2.647
MySQL 5.0.18 (nosync):   0.329
FirebirdSQL 1.5.2:   0.320

Test 2: 25000 INSERTs in a transaction

BEGIN;
CREATE TABLE t2(a INTEGER, b INTEGER, c VARCHAR(100));
INSERT INTO t2 VALUES(1,298361,'two hundred ninety eight thousand three hundred sixty one');
... 24997 lines omitted
INSERT INTO t2 VALUES(24999,447847,'four hundred forty seven thousand eight hundred forty seven');
INSERT INTO t2 VALUES(25000,473330,'four hundred seventy three thousand three hundred thirty');
COMMIT;
SQLite 3.3.3 (sync):   0.764
SQLite 3.3.3 (nosync):   0.748
SQLite 2.8.17 (sync):   0.698
SQLite 2.8.17 (nosync):   0.663
PostgreSQL 8.1.2:   16.454
MySQL 5.0.18 (sync):   7.833
MySQL 5.0.18 (nosync):   7.038
FirebirdSQL 1.5.2:   4.280

Test 3: 25000 INSERTs into an indexed table

BEGIN;
CREATE TABLE t3(a INTEGER, b INTEGER, c VARCHAR(100));
CREATE INDEX i3 ON t3(c);
... 24998 lines omitted
INSERT INTO t3 VALUES(24999,442549,'four hundred forty two thousand five hundred forty nine');
INSERT INTO t3 VALUES(25000,423958,'four hundred twenty three thousand nine hundred fifty eight');
COMMIT;
SQLite 3.3.3 (sync):   1.778
SQLite 3.3.3 (nosync):   1.832
SQLite 2.8.17 (sync):   1.526
SQLite 2.8.17 (nosync):   1.364
PostgreSQL 8.1.2:   19.236
MySQL 5.0.18 (sync):   11.524
MySQL 5.0.18 (nosync):   12.427
FirebirdSQL 1.5.2:   6.351

Comment: Unfortunately SQLite seems to have a scalability problem with indices, causing this speed advantage to be present only for small tables: on my system SQLite is about 2 times faster than MySQL on 25000 inserts into an indexed table. On the same test with 250000 inserts, SQLite is about 3 times slower. With 500000 inserts, SQLite is about 6 times slower.

Test 4: 100 SELECTs without an index

SELECT count(*), avg(b) FROM t2 WHERE b>=0 AND b<1000;
SELECT count(*), avg(b) FROM t2 WHERE b>=100 AND b<1100;
SELECT count(*), avg(b) FROM t2 WHERE b>=200 AND b<1200;
... 94 lines omitted
SELECT count(*), avg(b) FROM t2 WHERE b>=9700 AND b<10700;
SELECT count(*), avg(b) FROM t2 WHERE b>=9800 AND b<10800;
SELECT count(*), avg(b) FROM t2 WHERE b>=9900 AND b<10900;
SQLite 3.3.3 (sync):   3.153
SQLite 3.3.3 (nosync):   3.088
SQLite 2.8.17 (sync):   3.993
SQLite 2.8.17 (nosync):   3.983
PostgreSQL 8.1.2:   5.740
MySQL 5.0.18 (sync):   2.718
MySQL 5.0.18 (nosync):   1.641
FirebirdSQL 1.5.2:   2.976

Test 5: 100 SELECTs on a string comparison

SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%one%';
SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%two%';
SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%three%';
... 94 lines omitted
SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%ninety eight%';
SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%ninety nine%';
SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%one hundred%';
SQLite 3.3.3 (sync):   4.853
SQLite 3.3.3 (nosync):   4.868
SQLite 2.8.17 (sync):   4.511
SQLite 2.8.17 (nosync):   4.500
PostgreSQL 8.1.2:   6.565
MySQL 5.0.18 (sync):   3.424
MySQL 5.0.18 (nosync):   2.090
FirebirdSQL 1.5.2:   5.803

Test 6: INNER JOIN without an index

SELECT t1.a FROM t1 INNER JOIN t2 ON t1.b=t2.b;
SQLite 3.3.3 (sync):   14.473
SQLite 3.3.3 (nosync):   14.445
SQLite 2.8.17 (sync):   47.776
SQLite 2.8.17 (nosync):   47.750
PostgreSQL 8.1.2:   0.176
MySQL 5.0.18 (sync):   3.421
MySQL 5.0.18 (nosync):   3.443
FirebirdSQL 1.5.2:   0.141

Test 7: Creating an index

CREATE INDEX i2a ON t2(a);
CREATE INDEX i2b ON t2(b);
SQLite 3.3.3 (sync):   0.552
SQLite 3.3.3 (nosync):   0.526
SQLite 2.8.17 (sync):   0.650
SQLite 2.8.17 (nosync):   0.605
PostgreSQL 8.1.2:   0.276
MySQL 5.0.18 (sync):   1.159
MySQL 5.0.18 (nosync):   0.275
FirebirdSQL 1.5.2:   0.264

Test 8: 5000 SELECTs with an index

SELECT count(*), avg(b) FROM t2 WHERE b>=0 AND b<100;
SELECT count(*), avg(b) FROM t2 WHERE b>=100 AND b<200;
SELECT count(*), avg(b) FROM t2 WHERE b>=200 AND b<300;
... 4994 lines omitted
SELECT count(*), avg(b) FROM t2 WHERE b>=499700 AND b<499800;
SELECT count(*), avg(b) FROM t2 WHERE b>=499800 AND b<499900;
SELECT count(*), avg(b) FROM t2 WHERE b>=499900 AND b<500000;
SQLite 3.3.3 (sync):   1.872
SQLite 3.3.3 (nosync):   1.853
SQLite 2.8.17 (sync):   2.444
SQLite 2.8.17 (nosync):   2.478
PostgreSQL 8.1.2:   199.823
MySQL 5.0.18 (sync):   3.763
MySQL 5.0.18 (nosync):   3.725
FirebirdSQL 1.5.2:   5.187
* Performance of PostgreSQL in this test is most probably heavily impacted by the psql command line utility. The same test, when run from the pgAdmin III GUI, completed in 5 seconds.

Test 9: 1000 UPDATEs without an index

BEGIN;
UPDATE t1 SET b=b*2 WHERE a>=0 AND a<10;
UPDATE t1 SET b=b*2 WHERE a>=10 AND a<20;
... 996 lines omitted
UPDATE t1 SET b=b*2 WHERE a>=9980 AND a<9990;
UPDATE t1 SET b=b*2 WHERE a>=9990 AND a<10000;
COMMIT;
SQLite 3.3.3 (sync):   0.562
SQLite 3.3.3 (nosync):   0.573
SQLite 2.8.17 (sync):   0.543
SQLite 2.8.17 (nosync):   0.532
PostgreSQL 8.1.2:   1.663
MySQL 5.0.18 (sync):   1.930
MySQL 5.0.18 (nosync):   4.656
FirebirdSQL 1.5.2:   1.804

Test 10: 25000 UPDATEs with an index

BEGIN;
UPDATE t2 SET b=271822 WHERE a=1;
UPDATE t2 SET b=28304 WHERE a=2;
... 24996 lines omitted
UPDATE t2 SET b=442549 WHERE a=24999;
UPDATE t2 SET b=423958 WHERE a=25000;
COMMIT;
SQLite 3.3.3 (sync):   1.883
SQLite 3.3.3 (nosync):   1.894
SQLite 2.8.17 (sync):   1.994
SQLite 2.8.17 (nosync):   1.973
PostgreSQL 8.1.2:   23.933
MySQL 5.0.18 (sync):   16.348
MySQL 5.0.18 (nosync):   17.383
FirebirdSQL 1.5.2:   15.542

Test 11: 25000 text UPDATEs with an index

BEGIN;
UPDATE t2 SET c='four hundred sixty eight thousand twenty six' WHERE a=1;
UPDATE t2 SET c='one hundred twenty one thousand nine hundred twenty eight' WHERE a=2;
... 24996 lines omitted
UPDATE t2 SET c='thirty five thousand sixty five' WHERE a=24999;
UPDATE t2 SET c='three hundred forty seven thousand three hundred ninety three' WHERE a=25000;
COMMIT;
SQLite 3.3.3 (sync):   1.386
SQLite 3.3.3 (nosync):   1.365
SQLite 2.8.17 (sync):   1.168
SQLite 2.8.17 (nosync):   1.121
PostgreSQL 8.1.2:   24.672
MySQL 5.0.18 (sync):   16.469
MySQL 5.0.18 (nosync):   15.491
FirebirdSQL 1.5.2:   21.583

Test 12: INSERTs from a SELECT

BEGIN;
INSERT INTO t1 SELECT * FROM t2;
INSERT INTO t2 SELECT * FROM t1;
COMMIT;
SQLite 3.3.3 (sync):   1.179
SQLite 3.3.3 (nosync):   1.116
SQLite 2.8.17 (sync):   1.864
SQLite 2.8.17 (nosync):   1.526
PostgreSQL 8.1.2:   1.091
MySQL 5.0.18 (sync):   0.986
MySQL 5.0.18 (nosync):   0.933
FirebirdSQL 1.5.2:   4.782

Test 13: INNER JOIN with index on one side

SELECT t1.a FROM t1 INNER JOIN t2 ON t1.b=t2.b;
SQLite 3.3.3 (sync):   0.371
SQLite 3.3.3 (nosync):   0.369
SQLite 2.8.17 (sync):   0.273
SQLite 2.8.17 (nosync):   0.275
PostgreSQL 8.1.2:   5.981
MySQL 5.0.18 (sync):   0.408
MySQL 5.0.18 (nosync):   0.603
FirebirdSQL 1.5.2:   1.099

Test 14: INNER JOIN on text field with index on one side

SELECT t1.a FROM t1 INNER JOIN t3 ON t1.c=t3.c;
SQLite 3.3.3 (sync):   0.383
SQLite 3.3.3 (nosync):   0.376
SQLite 2.8.17 (sync):   0.309
SQLite 2.8.17 (nosync):   0.291
PostgreSQL 8.1.2:   1.324
MySQL 5.0.18 (sync):   0.404
MySQL 5.0.18 (nosync):   0.558
FirebirdSQL 1.5.2:   0.454

Test 15: 100 SELECTs with subqueries. Subquery is using an index

SELECT t1.a FROM t1 WHERE t1.b IN (SELECT t2.b FROM t2 WHERE t2.b>=0 AND t2.b<1000);
SELECT t1.a FROM t1 WHERE t1.b IN (SELECT t2.b FROM t2 WHERE t2.b>=100 AND t2.b<1100);
SELECT t1.a FROM t1 WHERE t1.b IN (SELECT t2.b FROM t2 WHERE t2.b>=200 AND t2.b<1200);
... 94 lines omitted
SELECT t1.a FROM t1 WHERE t1.b IN (SELECT t2.b FROM t2 WHERE t2.b>=9700 AND t2.b<10700);
SELECT t1.a FROM t1 WHERE t1.b IN (SELECT t2.b FROM t2 WHERE t2.b>=9800 AND t2.b<10800);
SELECT t1.a FROM t1 WHERE t1.b IN (SELECT t2.b FROM t2 WHERE t2.b>=9900 AND t2.b<10900);
SQLite 3.3.3 (sync):   7.877
SQLite 3.3.3 (nosync):   8.040
SQLite 2.8.17 (sync):   4.387
SQLite 2.8.17 (nosync):   4.381
PostgreSQL 8.1.2:   6.245
MySQL 5.0.18 (sync):   16.891
MySQL 5.0.18 (nosync):   38.447
FirebirdSQL 1.5.2:   37.439

Test 16: DELETE without an index

DELETE FROM t2 WHERE c LIKE '%fifty%';
SQLite 3.3.3 (sync):   0.528
SQLite 3.3.3 (nosync):   0.429
SQLite 2.8.17 (sync):   1.228
SQLite 2.8.17 (nosync):   0.984
PostgreSQL 8.1.2:   0.336
MySQL 5.0.18 (sync):   0.394
MySQL 5.0.18 (nosync):   0.532
FirebirdSQL 1.5.2:   0.404

Test 17: DELETE with an index

DELETE FROM t2 WHERE a>10 AND a<20000;
SQLite 3.3.3 (sync):   0.866
SQLite 3.3.3 (nosync):   0.627
SQLite 2.8.17 (sync):   1.275
SQLite 2.8.17 (nosync):   0.817
PostgreSQL 8.1.2:   0.283
MySQL 5.0.18 (sync):   0.541
MySQL 5.0.18 (nosync):   1.336
FirebirdSQL 1.5.2:   5.033

Test 18: A big INSERT after a big DELETE

INSERT INTO t2 SELECT * FROM t1;
SQLite 3.3.3 (sync):   0.973
SQLite 3.3.3 (nosync):   0.865
SQLite 2.8.17 (sync):   1.680
SQLite 2.8.17 (nosync):   1.336
PostgreSQL 8.1.2:   0.727
MySQL 5.0.18 (sync):   0.762
MySQL 5.0.18 (nosync):   1.088
FirebirdSQL 1.5.2:   4.171

Test 19: A big DELETE followed by many small INSERTs

BEGIN;
DELETE FROM t1;
INSERT INTO t1 VALUES(1,29676,'twenty nine thousand six hundred seventy six');
... 2997 lines omitted
INSERT INTO t1 VALUES(2999,37835,'thirty seven thousand eight hundred thirty five');
INSERT INTO t1 VALUES(3000,97817,'ninety seven thousand eight hundred seventeen');
COMMIT;
SQLite 3.3.3 (sync):   0.155
SQLite 3.3.3 (nosync):   0.133
SQLite 2.8.17 (sync):   0.160
SQLite 2.8.17 (nosync):   0.255
PostgreSQL 8.1.2:   2.635
MySQL 5.0.18 (sync):   1.402
MySQL 5.0.18 (nosync):   1.133
FirebirdSQL 1.5.2:   0.667

Test 20: DROP TABLE

DROP TABLE t1;
DROP TABLE t2;
SQLite 3.3.3 (sync):   0.138
SQLite 3.3.3 (nosync):   0.392
SQLite 2.8.17 (sync):   0.188
SQLite 2.8.17 (nosync):   0.257
PostgreSQL 8.1.2:   0.229
MySQL 5.0.18 (sync):   0.125
MySQL 5.0.18 (nosync):   0.058
FirebirdSQL 1.5.2:   0.133

Attachments:

  • speedtest.tcl 9602 bytes added by anonymous on 2006-Feb-07 04:52:02 UTC.
    TCL script used to run the tests
  • my.ini 9249 bytes added by anonymous on 2006-Feb-12 02:54:22 UTC.
  • postgresql.conf 14045 bytes added by anonymous on 2006-Feb-12 02:54:35 UTC.
  • firebird.conf 19729 bytes added by anonymous on 2006-Feb-12 02:54:51 UTC.