Small. Fast. Reliable.
Choose any three.
A variation of this page is now an official part of the SQLite documentation. See whentouse.html

SQLite is different from most other SQL database engines in that its primary design goal is to be simple.

Many people like SQLite because it is small and fast. But those qualities are just happy accidents. Users also find that SQLite is very reliable. Reliability is a consequence of simplicity. With less complication, there is less to go wrong. So, yes, SQLite is small, fast, and reliable, but first and foremost, SQLite strives to be simple.

Simplicity in a database engine can be either a strength or a weakness, depending on what you are trying to do. In order to achieve simplicity, SQLite has had to sacrifice other characteristics that some people find useful, such as high concurrency, fine-grained access control, a rich set of built-in functions, stored procedures, esoteric SQL language features, XML and/or Java extensions, tera- or peta-byte scalability, and so forth. If you need these kinds of features and don't mind the added complexity that they bring, then SQLite is probably not the database for you. SQLite is not intended to be an enterprise database engine. It's not designed to compete with Oracle or PostgreSQL.

The basic rule of thumb for when it is appropriate to use SQLite is this: Use SQLite in situations where simplicity of administration, implementation, and maintenance are more important than the countless complex features that enterprise database engines provide. As it turns out, situations where simplicity is the better choice are more common than many people realize.

Situations Where SQLite Works Well

Situations Where Another RDBMS May Work Better


Note:
"Client/server" usually means that there are multiple client processes and at least one server process; if there is no server process, it is not truly a client/server setting. Typically, the server process accesses the database on behalf of all clients, so the clients have no reason to access SQLite directly, let alone over a network filesystem. Under that arrangement, SQLite works perfectly well for client/server applications.
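For illustration, here is a minimal sketch of that arrangement in Python, using only the standard library: a single server process owns the SQLite file and answers HTTP clients from it, so no client ever opens the database directly. The file name app.db, the customers table, and the /customers endpoint are invented for the example.

    # Sketch: one server process owns the SQLite file and queries it on
    # behalf of HTTP clients; clients never touch the database file.
    import json
    import sqlite3
    from http.server import BaseHTTPRequestHandler, HTTPServer

    DB_PATH = "app.db"  # local file, opened only by this server process

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path != "/customers":
                self.send_error(404)
                return
            # One short-lived connection per request keeps the sketch simple.
            with sqlite3.connect(DB_PATH) as conn:
                rows = conn.execute("SELECT id, name FROM customers").fetchall()
            body = json.dumps(rows).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Ensure the example table exists so the sketch runs end to end.
        with sqlite3.connect(DB_PATH) as conn:
            conn.execute(
                "CREATE TABLE IF NOT EXISTS customers "
                "(id INTEGER PRIMARY KEY, name TEXT)"
            )
        HTTPServer(("localhost", 8000), Handler).serve_forever()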

Note:
First, what is "high-volume"?

Second, scaling is always problematic, regardless of what technology you use.

The first scaling problems you hit are usually not related to the backend technology but to the application design itself. For example, sending the full customer list back to the web browser may work well for 50 customers; with 10,000 customers you will need to introduce at least some kind of <page 1 2 ...> and/or <previous/next> logic and limit the query to, say, 50 customers at a time. Sending too much data per request, and consuming too many server resources in doing so, is a very typical scaling problem, and it is totally unrelated to the backend technology. Conclusion: before you hit SQLite's limitations, you will hit a string of design issues that limit the scalability of your application well below the SQLite limits.
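As a concrete sketch of that paging logic (reusing the same hypothetical customers table), the query itself can be limited with LIMIT/OFFSET so that each request returns at most one page of rows:

    # Sketch: fetch customers one page (50 rows) at a time instead of all at once.
    import sqlite3

    PAGE_SIZE = 50

    def fetch_customer_page(conn: sqlite3.Connection, page: int):
        """Return 0-based page number `page` of customers, at most 50 rows."""
        return conn.execute(
            "SELECT id, name FROM customers ORDER BY id LIMIT ? OFFSET ?",
            (PAGE_SIZE, page * PAGE_SIZE),
        ).fetchall()

    conn = sqlite3.connect("app.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT)"
    )
    # First page for the initial view; later pages behind <previous/next>.
    first_page = fetch_customer_page(conn, 0)

For very large offsets, keyset paging (WHERE id > :last_seen ORDER BY id LIMIT 50) scales better, but the principle is the same: the application, not the database engine, decides how much data one request may pull.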



Note:
As if Oracle, SQL Server, or any other RDBMS didn't consume huge amounts of memory when managing large databases ... I seriously doubt that they limit themselves to 256 bytes per megabyte of data.
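To put that figure in perspective (taking the 256-bytes-per-megabyte claim at face value): it amounts to an overhead of roughly 0.025%, so even a 4 GB database would need only about 1 MB of such bookkeeping memory.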

Note:
How do other database servers tackle this problem?

Well, for starters, you will find that in Oracle or SQL Server only one thread or process writes to the filesystem at a time. So it's not that other database servers allow multiple concurrent writes to the filesystem.

How do these other database servers manage high concurrency?

They actually apply a relatively simple trick. All changes are made in memory (and appended to the journal) first; the thread that commits changes to the filesystem does so only much later, picking up the changes on its rounds across the dirty in-memory records and writing them out.

This is why, for example, you may get a counter-intuitive "out of memory" error on SQL Server when you delete all records from a large table: there is not enough space available to keep all those deleted records in memory until the writing thread can commit the deletes to disk.

This has several serious implications. First, records may never be served directly from the filesystem itself; they must be served from memory, where the latest versions of the records live. Second, there must be a strategy for evicting records from memory that are no longer, or not often enough, being accessed (to prevent this shared memory from growing without bounds). Third, all clients must forward their requests to this shared-memory custodian, which implies a separate process to manage the shared memory.
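The following is a deliberately simplified sketch of that write-back pattern, not a description of how any particular server implements it; the class name, the key/value records, and the journal/data file layout are all invented for illustration. Writers touch only memory plus an append-only journal, reads are answered from memory first, and a single background thread later sweeps the dirty records out to the data file.

    # Sketch: write-back custodian with a single background flusher thread.
    import threading
    import time

    class DirtyRecordCustodian:
        def __init__(self, journal_path: str, data_path: str):
            self.dirty = {}                      # key -> latest in-memory value
            self.lock = threading.Lock()
            self.journal = open(journal_path, "a")
            self.data_path = data_path
            threading.Thread(target=self._flush_loop, daemon=True).start()

        def write(self, key: str, value: str):
            # Fast path seen by clients: memory update + journal append only.
            with self.lock:
                self.dirty[key] = value
                self.journal.write(f"{key}={value}\n")
                self.journal.flush()

        def read(self, key: str):
            # Reads are served from memory first, where the newest versions live.
            with self.lock:
                if key in self.dirty:
                    return self.dirty[key]
            return None  # a real system would fall back to the data file here

        def _flush_loop(self):
            # The single writer: periodically picks up the dirty records and
            # commits them to the data file.
            while True:
                time.sleep(1.0)
                with self.lock:
                    batch, self.dirty = self.dirty, {}
                with open(self.data_path, "a") as f:
                    for key, value in batch.items():
                        f.write(f"{key}={value}\n")

    # Example use: many threads may call write()/read(); only the flusher
    # thread ever writes the data file.
    custodian = DirtyRecordCustodian("journal.log", "data.records")
    custodian.write("customer:1", "Alice")
    print(custodian.read("customer:1"))
    time.sleep(1.5)  # give the flusher a chance to commit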

Changing SQLite to incorporate such custodian logic would defeat the object of having an embeddable database that doesn't need its own separate process.

Therefore, a solution would probably be to develop such a shared-memory custodian as a separate add-on, along with a protocol for the request/response conversation between this custodian and the processes that need to access a common database with high concurrency.

By the way, at a lower level you will find that the operating system already intelligently caches disk sectors that are accessed regularly. Even though the database server itself would probably outperform the underlying OS at knowing exactly what to cache and how, the OS's caching already contributes substantially to SQLite's performance under high concurrency.


What about multi-tier usage? Has anybody had success using it from app servers? (From what I read, I guess it should work pretty well there.)

