There is a list of features that SQLite does *not* support at http://www.sqlite.org/omitted.html. If you find additional features that SQLite does not support, you may want to list them below.

----

*: 2006.03.06: SELECT without LEFT JOIN, using the *= operator:

    SELECT t1.code, t2.code FROM table1 t1, table2 t2 WHERE t1.t2_ref_id *= t2.id

This is Sybase ASE syntax, related to "Oracle's join syntax" mentioned below.

*: This appears to be unsupported: updating multiple columns with a subselect:

    update T1 set (theUpdatedValue, theOtherValue) =
      (select theTop, theValue from T2 where T2.theKey = T1.theID)

*: Free-text search capabilities in SELECT statements. MySQL does free-text search with MATCH(field_list) AGAINST(keyword).

*: Multiple databases are not supported. For example, the following construct for creating a table in database db1 based on a table in database db2 won't work:

    create table db1.table1 as select * from db2.table1;

_: But I often need this. It should work like schemas in an Oracle database.

_: Isn't this supported by ATTACH DATABASE?

*: Hierarchical queries: START WITH ... CONNECT BY [PRIOR] (Oracle).

*: SQL92 character sets, collations, coercibility.

*: Stored procedures.

*: Rollup and Cube - _Who can tell me what this means?_

_::: Rollup and Cube are OLAP terms. See for example http://en.wikipedia.org/wiki/OLAP_cube

_::: I don't know much about it myself, but a quick google on the subject gives me http://www.winnetmag.com/SQLServer/Article/ArticleID/5104/5104.html and http://databases.about.com/library/weekly/aa070101a.htm

_::: Both of these imply that the CUBE operator causes new rows to be generated, giving a wildcard value to the non-numeric columns and summing the numeric columns which match those wildcards. The potential for generating a huge amount of data with CUBE is implicit, I think - hence its name. ROLLUP appears to be related but removes some of the wildcards; I couldn't determine which from the limited information in the articles, and on brief examination I could not find any more definitive reference. Anyone got something more definitive than those articles? It seems to me that you can do with SUM() everything you can do with CUBE.

_::: CUBE and ROLLUP provide additional subtotal rows. Let's say you are doing a query "SELECT x, y, SUM(z) FROM t GROUP BY x, y", and let's also say x and y each have two values. This query will give you the sums for all records with x1 y1, x1 y2, x2 y1, and x2 y2. ROLLUP and CUBE both provide additional subtotals. ROLLUP adds 3 new sums: for all x1, for all x2, and the grand total. You can imagine that the GROUP BY list is being rolled up, so that it goes from being x, y; to being just x; to being empty. The result of the SELECT for the column that is rolled up becomes NULL. CUBE will do all combinations of sums in the GROUP BY list: sum of all x1, all x2, all y1, all y2, and the grand total. No idea what that has to do with a cube, though I do sort of picture a hyper-cube in my mind for no good reason. If you ever add ROLLUP and CUBE, I also recommend adding the GROUPING() function so that you can filter out the additional computations you don't want, or do something like SELECT CASE WHEN GROUPING(name) THEN 'Total' ELSE name END, hours FROM timesheets GROUP BY name. I've used the feature plenty doing reports, but then I'm a chronic SQL abuser.
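_::: In the meantime, the ROLLUP example above can be approximated in plain SQLite with a UNION ALL of progressively coarser GROUP BY queries (a minimal sketch, reusing the hypothetical t, x, y, z from the preceding paragraph; CUBE would just add the remaining GROUP BY combinations):

    -- rough stand-in for: SELECT x, y, SUM(z) FROM t GROUP BY ROLLUP (x, y)
    SELECT x,    y,    SUM(z) FROM t GROUP BY x, y   -- detail rows
    UNION ALL
    SELECT x,    NULL, SUM(z) FROM t GROUP BY x      -- subtotal per x
    UNION ALL
    SELECT NULL, NULL, SUM(z) FROM t;                -- grand total

As with real ROLLUP, the rolled-up columns come back as NULL in the subtotal rows; there is no GROUPING() function here to tell a genuine NULL value apart from a subtotal marker.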
*: CREATE DATABASE, DROP DATABASE - _Does not seem meaningful for an embedded database engine like SQLite. To create a new database, just do sqlite_open(). To drop a database, delete the file._

*: ALTER VIEW, ALTER TRIGGER, ALTER TABLE

*: Schemas - See: http://www.postgresql.org/docs/8.1/static/ddl-schemas.html

_::: The idea is that multiple users using the same database can cleanly separate their tables, views (stored procs, etc.) by prefixing them with their login, so jack's jack.importantTable is distinct from jill's jill.importantTable. There are administrative benefits ('Jack left and we don't like his work; can we kill everything he did?' Ans: 'Yes, let me just drop his schema.'; with aliases, jill.importantTable can be made available to everybody as 'importantTable'; permissions can be hung off schemas). The common notation (jill.importantTable) would map to databasename.tablename in the current SQLite arrangement.

_:::: This doesn't really make a lot of sense for an embedded database.

_::::: I could use this. I'm trying to use SQLite as a 'fake database' that I can use in a testing suite. For SQLite to be a good 'fake' of something like Oracle, it would help a lot if it had the ability to do stuff like 'select * from blah.PERSON'. In this case 'blah' is the schema name and PERSON is the table name. Another example: 'select zipcode from blorg.ADDRESS'; blorg is the schema name, ADDRESS is the table name. Right now it gives 'no such database as blah' or 'no such database as blorg'. I tried 'create database blah', but of course that doesn't work either.

_::::: At the very least, it could accept table names that have a '.' in them. This would fake schemas well enough for me; right now it doesn't seem to allow it.

_:::::: You can fake this syntax if you split the "schemas" off into separate files, then do:

    ATTACH DATABASE blorg.db AS blorg;
    SELECT zipcode FROM blorg.address;

*: TRUNCATE (MySQL, PostgreSQL and Oracle have it... but I don't know if this is a standard command) - _SQLite does this automatically when you do a DELETE without a WHERE clause. You can also use the VACUUM command._

*: ORDER BY myfield ASC NULLS LAST (Oracle)

*: CREATE TRIGGER [BEFORE | AFTER | INSTEAD OF] (Oracle)

*: UPDATE with a FROM clause (not sure if this is standard; Sybase and Microsoft have it).

_:: Postgres also allows "UPDATE ... FROM ...", BTW. (As does Ingres --CAU)

_:: I was working on something where I really wanted to use this construct with SQLite, so I came up with the following hack:

    --
    -- SQLite does not allow "UPDATE ... FROM"
    -- but this is what it might look like
    --
    UPDATE t1
    SET measure = t2.measure
    FROM t2, t1
    WHERE t2.key = t1.key ;

    --
    -- emulating "UPDATE ... FROM" in SQLite
    --
    -- n.b.: it assumes a PRIMARY KEY !
    --
    -- the INSERT never succeeds because
    -- the JOIN restricts the SELECT to
    -- existing rows, forcing the REPLACE
    --
    INSERT OR REPLACE INTO t1( key, measure )
    SELECT t2.key, t2.measure
    FROM t2, t1
    WHERE t2.key = t1.key ;

_:: Since that works, maybe SQLite could be made to support the "UPDATE ... FROM" construct directly, so we would not have to rely on conflict resolution to do essentially the same thing (not exactly the same, since REPLACE is DELETE and INSERT, but sometimes close enough). *< gifford hesketh::2004-Oct-26*

_:: I've managed to do this successfully in an alternative way; works in version 3.2.1 (--CAU:18-Aug-2005):

    --
    -- emulating "UPDATE ... FROM" in SQLite
    --
    UPDATE t1
    SET measure = ( SELECT measure FROM t2 WHERE t2.key = t1.key ) ;

*: Multi-column IN clause, i.e. SELECT * FROM tab WHERE (key1, key2) IN (SELECT ...)
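_: A multi-column IN can usually be rewritten as a correlated EXISTS, which SQLite accepts (see the note further down about EXISTS being supported as of 3.1). A minimal sketch, assuming the hypothetical tables tab(key1, key2) and other(a, b); NULLs in the key columns behave slightly differently than with a true IN:

    -- intended:  SELECT * FROM tab WHERE (key1, key2) IN (SELECT a, b FROM other)
    SELECT *
    FROM tab
    WHERE EXISTS (
        SELECT 1 FROM other
        WHERE other.a = tab.key1
          AND other.b = tab.key2
    );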
*: INSERTing fewer values than columns does not fill the missing columns with their default values; if fewer values than columns in the table are supplied, the columns being filled have to be named before the keyword VALUES.

*: INSERTing one record with all values set to their defaults: INSERT INTO example () VALUES ();

*: DISTINCT ON (expr, ...) - this is from Postgres, where expr, ... must be the leftmost expressions from the ORDER BY clause.

*: A PASSWORD('') function to mask some values (as used in MySQL) would be nice; I need it if I give the db out of the house - or is there something I didn't find? Or a simple MD5 function to obscure data using a one-way hash. See the MySQL functions MD5() and PASSWORD() for examples.

*: FLOOR and CEILING functions, e.g. "SELECT FLOOR(salary) FROM personnel;"

*: MEDIAN and standard deviation... are they standard? Essential for the SQLite standalone executable for shell script users.

_: MEDIAN is difficult because it cannot be done "on-line," i.e., on a stream of data. Following is a solution to MEDIAN credited to David Rozenshtein, Anatoly Abramovich, and Eugene Birger; it is explained here: http://www.oreilly.com/catalog/transqlcook/chapter/ch08.html

    SELECT x.Hours median
    FROM BulbLife x, BulbLife y
    GROUP BY x.Hours
    HAVING SUM(CASE WHEN y.Hours <= x.Hours THEN 1 ELSE 0 END) >= (COUNT(*)+1)/2
       AND SUM(CASE WHEN y.Hours >= x.Hours THEN 1 ELSE 0 END) >= (COUNT(*)/2)+1

*: Oracle's join syntax using (+) and (-):

    SELECT a1.a, a1.b, a2.a, a2.b FROM a1 LEFT JOIN a2 ON a2.b = a1.a

_: ...can be written in Oracle as:

    SELECT a1.a, a1.b, a2.a, a2.b FROM a1, a2 WHERE a1.a = a2.b(+);

*: Oracle's named-parameter output syntax. In Oracle, one can declare parameters and select into them like so:

_: Select A1, A2, A3 into (:p1, :p2, :p3) from TableA

*: Named columns in views (i.e. CREATE VIEW v (foo, bar) AS SELECT qux, quo FROM baz;)

*: More than one primary key per table. I can specify this with MySQL, for example, and SQLite returns me an error: more than one primary key specified...

_: "More than one primary key" is an oxymoron when you're talking about the relational data model. By definition, a primary key uniquely identifies a row. What's the real problem you're trying to solve?

_:: A combined primary key is possible in SQLite, for example:

    CREATE TABLE strings (
        string_id INTEGER NOT NULL,
        language_id INTEGER NOT NULL,
        string TEXT,
        PRIMARY KEY (string_id, language_id)
    );

*: UPDATE t1, t2 SET t1.f1 = value WHERE t1.f2 = t2.fa

*: SHOW TABLES and DESCRIBE [tablename] would be nice - not sure if they're standard, but they are a rather nice feature of MySQL...

_: No, it's not standard. The standard says there should be a special database called INFORMATION_SCHEMA, which contains info about all databases, tables, columns, indexes, views, stored procedures, etc.

_: Can someone tell me how to fake DESCRIBE until something like this is implemented? Sorry, I'm too dependent on Oracle apparently :(
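_:: One rough way to fake both, using only facilities SQLite already provides (a sketch; mytable is a hypothetical table name):

    -- rough equivalent of SHOW TABLES
    SELECT name FROM sqlite_master WHERE type = 'table';

    -- rough equivalent of DESCRIBE mytable
    PRAGMA table_info(mytable);

    -- or just look at the original CREATE statement
    SELECT sql FROM sqlite_master WHERE name = 'mytable';

In the sqlite3 command-line shell, the .tables and .schema commands do much the same thing.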
*: SELECT ... FOR UPDATE OF ... is not supported. This is understandable considering the mechanics of SQLite, in that row locking is redundant as the entire database is locked when updating any bit of it. However, it would be good if a future version of SQLite supported it, for SQL interchangeability reasons if nothing else. The only functionality required is to ensure a "RESERVED" lock is placed on the database if not already there.

*: DELETE from table ORDER BY column LIMIT x,y is not supported. I worked around it by using a second query and then deleting, e.g.:

    SELECT timestamp from table LIMIT x,1;
    DELETE from table where timestamp < .....

*: The corollary to the above, UPDATE table SET x WHERE y ORDER BY z, is not supported. Haven't tried the LIMIT addition to that form.

*: Named parts of natural joins. For example: SELECT a.c1 FROM T1 a NATURAL JOIN T1 b. Because SQLite reduces the number of columns kept, the name is lost.

*: The ALL and ANY quantifiers for comparisons with subquery results aren't supported.
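_: These can often be rewritten using aggregates or (NOT) EXISTS. A minimal sketch, with hypothetical tables t(x) and sub(y); the rewrites assume the subquery returns no NULLs, and the aggregate forms also treat an empty subquery differently than the standard ALL/ANY semantics (the EXISTS forms are the more faithful ones):

    -- x > ALL (SELECT y FROM sub)
    SELECT * FROM t WHERE x > (SELECT MAX(y) FROM sub);
    SELECT * FROM t WHERE NOT EXISTS (SELECT 1 FROM sub WHERE t.x <= sub.y);

    -- x > ANY (SELECT y FROM sub)
    SELECT * FROM t WHERE x > (SELECT MIN(y) FROM sub);
    SELECT * FROM t WHERE EXISTS (SELECT 1 FROM sub WHERE t.x > sub.y);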
*: The following generates an error on the foreign key (id) clause:

    create table wg (
        cpf numeric not null,
        id numeric not null,
        nome varchar(25),
        primary key (cpf) foreign key (id)
    );

Is there a chance this will be supported in the future? Are foreign keys not supported in SQLite at all, or are they generated automatically, or what?

_: That's not a legal FOREIGN KEY clause; you have to specify what the foreign key references. SQLite parses, but does not enforce, syntactically legal FOREIGN KEY specifications; there's a PRAGMA that will retrieve foreign-key information from table definitions, allowing you to enforce such constraints with application code.

*: Analytical functions -> UnsupportedSqlAnalyticalFunctions

==== FEATURES ADDED IN RECENT VERSIONS ====

*: IF EXISTS clause, e.g. "DROP TABLE IF EXISTS temp;"

_: Added in 3.3

*: Extended POSIX regular expressions (should be easy; man 3 regcomp, or http://mirbsd.bsdadvocacy.org/man3/regcomp.htm for reference):

    SELECT * FROM table WHERE name REGEX '[a-zA-Z]+_{0,3}';

The infrastructure for this syntax now exists, but you have to create a user-defined regex matching function.

*: The EXISTS keyword is not supported (IN is, but IN is only a special case of EXISTS). And what about correlated subqueries?

_: Both supported as of 3.1.

*: Inserting a blob using the X'AABBCCDD' syntax. (Note: supported in SQLite 3.)

*: CURRENT functions like CURRENT_DATE and CURRENT_TIME are missing. _Try "SELECT date('now');" or "SELECT datetime('now','localtime');"_

_: Added as of 3.1

*: ESCAPE clause for LIKE

_: Added as of 3.1

*: AUTO_INCREMENT field type. SQLite supports auto-incrementing fields, but only if that field is declared as "INTEGER PRIMARY KEY".

_: Oh god no! Stop the evil from spreading! AUTO_INCREMENT is possibly the worst way of doing unique ids for tables. It requires cached per-connection-handle last_insert_id() values. And you're probably already familiar with how much of a hack THAT is.

_: A much better solution would be to give SQLite proper SEQUENCE support. You already have a private table namespace, so using sqlite_sequences to store these wouldn't be such a big deal. This is created when the database is created, and looks something like this, taken from a Perl MySQL sequence emulation module:

    create table mysql_sequences (
        sequence_name char(32) not null primary key,
        sequence_start bigint not null default 1,
        sequence_increment bigint not null default 1,
        sequence_value bigint not null default 1
    )

_: In fact, why don't you just take a look at the original module (DBIx::MySQLSequence): http://search.cpan.org/~adamk/DBIx-MySQLSequence-0.1/MySQLSequence.pm. In fact, why don't you just copy that module and rewrite it using code inside the database.

_: The main reason for doing this is that if you want to insert multiple records which reference each other, and these references are not null, you cannot insert one record until you have inserted the one to which it refers, then fetched the last_insert_id(), added it to the other record, then inserted that, and so on. In trivial cases this isn't too bad, but imagine the cases where you have circular references, or don't know the structure of the data in advance at all.

_: With sequence support and access to ids before inserting, there are algorithms to resolve these cases. Without it, you are left with things like just outright suspending constraint checking, inserting everything incorrectly, then hoping you can find all the cases of broken values and fixing them. Which sucks if you don't know the structure beforehand.

_: To resolve compatibility issues, just do what you do now with the INTEGER PRIMARY KEY fields with no default, but allow a DEFAULT SEQUENCENAME.NEXTVAL() or something...

_:: For better or worse, the requested feature was added in 3.1
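_: Until something like NEXTVAL exists, sequence-style id allocation can be approximated on top of an ordinary table with plain SQL. A rough sketch, reusing the mysql_sequences table proposed above ('my_seq' is a hypothetical sequence name; there is no single-statement NEXTVAL, so the increment and the read are two statements inside one transaction):

    BEGIN;
    UPDATE mysql_sequences
       SET sequence_value = sequence_value + sequence_increment
     WHERE sequence_name = 'my_seq';
    SELECT sequence_value FROM mysql_sequences WHERE sequence_name = 'my_seq';
    COMMIT;

The value returned by the SELECT can then be used as the id of the rows about to be inserted, before any of them are written.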
*: SELECT t1.ID, (SELECT COUNT(*) FROM t2 WHERE t2.ID=t1.ID) FROM t1

_: In other words, a subselect back-referencing a field in its parent select.

_:: Now supported as of 3.1

=========== REMARK ===========

NOT EXISTS remarks (off topic) -> UnsupportedSqlRemarkOffTopic

SQLite is finally a database product that values performance and minimal footprint (disk and memory) above a trashcan strategy that would add whatever feature is needed to make the result so-called 'feature rich' - say, a bloated piece of software. Therefore, I would vehemently reject all additions listed above, except for one. It's quite difficult to obtain the result for a correlated 'NOT EXISTS' subquery in any alternative way, and it is the choice way to determine a subset of data that doesn't satisfy criteria contained in another table.

In my experience I have found 'NOT EXISTS' (or is it 'NOT IN'?) to be extraordinarily slow. Given that SQLite provides 'EXCEPT', the much faster construct can be used to the same end (at least it was faster with Oracle's equivalent, 'MINUS'), to wit:

    select name, addr from employee where id not in (select id from sales)

becomes

    select name, addr from employee
    where id in ( select id from employee except select id from sales )

-- Are you calling Oracle 'a bloated piece of software'? LOL. I would love to see a comparison of Oracle and SQLite (latest stable or bleeding-edge SQLite version vs. Oracle 10g). I would love it. [This comparison idea is as valid as comparing a novel to a short story.] Anyway, SQLite seems a lil' database engine for lil' works. Sorry, not enough for me :).

-- Why would anyone compare Oracle to SQLite other than to say "can you add support for this Oracle syntax to make migration between them easier"?

-- Someone might mistakenly compare Oracle to SQLite because they fail to comprehend that the two products solve very different problems.

*: Up to this moment I thought that Postgres was the smallest possible free DB engine (since MySQL is *NOT* free), so if you are looking for something to distribute along with your application, SQLite seems to win against 10g, MySQL, Postgres, or whatever (by Makc).

- Just to be awkward, what about Berkeley DB etc.? And when you say free, you mean free of restrictions, don't you (rather than Free software)?

-- Berkeley DB does not include SQL support.

*: To the above paragraph: sometimes it is better to pay a few hundred bucks (just a few hours at my work rate) and get a much more powerful commercial solution which is royalty-free. For example, I really love the Valentina database (http://www.paradigmasoft.com). Valentina beats anything by 10-100+ times, especially on big dbs. It is not expensive and royalty-free. Really full SQL92, yet they have cool object-relational features.

*: To the above paragraph: 'Valentina beats anything by 10-100+ times'. OK, 10-100+ times of WHAT? RAM usage? CPU usage? CPU cycles? CPU count? Consistent gets? Concurrent users? Concurrent transactions? Parses? Executes? Fetches? Recursive calls? Physical reads? Documentation? ... If you don't have arguments based on real data in a production environment with real data load (users & transactions), your comment is useless. Benchmark it, prove it and then show us your results, not your 'thoughts'.

*: Let me answer. I am talking about speed :-) - about query execution time in seconds. Example: I have a benchmark table with a million records, 10 fields of all types, 100 MB in size. The query "SELECT DISTINCT * FROM T1" Valentina does in 1.7 seconds, SQLite in 280 seconds - a difference of 180 times. Or the query "SELECT * FROM T1 ORDER BY fld_byte, fld_varchar DESC": Valentina does it in 1.99 seconds, SQLite in 115 seconds - a difference of 55 times. These are just single-table queries; I do not even mention joins on 3-5-7 tables, with GROUP BY, HAVING, ... Moreover, if you make the database 10 million records, so the db grows to 1 GB, you will see Valentina win by even more. If you want information about real production power, then just go to their user testimonials page and read quotes starting, it seems, from 1998.

*: As to the above - those numbers and the queries are meaningless without the database schema and the source used to conduct the test. If you'd like to lend some credibility to your assertions, you need to provide a link where that information can be downloaded or viewed.

*: (1) About the 'speed results'. Speed? Are you talking about speed? Ouch. I can give you the fastest queries on Oracle with 'FIRST_ROWS' hints, I can tweak to death some internal parameters to give users their results in a fraction of a second, heck, I can even cheat with my indexes to achieve these results... but you know what? These results don't mean anything. Like yours. What value has a query that executes in 2 seconds and uses 50% of CPU if another one does it in 10 seconds and uses only 10% of CPU? Do you know how to answer this question? Easy: it depends. Always depends. C'mon man, don't pretend to be a Sith Lord on the first day; complete your Padawan lessons first ;).

*: (1, continued) How many parses and fetches are your queries generating on your database? What kind of 'stress' level are your disks suffering? And your CPU? What about the locks? And the waits? Are you doing 'implicit conversions' on the SQLite side? Are you using the same amount and cardinality of data on SQLite? Are you using some kind of index on SQLite? Do you know how to create indexes on SQLite? WHY are you querying against a table and not against a view? Do you know something about views and database schemas? Do you have the ER diagram of both schemas in order to have 'something' to show us? Do you have REAL benchmarks that we could test on our environments? ... etc.

*: (1, continued) HOW COULD YOU (sorry for the caps) point me to the 'user testimonials page' in order to check real data load (in the 'wide' meaning) on production environments? Are you kidding me? Oh, god! This is like "hey, I would like to know how your database behaves in a 3000-concurrent-user environment with 2000 transactions per second. I am mainly interested in the cluster solution that you could give me and the efficient ways to maximize CPU, RAM and HD resources", and your answer being "yeah, go to www.nice-database.com/testimonials.htm and read my happy-customer stories". LOL. Are you serious?
*: Ha-ha-ha. Be sure I am not a dummy user, so I know the answers to all the questions you ask. I have more than 15 years of db experience. All your accusations that I have not taken something into account are really foolish! Because, let me repeat: I made the same table with the same fields and the same data in the records, and ran the same queries on the same hardware in the same *clean* environment (no other apps were running to eat CPU or disk). And I did not use any tricks like "FIRST_ROWS". Both dbs were on default parameters. You still claim that this is not a fair bench??!! Tell that to somebody else.

*: Well, if you (a DBA with 15 years of experience) can't show us a 'miserable' execution plan or a report with timed statistics (like Oracle's tkprof), then, my friend, your 15 years of work as a DBA have been totally and dramatically wasted. Sincerely, I can't imagine any senior DBA (heh, even an Access 'senior DBA', if that job exists, by the way) posting on forums, wikis or blogs things like 'hey, this RDBMS is 100x faster than yours' or 'hey, the testimonials page is where you will find the answers to all your tuning and performance related questions'. Sorry, I just can't imagine it. Anyway, discussion ended on my side. Sorry to all SQLite users & developers for these nonsense paragraphs.

I wonder how useful these "remarks" are... They don't really pertain to the question of how well SQLite supports either standard or "extended" SQL features; I'd suggest that if the participants in this debate want to continue it, they create a new wiki page specifically for it, copy everything not related to feature support over to it, and delete it from this page.

What about Apache Derby? It uses the Apache 2.0 license and is easy to embed in Java applications (http://db.apache.org/derby/). -- See SqliteVersusDerby

---- Tcl related ----

*: Tcl variable bindings for list types? i.e.:

    set values [list a b c]
    db eval { SELECT * FROM table WHERE x IN ($values) }

SQLite does its own variable interpolation, which avoids the (messy) need to do value quoting/escaping (to protect against SQL injection attacks, etc.), but in the case of an "IN ($variable)" clause it treats $variable as a single value instead of a Tcl list of values. Or maybe I'm doing something wrong. If I am, please let me know: dossy@panoptic.com.