bgcolor # Status Created By Subsys Due Date SCR Assigned Svr Pri Title _Description _Remarks
#cfe8bd 154 active 2002 Sep drh Pager drh 3 3 Prohibit links on database files. If a database file is aliased using either hard or symbolic links, it can happen that an aborted transaction will not roll back correctly. Consider this scenario. The database file is named both a.db and b.db. Application one opens a.db and starts to make a change. This creates a journal file a.db-journal. But application one crashes without completing the transaction. Later, application two attempts to open the database as b.db. App two looks for a journal file to rollback, but it thinks the journal should be named b.db-journal. So it fails to see the a.db-journal that app one left and fails to rollback the transaction. The only way I can think of to prevent this kind of thing is to refuse to open any database file that contains two or more hard links and to refuse to open a file through a symbolic link. _2004-Mar-16 20:46:17 by anonymous:_ {linebreak} What if the journal file name wasn't based on the database name, but instead was based on the starting inode of the database file? For instance, "journal-10293" would be used if the starting inode for the associated database file was 10293. ---- _2004-Mar-20 17:17:41 by anonymous:_ {linebreak} Using inode numbers to solve this problem is a dangerous proposition, as disk defragmenters can alter the inode the db starts at in between a crash and a subsequent roll-back attempt. ---- _2007-Jun-05 03:57:17 by anonymous:_ {linebreak} On unix, you can use ftok() to solve this problem. It guarantees to return the same key for all paths to the same file, including symbolic and hard links. I have no experience programming in Windows, but have no doubt a similar function call exists in that API.
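The refusal strategy proposed in the ticket can be sketched as a POSIX lstat()-based check. This is illustrative only: check_single_link is not an SQLite function, and a real fix would also need Windows-specific handling.

```c
#include <sys/stat.h>

/* Return 0 if the path looks safe to open as a database file:
** not a symbolic link, and (for regular files) exactly one hard
** link to the inode. Return -1 otherwise. Hypothetical helper,
** not part of SQLite. */
static int check_single_link(const char *zPath){
  struct stat st;
  if( lstat(zPath, &st)!=0 ) return -1;        /* cannot stat at all  */
  if( S_ISLNK(st.st_mode) ) return -1;         /* refuse symlinks     */
  if( S_ISREG(st.st_mode) && st.st_nlink>1 ){
    return -1;                                 /* refuse 2+ hard links */
  }
  return 0;
}
```

Note that lstat() (rather than stat()) is what detects the symlink case, since stat() follows the link and reports on the target.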
#cfe8bd 301 active 2003 Apr anonymous Unknown 2 3 Can't acquire lock for database on Mac OS X AppleShare volume I'm using SQLite on Mac OS X 10.2.5. If I try to do a SELECT from a database that resides on an AppleShare volume (from my code or from sqlite), SQLite says that it's locked, even if no other process is using it. It appears that sqliteOsReadLock always returns SQLITE_BUSY for files on AppleShare volumes. I've temporarily solved the problem here by disabling SQLite's locking code and implementing higher-level protections in my application. You may find this link helpful: http://developer.apple.com/technotes/tn/tn2037.html _2004-Mar-22 11:21:56 by anonymous:_ {linebreak} Under OS X, fcntl returns ENOTSUP (45) = Operation Not Supported when trying to open a DB on AFP or SMB. The local HD is fine (HFS), as are USB devices with HFS and UFS. Even with the recommendations from Apple's technote tn2037 the problem persists.
#e8e8bd 369 active 2003 Jun anonymous 3 2 Testsuite fails on btree-1.1.1 (Mac OS X, SQLite 2.8.4) On Mac OS X the testsuite fails: btree-1.1.1..../src/btree.c:2687: failed assertion `pPage->isInit' make: *** [test] Abort trap SQLite version: 2.8.4, OS Version: 10.2.6 Obtained same result. Mac OS X 10.2.6, Developer Tools Dec 2002, SQLite 2.8.5. ---- Shared libraries are busted on Macs. As far as I can tell, this appears to be Apple's fault. Until a workaround is devised, do not attempt to compile using shared libraries. Add the --disable-shared option to the configure script: ../sqlite/configure --disable-shared ---- On 2.8.5+, this shows up on 2689. Also, configure does not allow the use of --disable-shared (probably requires a fix in the configure scripts). On a G5 in 10.3.2, this error shows up as a Bus Error. Builds work fine otherwise. This issue may be related to the warnings received in src/test1.c thru src/test4.c and in src/tclsqlite.c regarding Tcl_SetVar, Tcl_GetInt, Tcl_GetBoolean, Tcl_GetIndexFromObj. All warnings are regarding promotion of arguments to pointers of invalid type. oso2k/Louis ---- _2004-Feb-11 22:57:17 by anonymous:_ {linebreak} I did some quick testing of 2.8.12 on the machines I have available to me. In general, there seem to be more warnings than I remember (I believe I was testing 2.8.9 from cvs before it went live).{linebreak} {linebreak} *:G3 700MHz/640MB iBook 10.2.8{linebreak} Same results as we last spoke. Fails make test at:{linebreak} btree-1.1.1{linebreak} {linebreak} *:Dual G4 800MHz/1.25GB 10.2.8{linebreak} Same results as we last spoke. Fails make test at:{linebreak} btree-1.1.1{linebreak} {linebreak} *:G5 1.6GHz/1.25GB 10.3.2{linebreak} Something really weird happens here. There is no longer a bus error. Right after make test gets past bigfile-1.1, the machine seems to enter an infinite loop or something.
#cfe8bd 575 active 2004 Jan anonymous VDBE drh 3 3 pragma (default_)temp_store implementation seems incomplete This problem is a conflict between documented behaviour and actual behaviour, and could fall in the 'Documentation' category as well. There seems to be a problem with 'pragma default_temp_store'. In pragma.c code exists to handle it, and that code stores the provided value in Cookie 5 (as VDBE instruction argument; that is the _sixth_ metadata integer, and would correspond to meta[6] in sqliteInitOne() in main.c). However, the code loading a database (the aforementioned sqliteInitOne() in main.c) never looks at that value, and the setting is ignored. (Also, Vacuum doesn't seem to copy it.) A related problem is that using the default_temp_store or temp_store pragmas doesn't work as advertised, at least not in the precompiled commandline tool sqlite.exe: you will always get the following error, even if you use the pragma at init time: SQL error: The temporary database already exists - its location cannot now be changed Trying to set the flag (the value at offset 0x50 in a database file) to 2 (attempting to force an in-memory database for temp tables) with a hex editor has only partial success: the handcrafted value is reported by _pragma default_temp_store;_ but typing _.databases_ still shows a file name for the temporary database, and using the filemon tool (a windows file activity monitor, downloadable from www.sysinternals.com) shows that the temp file is actually accessed when giving a 'create temp table' command (not surprising, if there is no code to actually ever initialize the db->temp_store from the Cookie). If for some reason it is infeasible to circumvent the issue that the temp table will always be open before executing the pragma, I suggest changing the semantics of _pragma default_temp_store_ to _only_ change the default (as stored in the file), but _not_ change the current value.
This would allow executing _pragma default_temp_store_ even while a temp table is open (though its effect will only be visible when the database is opened again). Note that this issue has a few documentation issues: *: lang.html suggests that _pragma default_temp_store_ and _pragma temp_store_ are currently working. At least in the commandline tool they aren't (I didn't make a dedicated test program to see if the problem already exists at the C-API level) *: fileformat.html doesn't document the location where the temp_store flag is stored. In fact, I consider the fact that the fifth meta value (meta[5] a.k.a. Cookie 4) is seemingly not used anywhere slightly suspicious. *: the number of metadata values is documented inconsistently in fileformat.html: in one place it mentions there are 6 values including the two leading values (which makes 4 metavalues), a bit later 9 metavalues are mentioned...
#cfe8bd 608 active 2004 Feb anonymous 3 3 Problem with "pragma show_datatypes = on" and busy timeout When a busy timeout is set, pragma show_datatypes = on and SQLite sleeps some time on the lock, no datatypes are passed to the exec callback function. The attachment is an archive with a Makefile, a shell script and a program that reproduce the error. _2004-Feb-12 21:05:28 by anonymous:_ {linebreak} This problem breaks the auto-typing feature of PySQLite when a busy timeout is used.
#cfe8bd 627 active 2004 Feb anonymous 3 3 sqliteRunVacuum returning wrong code? The last 3 lines of sqliteRunVacuum, as of the version checked in on Feb 12 2004, are: if( rc==SQLITE_ABORT ) rc = SQLITE_ERROR; if( sVac.rc!=SQLITE_OK ) rc = sVac.rc; return sVac.rc; It seems suspicious to set a local variable, rc, that one is never going to use again. I suspect that the last line should be return rc; _2004-Feb-27 00:54:03 by anonymous:_ {linebreak} The fix by check-in 1271 still doesn't look right to me. If one of the execsql calls returns SQLITE_CANTOPEN (which I have seen happen), then rc will be SQLITE_CANTOPEN and sVac.rc will be 0, and sqliteRunVacuum will return 0.
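The ticket's point can be illustrated by lifting the quoted three lines into a standalone helper. This is a sketch, not the actual sqliteRunVacuum code: sVacRc stands in for sVac.rc, and the constants use their documented SQLite values.

```c
/* Result codes, with their documented SQLite values. */
#define SQLITE_OK        0
#define SQLITE_ERROR     1
#define SQLITE_ABORT     4
#define SQLITE_CANTOPEN 14

/* Sketch of the return-code merging at the end of sqliteRunVacuum:
** rc carries the primary result, sVacRc carries the callback's
** result. The ticket's complaint is that returning sVacRc drops
** errors (e.g. SQLITE_CANTOPEN) recorded in rc when sVacRc is
** still SQLITE_OK; the fix is to return rc instead. */
static int vacuum_result(int rc, int sVacRc){
  if( rc==SQLITE_ABORT ) rc = SQLITE_ERROR;
  if( sVacRc!=SQLITE_OK ) rc = sVacRc;
  return rc;               /* was: return sVac.rc; */
}
```

With the corrected last line, an execsql failure such as SQLITE_CANTOPEN propagates to the caller instead of being masked by a zero sVac.rc.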
#e8e8bd 684 active 2004 Apr anonymous Unknown 3 2 Incorrect function result type when using SQLITE_ARGS I registered a function using the SQLITE_ARGS return type. I then execute the statement "select test('sample')". The type information returned from sqlite_step in the pazColName is incorrectly reported as "NUMERIC". If I use a "0", specifying the first column, instead of SQLITE_ARGS when registering the function, the return value is correctly set to "TEXT".
#cfe8bd 685 active 2004 Apr anonymous CodeGen 1 3 SELECT from a VIEW with GROUP BY When you SELECT from a VIEW (which has a GROUP BY statement) and try to apply another GROUP BY statement you get: $ sqlite ../../db/main.db SQLite version 2.8.13 Enter ".help" for instructions sqlite> .dump prod_elem_totals BEGIN TRANSACTION; CREATE VIEW prod_elem_totals AS SELECT pe.elem_id AS elem_id, p.prod_id AS prod_id, e.name AS name, p.name AS p_name, pe.count AS count, SUM(b.count) / pe.count AS p_max, SUM(b.count) AS total, SUM(b.price * b.count) / SUM(b.count) AS price, e.min AS min FROM products AS p, elements AS e, batches AS b, prod_elems AS pe WHERE p.prod_id = pe.prod_id AND pe.elem_id = b.elem_id AND pe.elem_id = e.elem_id GROUP BY p.prod_id, pe.elem_id ORDER BY e.name; COMMIT; sqlite> SELECT * FROM prod_elem_totals GROUP BY elem_id; sqlite: src/select.c:1775: flattenSubquery: Assertion `p->pGroupBy==0' failed. Aborted It seems it doesn't matter which column I GROUP BY. I can prepare a full test case if needed. Maybe somehow connected with #678. After further investigation I found that when I add an aggregate function like "SUM (count * 10) AS min" it works...
#f2dcdc 691 active 2004 Apr anonymous Unknown drh 1 1 OS X File Sharing Hello Sir: This ticket may be considered a duplicate of ticket #301. I am unable to access SQLite databases from HFS or SMB network shares when using Mac OS X (10.3.3) as a client. The more technical aspects of the problem are explained well in ticket #301. I am using SQLabs SQLite plugin for RealBasic 5.5, and would like to use SQLite exclusively as my DB.
I am concerned that the original ticket was submitted approximately one year ago. So I am submitting this to see if this issue is being addressed, and if there is a timetable set for its resolution. Thank you, Tony Dellos Milwaukee WI
#cfe8bd 698 active 2004 Apr anonymous Unknown 1 3 .mode list - not going to next line To create a comma delimited output file:{linebreak} ------------------------------------------------{linebreak} C:\SQLite>sqlite locate.db{linebreak} SQLite version 2.8.13{linebreak} Enter ".help" for instructions{linebreak} sqlite> .mode list{linebreak} sqlite> .separator ", "{linebreak} sqlite> .output data.cdf{linebreak} sqlite> select * from parts;{linebreak} sqlite> .quit{linebreak} {linebreak} That should create a text file of something like this:{linebreak} 1st rec field 1, 1st rec field 2, 1st rec field 3, 1st rec field 4{linebreak} 2nd rec field 1, 2nd rec field 2, 2nd rec field 3, 2nd rec field 4{linebreak} 3rd rec field 1, 3rd rec field 2, 3rd rec field 3, 3rd rec field 4{linebreak} {linebreak} but it does not provide a line break after each record, so the output looks like this:{linebreak} {linebreak} 1st rec field 1, 1st rec field 2, 1st rec field 3, 1st rec field 42nd rec field 1, 2nd rec field 2, 2nd rec field 3, 2nd rec field 43rd rec field 1, 3rd rec field 2, 3rd rec field 3, 3rd rec field 4{linebreak} {linebreak} Each record is butted up against the previous record, without even a space. This is inconsistent with the instructions on how it is supposed to work, on this page:{linebreak} {linebreak} http://www.sqlite.org/sqlite.html {linebreak} {linebreak} Also, can you please refer me to somewhere that would explain how I can use SQLite with a batch file, e.g. using a batch file to add a record, delete a record, query, etc.? {linebreak} {linebreak} Thanks,{linebreak} Tom
#cfe8bd 700 active 2004 Apr anonymous VDBE 2 3 Solaris-sparc segfaults on sum() On Solaris (sparc) trying to do a sum() (sometimes) SEGVs: this is because the result is placed in a chunk of memory which is allocated as a char * and therefore isn't aligned to 16-byte boundaries (which SPARC-Solaris seems to want). One fix for this which seems to work for me is to change vdbeInt.h:118 to char zShort[NBFS] __attribute__ ((__aligned__(16))); /* Space for short strings */ and change sqlite_aggregate_context to assign p->pAgg to zShort rather than z (since z is malloc()ed you can't align it) - I don't know if this would cause problems elsewhere though. _2004-Apr-22 16:10:37 by dougcurrie:_ {linebreak} malloc() should always return memory aligned for any purpose; I don't think this is the problem. Looking at the function sqlite_aggregate_context though, I wonder: *:where is p->z initialized? *:where is p->pAgg sqliteFree()d? *:what happens when sqlite_aggregate_context and sqlite_set_result_string share Mem.zShort? ---- _2004-Apr-23 09:58:03 by anonymous:_ {linebreak} > malloc() should always return memory aligned for any purpose; I don't think this is the problem. I didn't make it clear: I'm getting a Bus Error, not just a normal SEGV. There are several places mentioned on the web which suggest that Solaris' malloc() aligns memory to 8-byte boundaries, while (on 64-bit, I assume) a double is 128 bits... however you're correct, the manpage does insist that all malloc()s are aligned to data large enough for any purpose. I suppose if s.z isn't even assigned at this point (but hasn't been cleared at initialisation) it might contain something completely non-aligned. However I now can't reproduce the problem, although that's not to say that it means the thing isn't broken... I'll try and break it again and let you know. As to your other point: the reason I posted was because of exactly this: I don't know the code well enough (I'd never heard of it until yesterday!) 
to be able to say whether .zShort could be used elsewhere at the same time as a sum() function. ---- _2004-Apr-23 10:35:11 by anonymous:_ {linebreak} Aha. A core file lying around may well help.
... Program terminated with signal 10, Bus Error. ... #0 0xff33d4a4 in sumStep (context=0x2a158, argc=203556, argv=0x3ac08) at src/func.c:421 421 p->sum += sqliteAtoF(argv[0], 0); ... (gdb) list 416 static void sumStep(sqlite_func *context, int argc, const char **argv){ 417 SumCtx *p; 418 if( argc<1 ) return; 419 p = sqlite_aggregate_context(context, sizeof(*p)); 420 if( p && argv[0] ){ 421 p->sum += sqliteAtoF(argv[0], 0); 422 p->cnt++; 423 } 424 } .... (gdb) print &p.sum $5 = (double *) 0x31b24
Now my basic maths (a hex 16-byte aligned number should end in 0, right?) says that somehow p.sum has become misaligned by 4 bytes. sqlite_aggregate_sum simply assigns p->pAgg to p->s.z and returns p->pAgg, which means that p->s.z is misaligned also. Further investigation makes more worrying reading: (gdb) print *context $18 = {pFunc = 0x75736572, s = {i = 0, n = 0, flags = 16, z = 0x0, r = 3.6586602629506839e-309, zShort = '\000' , "\020\000\000\000\000\000\002ˇ\230", '\000' }, pAgg = 0x10, isError = 0 '\000', isStep = 0 '\000', cnt = 172464}
Note that pAgg is 0x10 (!?) and s.z is 0. Something seriously unhappy going on there: I think it's likely there's some corruption going on due to some specific set of events which was being run. I'll rerun the exact scenario and try again. ---- _2004-May-03 02:27:25 by anonymous:_ {linebreak} I have seen this exact same problem on sparc/solaris. My core looks exactly the same. It really does look like an alignment issue. ---- _2004-Jul-12 13:09:26 by anonymous:_ {linebreak} I had this exact problem on solaris 8 with gcc3.3.4 and, well, every version of sqlite over 2.5. Here's my solution; hopefully it will help someone with more time and IQ points to figure out the real problem. After forcing PTR_FMT to %x in test1.c (so I can run all the tests) I changed src/vdbeInt.h #define NBFS from 32 to 15 (one less than a long double on a sparc) thus forcing all long doubles to be malloced. This allowed me to run all the tests (and my application) bus error free. 5 of the tests failed, which looks like a precision problem and seems harmless in my applications. date-1.19... Expected: [2451545.00000116] Got: [2451545.00000] date-1.20... Expected: [2451545.00000012] Got: [2451545.00000] date-1.21... Expected: [2451545.00000001] Got: [2451545.00000] expr-2.4... Expected: [0.525641025641026] Got: [0.52564102564] expr-2.5... Expected: [1.90243902439024] Got: [1.90243902439] ---- _2004-Jul-17 12:26:31 by anonymous:_ {linebreak} I found a better solution than changing NBFS. From what little info I found, doubles in a structure are aligned to 8, so align zShort to 8 and it works with NBFS as 32. Change PTR_FMT to %X instead of %x and it passes all the tests.
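The remedies floated in these remarks, an __attribute__((aligned)) on zShort or guaranteeing 8-byte alignment for doubles, can be sketched as follows. The struct names and the NBFS value mirror the discussion but are illustrative, not the actual vdbeInt.h definitions; the attribute form is GCC-specific, while the union form is portable C.

```c
#define NBFS 32   /* size of the short-string buffer, per the ticket */

/* GCC-specific: force 8-byte alignment of the buffer itself, as the
** last remark suggests for Mem.zShort. */
struct MemAttr {
  char zShort[NBFS] __attribute__((aligned(8)));
};

/* Portable alternative: overlay the buffer with a double in a union,
** so the compiler aligns the storage for the most demanding member.
** Any aggregate context placed at u.zShort is then safe to treat as
** a struct containing a double (e.g. SumCtx) on SPARC. */
struct MemUnion {
  union {
    char zShort[NBFS];
    double forceAlign;   /* never used directly; exists for alignment */
  } u;
};
```

Either form removes the misalignment that makes `p->sum += ...` fault with SIGBUS on SPARC, where doubles must sit on 8-byte boundaries.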
#f2dcdc 709 active 2004 Apr anonymous Unknown drh 1 1 Unable to unregister or replace functions I believe this started with 2.8.13, as it did work previously. There appears to be a show-stopper with unregistering or replacing existing functions. Specifically, if one tries to replace (or remove by passing nulls) one of the built-in functions, for example "like" or "upper", the function does not get replaced, and is in fact still called and available. The odd thing is that if you try to replace one of the functions with an underscore, such as "change_count", it works fine! This is causing problems as we replace a lot of the existing functions, and allow users to add and replace their own functions, which are now failing. _2004-Apr-26 20:56:19 by anonymous:_ {linebreak} Alright, turns out this is due to a mismatch in argument count when unregistering functions. It would be useful if we could unregister all instances of a function name, regardless of argument counts. ---- _2004-Apr-26 23:11:43 by anonymous:_ {linebreak} Turns out there really is a bug... the problem is in the sqliteFindFunction function's matching of inexact argument counts, when being called from sqliteExprCheck with >0 argument count. This is the scenario. I override the "upper" function with my own, but first removing the old "upper", specifying 1 argument and null for the function. I then register a new "upper" with -1 for the argument count, and a valid function. When sqliteFindFunction attempts to locate the upper function, it locates the new function, but because it is registered with -1, it tries to find a better match. It then runs into the original one, and because it has a null function pointer, it fails. This causes sqliteExprCheck to try again with -1 as the count, and since that matches, it reports an error of wrong_num_args. 
Unfortunately this means there is no way to override an existing function (it would be good if we could just delete them, rather than override them, although I still think this behaviour with -1 is wrong). ---- _2004-Apr-26 23:29:23 by anonymous:_ {linebreak} I've applied the following patch in the sqliteFindFunction function, that I believe addresses the problem: /* Change this if( p && !createFlag && p->xFunc==0 && p->xStep==0 ){ return 0; } */ /* To this */ if( p && !createFlag && p->xFunc==0 && p->xStep==0 ){ return pMaybe; }
By returning pMaybe we provide a function that will work, while returning 0 if no variable argument function was found.
#e8e8bd 721 active 2004 May anonymous Shell drh 3 2 empty .databases information In shell.c, around line 570, the initialization data.cnt = 0; needs to be added to the callback_data structure so the column widths are correctly used; otherwise it will display empty lines. This problem is visible after executing one SQL command.
#cfe8bd 735 active 2004 May anonymous Shell 4 3 .sqliterc not processed if running on a driver other than C: In shell.c there is a snippet that reads: if (!home_dir) { home_dir = getenv("HOMEPATH"); /* Windows? */ } The HOMEPATH environment variable does not include the drive letter and needs to be concatenated with the HOMEDRIVE environment variable. _2004-May-12 14:43:40 by anonymous:_ {linebreak} That should read "drive" in the title, not "driver"
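The fix this ticket asks for, concatenating HOMEDRIVE with HOMEPATH when HOME is unset, might look like the sketch below. concat_home and windows_home_dir are illustrative names, not the shell's actual code.

```c
#include <stdlib.h>
#include <string.h>

/* Join a drive letter and a path into one freshly allocated string,
** e.g. "D:" + "\Users\tom" -> "D:\Users\tom". Returns NULL on OOM.
** Caller frees the result. */
static char *concat_home(const char *zDrive, const char *zPath){
  char *zHome = malloc(strlen(zDrive) + strlen(zPath) + 1);
  if( zHome ){
    strcpy(zHome, zDrive);
    strcat(zHome, zPath);
  }
  return zHome;
}

/* How the shell.c fallback could use it (sketch): instead of taking
** HOMEPATH alone, require both Windows variables and join them. */
static char *windows_home_dir(void){
  const char *zDrive = getenv("HOMEDRIVE");
  const char *zPath  = getenv("HOMEPATH");
  if( zDrive==0 || zPath==0 ) return 0;   /* Windows vars unset */
  return concat_home(zDrive, zPath);
}
```

With HOMEPATH alone, a user whose profile lives on D: would have .sqliterc looked up on whatever the current drive happens to be, which is exactly the reported failure.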
#e8e8bd 744 active 2004 May anonymous BTree anonymous 2 2 make test seg faults on x86_64 Linux I'm running the 64 bit version of Gentoo Linux on an AMD Opteron system. Ordinarily I'd install software with "emerge <package>" but "emerge sqlite" only gives me version 2.8.11. I downloaded the 2.8.13 source and did the usual ./configure; make; make test. The configure and make steps went OK but make test fails half way through: bind-1.99... Ok btree-1.1... Ok btree-1.1.1... Ok btree-1.2... Ok btree-1.3... Ok btree-1.4... Ok btree-1.4.1... Ok btree-1.5... Ok btree-1.6...make: *** [test] Segmentation fault The code was built with GCC 3.3.3. As sqlite is a known 'emerge' option for 64 bit Gentoo I'm guessing sqlite is known to work on 64 bit platforms? I didn't mark this as a severe error because in theory I should be able to create a statically linked executable on a 32 bit linux system and run this on the Opteron box. Haven't been successful at that yet however. _2004-May-25 04:35:32 by anonymous:_ {linebreak} It's pretty clear that sqlite has never been compiled on a 64-bit system, much less run. The test problems are fatal bugs caused by type conversions between 64-bit and 32-bit values, including truncating pointers and other sins. The fixes look quite involved.
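The failure class described in the last remark, pointers truncated through 32-bit integer types, can be shown in a few lines. This is illustrative code, not taken from the SQLite tree: it round-trips a pointer through an int, which works by accident on 32-bit platforms and loses the upper half of the address on LP64 systems like x86_64 Linux.

```c
#include <stdint.h>

/* Wrong: on an LP64 platform sizeof(void*)==8 but sizeof(int)==4,
** so the cast below silently discards the high 32 bits of the
** address. On a 32-bit platform the same code happens to work. */
static void *truncate_through_int(void *p){
  int i = (int)(long)p;        /* narrows to 32 bits */
  return (void *)(long)i;      /* brings back garbage for high addresses */
}

/* Right: intptr_t is defined to be wide enough to hold a pointer,
** so the round trip is lossless on every platform. */
static void *roundtrip_through_intptr(void *p){
  intptr_t i = (intptr_t)p;
  return (void *)i;
}
```

Auditing for exactly this pattern (pointer-to-int casts, and int fields holding pointers) is the usual first step when a codebase that "has never been compiled on a 64-bit system" starts segfaulting there.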
#cfe8bd 754 active 2004 Jun anonymous Shell drh 2 3 problem opening a dbfile in the upper directory (../dbname) There is a problem in the calculation of a full path name based on a relative path name in a parent-directory location (../). I have fixed this during my porting to DOS, and the relevant diff is at http://www.sqlite.org/cvstrac/tktview?tn=524. Best regards, Alex
#cfe8bd 783 active 2004 Jun anonymous Unknown 3 3 Build on Mac with -DSQLITE_DEBUG=1 compile error MacOS 10.3.4, gcc 3.3. Compiling with -DSQLITE_DEBUG=1 gives the following error: ./libtool --mode=compile gcc -g -O2 -DOS_UNIX=1 -DHAVE_USLEEP=1 -DSQLITE_DEBUG=1 -I. -I../sqlite/src -DTHREADSAFE=0 -c ../sqlite/src/os_unix.c gcc -g -O2 -DOS_UNIX=1 -DHAVE_USLEEP=1 -DSQLITE_DEBUG=1 -I. -I../sqlite/src -DTHREADSAFE=0 -c ../sqlite/src/os_unix.c -fno-common -DPIC -o .libs/os_unix.o ../sqlite/src/os_unix.c: In function `sqlite3OsRead': ../sqlite/src/os_common.h:31: error: inconsistent operand constraints in an `asm' make: *** [os_unix.lo] Error 1
#f2dcdc 798 active 2004 Jul anonymous Unknown 1 1 Unable to run tests on a 64-bit Linux platform I was able to compile SQLite 3.0.2 on a RedHat 64-bit Linux system; however, when running the tests I would get a segmentation fault when executing a blob test. I was wondering if anyone has attempted to build SQLite for a 64-bit architecture and run all tests successfully. If so I was hoping to get any configuration parameters needed.
#e8e8bd 841 active 2004 Aug anonymous Unknown Pending 3 2 inner group by query isn't honored by outer count(*) aggregate CREATE TEMP TABLE A(a int NOT NULL, b int NOT NULL, c int NOT NULL); INSERT INTO A VALUES (1, 1, 1); INSERT INTO A VALUES (1, 2, 1); INSERT INTO A VALUES (2, 1, 1); -- typical behaviour is for this to behave like the DISTINCT query below -- but instead it shows a=1 as having occurred twice (but it was grouped in the inner query) SELECT a, count(*) FROM ( SELECT a, c FROM A GROUP BY 1, 2) GROUP BY a; Result: 2|1 1|2 -- shows a=1 as having occurred once (correctly) SELECT a, count(*) FROM ( SELECT DISTINCT a, c FROM A) GROUP BY a; Result: 2|1 1|1 -- the top query performs better, which is why I am reporting this bug _2004-Aug-08 18:42:00 by drh:_ {linebreak} SQLite ignores the ORDER BY clause if there are no aggregate functions.
#e8e8bd 923 active 2004 Sep anonymous Pending anonymous 3 2 Missing quotes in 2.8.15 .dump cause data loss when loading in sqlite3 When converting a database by means of the command: sqlite old.db .dump | sqlite3 new.db the content of char/varchar fields is dumped by sqlite without quotes (e.g. 00001) and then when reloaded by sqlite3 it loses the leading zeroes (i.e. becomes '1', which is a really different thing for an alphanumeric field). This could be solved by a new release (sqlite 2.8.16?) which adds quotes to alphanumeric fields (as sqlite3 does), or by a filter script that adds the quotes to the sqlite2 .dump output (I used a quick and dirty perl script to fix my dump...). _2005-Jul-11 20:08:11 by anonymous:_ {linebreak} This does not appear to be solved in sqlite 2.8.16.
#cfe8bd 969 new 2004 Oct anonymous TclLib Pending 3 3 PRAGMA empty_result_callbacks not working in tclsqlite-3.0.8.so part 2 Referencing ticket # 967, I stated that it was the tclsqlite code that was not functioning properly, not the ./sqlite3 executable.{linebreak} Here is a script that you can use to reproduce the issue. As you can see the results are quite different. #-----------> load ./lib/tclsqlite-3.0.8.so sqlite3 puts [info patchlevel] sqlite3 db :memory: db eval "create table t1(a,b);" puts "before 3.0.8 select, no pragma" db eval "select * from t1;" x { puts "x(*) = $x(*)" } db eval "PRAGMA empty_result_callbacks=1" puts "before 3.0.8 select, yes pragma" db eval "select * from t1;" x { puts "x(*) = $x(*)" } db close load ./lib/tclsqlite-2.8.15.so Tclsqlite sqlite db2 :memory: db2 eval "create table t1(a,b);" puts "before 2.8.15 select, no pragma" db2 eval "select * from t1;" x { puts "x(*) = $x(*)" } db2 eval "PRAGMA empty_result_callbacks=1" puts "before 2.8.15 select, yes pragma" db2 eval "select * from t1;" x { puts "x(*) = $x(*)" } db2 close puts "done" # <-------------------
and the results: $ tclsh test_sqlite.tcl 8.4.3 before 3.0.8 select, no pragma before 3.0.8 select, yes pragma before 2.8.15 select, no pragma before 2.8.15 select, yes pragma x(*) = a b done
_2006-May-16 18:29:44 by anonymous:_ {linebreak} This is still a problem in the 3.3.5 version of the tclsqlite library. The tclsqlite.c code never calls the callback code on empty results when PRAGMA empty_result_callbacks=1 is set.
#cfe8bd 1026 active 2004 Dec anonymous Unknown Pending 3 3 sqlite automake sqlite3.pc file does not have version information When the configure of sqlite has been done, the sqlite3.pc file does not have information in the Version: section. This means there's no way to check for versions in other autogen/configure files concerning the sqlite version in the system, style: PKG_CHECK_MODULES(SQLITE, sqlite >= 3.0.3, AC_MSG_ERROR([$SQLITE_PKG_ERRORS]))
#cfe8bd 1030 active 2004 Dec anonymous Shell Pending 2 3 impossible to import a file and do other things in the same invocation (strongly affects scripting) The sqlite2 shell can be scripted in 2 ways: 1. sqlite mydb 'commands' 2. commands | sqlite mydb The first form has a bug, which I have not submitted, and which also occurs in sqlite3: o dot commands are not fully intermixable with sql commands It will also fail for long command lines. To work around this, it is necessary to use the second form of scripting. However: o for entering data via COPY or .import, a separator is necessary to allow sql after the data o COPY has \. as a separator Here is the sqlite3 bug: o .import has no separator that I know of The only possible workaround seems to be to call the executable more than once. However: o this makes :memory: databases impossible Therefore, I said this is major with a workaround. The workaround is to use temporary files instead of memory databases, and: o inconveniently kludge the sqlite2 behavior using multiple calls to the executable However, files are far slower than memory, making the use of sqlite as an awk-like filter less attractive than it was with sqlite2. Here are 2 solutions to fix the problem, both of which are desirable: o allow dot commands and sqlite commands to be intermixable on the command line such as with "sqlite :memory: '.show;.import ...;select ...;.import ...; select ...'" o allow a separator for .import Here are some related bugs, also not submitted: o .separator is overloaded to mean input and output separators, but for scripting it would be useful to have them separate o csv mode and tabs mode are lightly documented. what are the exact syntaxes for them (line continuation, quoting, etc.), and what is the difference between tabs mode and .separator set to tab? o it might also be useful to have a .with command so that you can do .with .separator ","; .import ...; .endwith to effectively emulate (let ((...)) ...) in lisp. i.e. 
temporarily set a value to something without having to know what to set it back to when you are done. Sqlite rocks. Thanks.
#cfe8bd 1053 active 2004 Dec anonymous Pager Pending 3 3 SQLITE_IOERR and strange rollback when db is busy Environment on which bug was found:{linebreak} Windows XP, both SP1 and SP2, on different computers. The SQLite library was built using the precompiled source from the download page (as static library). Description of bug scenario: One process performs very long reads from a db (multiple joins, so the cartesian product is *very* large, and the reader needs a while to complete). Another process performs a _BEGIN TRANSACTION_ , then executes lots of _INSERT INTO ... VALUES_ .{linebreak} At some point, this process will end up in sqlite3pager_get, when it tries to read some page from the database file (the main file, not a temp file or a journal). It detects that the page is not in the page cache (it ends up in the 'else' branch of _if( pPg==0 )_ ). It runs down to the block of code covered by the following comment: /* Write the page to the database file if it is dirty. */ In this block, pager_write_pagelist( pPg ) returns with SQLITE_BUSY. As a consequence, the changes are rolled back and SQLITE_IOERR is returned. And here seems to be the problem: First, the database file is locked, so I don't understand why the SQLITE_BUSY value isn't propagated back to the caller. If SQLITE_BUSY were returned, then the application could restart the command. Second, sqlite3VdbeHalt decides to perform a sqlite3BtreeRollbackStmt, so only the last command should be rolled back. However, this is not what happens! In fact, all commands back to the beginning of the transaction are rolled back; the transaction, however, is not closed. Doesn't this violate the default rollback behaviour (roll back last command, keep transaction open)? As a consequence, even if the application would get SQLITE_BUSY, it couldn't properly react on it. 
There are other places in sqlite3pager_get where SQLITE_IOERR is returned; I've not checked whether these can also be triggered by the db being locked or whether they indicate a serious problem. I will attach the code I used to reproduce and track down the problem, together with a Visual Studio 2003 project. If you extract the archive, on toplevel you will find the following: *: Reader: the directory containing the source for the reader *: Writer: the directory containing the source for the writer *: SQLite: A directory in which to place the precompiled source for windows users, which is used to build the library. If you want to use the provided project file with Visual Studio, just copy the source in there and everything will build with a single mouse click. *: BugDemo.sln: The Visual Studio project file. *: bugdemo.sql: The SQL statements used to create the test database. How to reproduce: *: Create a database using bugdemo.sql *: Adapt reader.cpp and writer.cpp to include the sqlite3 headers, and set the define at the top of the files to the path of the test database. *: Compile everything. *: Start the reader. *: Start the writer, and wait until it reports an error (for me, it takes < 30 seconds). I tried to keep the source portable, so it shouldn't be too hard to make it compile on Unix.
#cfe8bd 1056 active 2004 Dec anonymous Shell Pending 3 3 test pragma-9.4 fails during second pass in "make fulltest" During a "make fulltest" run, the pragma tests appear to run twice. On the first run, pragma-9.4 runs properly. On the second run, it gives an error:
pragma-9.4...
Expected: []
     Got: [/Volumes/Local/Users/sqlite/test/bld]
(where the path listed is the build directory for this build of sqlite). The pragma-9.4 test is a recent addition to sqlite. This is currently the only failure I'm seeing in a "make fulltest" of the current cvs tree on Mac OS X when the build/test directory is on a hard drive.
1 errors out of 68411 tests
Failures on these tests: pragma-9.4
make: *** [fulltest] Error 1
#cfe8bd 1058 active 2005 Jan anonymous BTree Pending 3 3 btree.c pageSize -> usableSize Check-in [2125] was a fix for Ticket #1010, but left out some of the fixes proposed in the ticket. I'm not sure whether this was an oversight or an intentional omission. I tried to re-open the ticket, but those edits didn't persist (I'm not sure why). Here are the remaining instances of pageSize that I believe should be changed to usableSize in btree.c:
--- sqlite/src/btree.c.ORIG  Wed Nov 24 18:54:30 2004
+++ sqlite/src/btree.c       Wed Nov 24 20:29:21 2004
@@ -220,13 +220,13 @@
 /* The following value is the maximum cell size assuming a maximum page
 ** size give above.
 */
-#define MX_CELL_SIZE(pBt) (pBt->pageSize-8)
+#define MX_CELL_SIZE(pBt) (pBt->usableSize-8)
 
 /* The maximum number of cells on a single page of the database. This
 ** assumes a minimum cell size of 3 bytes. Such small cells will be
 ** exceedingly rare, but they are possible.
 */
-#define MX_CELL(pBt) ((pBt->pageSize-8)/3)
+#define MX_CELL(pBt) ((pBt->usableSize-8)/3)
 
 /* Forward declarations */
 typedef struct MemPage MemPage;
@@ -1745,7 +1745,7 @@
   Pgno finSize;              /* Pages in the database file after truncation */
   int rc;                    /* Return code */
   u8 eType;
-  int pgsz = pBt->pageSize;  /* Page size for this database */
+  int pgsz = pBt->usableSize;/* Usable bytes on each page */
   Pgno iDbPage;              /* The database page to move */
   MemPage *pDbMemPage = 0;   /* "" */
   Pgno iPtrPage;             /* The page that contains a pointer to iDbPage */
#cfe8bd 1063 active 2005 Jan anonymous Pending 1 3 Lemon bug: Strings in rule code should not be interpreted There are two related bugs in the lemon parser related to processing code snippets defined in rule actions. Here is a simple grammar that demonstrates the problem:
%include {
  extern int line_number;
  extern const char *file_name;
}
result(r) ::= TOKEN(s). {
  printf("BAD: Got a token on line '%d'\n", line_number);
  printf("BAD: \tFile = '%s'\n", file_name);
  r = s;
}
The first bug is that the "%d" in the first printf is interpreted by the append_str function, when it shouldn't be, producing code that looks like:
printf("BAD: Got a token on line '0d'\n", line_number);
I believe that the solution is to have append_str() NOT do %d substitution when it is copying the code. The second bug is that the "s" in the "%s" format is being interpreted as a symbolic name, producing code that looks like:
printf("BAD: \tFile = '%yymsp[0].minor.yy0'\n", file_name);
I believe that the solution is to have translate_code() ignore symbolic names inside of quoted strings.
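The fix the reporter proposes — leave string literals alone while still substituting bare symbolic names — can be sketched as a scanner in Python (this is an illustrative sketch, not lemon's actual append_str()/translate_code() code; the symbol table is an assumed input):

```python
def substitute(code, symbols):
    """Replace bare symbolic names per `symbols`, but copy string
    literals verbatim so "%d" and "%s" inside them survive."""
    out, i, n = [], 0, len(code)
    while i < n:
        c = code[i]
        if c in "\"'":                      # string literal: copy verbatim
            j = i + 1
            while j < n and code[j] != c:
                j += 2 if code[j] == "\\" else 1   # skip escaped chars
            out.append(code[i:j + 1])
            i = j + 1
        elif c.isalpha() or c == "_":       # candidate symbolic name
            j = i
            while j < n and (code[j].isalnum() or code[j] == "_"):
                j += 1
            word = code[i:j]
            out.append(symbols.get(word, word))
            i = j
        else:
            out.append(c)
            i += 1
    return "".join(out)
```

With this approach the "%s" in the printf format stays untouched while the bare `s` outside the quotes is still rewritten to its stack reference.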
#cfe8bd 1078 active 2005 Jan anonymous Pending 2 3 Lemon destructor bugs that don't affect sqlite I found a few bugs in Lemon's destructor handling code. I don't think that they affect sqlite, but they do affect other grammars.
- The code that collapses cases for default destructors erroneously assumes that all symbols have the same type.
- If a reduction rule doesn't have code, then the RHS symbols will not have their destructors called.
- The default destructor shouldn't be called on the auto-generated "error" symbol.
- In the internal function "append_str", zero-length strings may be returned un-terminated.
I have some proposed fixes that I'll try to attach to this ticket. _2005-Jan-14 13:33:52 by drh:_ {linebreak} Do you also have some test grammars? That would really be helpful. ---- _2005-Jan-14 17:14:15 by anonymous:_ {linebreak} Sure. Here is one grammar that will demonstrate the "Tokens leak when rule has no code" bug:
%token_type { char * }
%token_destructor {
  printf("Deleting token '%s' at %x\n", $$, (int)$$);
  free($$);
}
result ::= nt.
nt ::= FOO BAR.
Running the following code against the grammar should theoretically show two allocations and two destructions. It won't, though, unless you modify the rule for nt to have an empty body, like: {linebreak} nt ::= FOO BAR. {}
char *mkStr(const char *s) {
  printf("Allocating '%s' at 0x%x\n", s, (int)(s));
  return strdup(s);
}

int main(int argc, char **argv) {
  void *parser = ParseAlloc(malloc);
  Parse(parser, FOO, mkStr("foo"));
  Parse(parser, BAR, mkStr("bar"));
  Parse(parser, 0, 0);
  ParseFree(parser, free);
  return 0;
}
---- _2005-Jan-14 17:50:26 by anonymous:_ {linebreak} Here is another test grammar. This one demonstrates (a) default destructors being called on the 'error' symbol, and (b) problems with default destructors being called on the wrong symbol type.
%token_type { char * }
%token_destructor { delete [] $$; }
%default_destructor { delete $$; }

%type result { int }
%destructor result { }
result ::= fooStruct barStruct. { }

%type fooStruct { Foo * }
fooStruct(lhs) ::= FOO(f). { lhs = new Foo(f); }

%type barStruct { Bar * }
barStruct(lhs) ::= BAR(b). { lhs = new Bar(b); }
Here is the code generated by lemon (with comments added & removed for clarity):
typedef union {
  ParseTOKENTYPE yy0;
  int yy4;
  Bar * yy5;
  Foo * yy7;
  int yy15;
} YYMINORTYPE;

static const char *const yyTokenName[] = {
  "$", "FOO", "BAR", "error", "result", "fooStruct", "barStruct",
};

static void yy_destructor(YYCODETYPE yymajor, YYMINORTYPE *yypminor){
  switch( yymajor ){
    case 1:
    case 2:
      { delete [] (yypminor->yy0); }
      break;
    case 3:  /* error */
    case 5:  /* fooStruct of type "Foo *" */
    case 6:  /* barStruct of type "Bar *" */
#line 3 "typeBug.y"
      { delete (yypminor->yy5); }  /* Yikes! yy5 is a "Bar *" */
#line 308 "typeBug.c"
      break;
    case 4:
#line 6 "typeBug.y"
      { }
#line 313 "typeBug.c"
      break;
    default: break;  /* If no destructor action specified: do nothing */
  }
}
#cfe8bd 1085 active 2005 Jan anonymous Pending 2 3 pragma full_column_names and short_column_names still broken The following statement:
SELECT T1.*,D1.* FROM test T1,dt D1 WHERE T1.id=D1.id
does not give "long" column names, even if full_column_names is ON. But the following does:
SELECT T1.ID,D1.NAME FROM test T1,dt D1 WHERE T1.id=D1.id
In other words, the table-name prefix is applied only to explicitly named columns, not to "*"-selected columns.
#cfe8bd 1100 active 2005 Feb anonymous Pending 3 3 make test segfaults at capi2-7.12 on amd64 system System is Gentoo Linux 2004.1 on Opteron processor; gcc v3.3.3, creating 64 bit binaries. Here is a traceback:
capi2-7.11... Ok
capi2-7.11a... Ok
capi2-7.12...
Program received signal SIGSEGV, Segmentation fault.
0x0000002a95b25830 in strlen () from /lib/libc.so.6
(gdb) where
#0  0x0000002a95b25830 in strlen () from /lib/libc.so.6
#1  0x000000000043991c in sqlite3VdbeList (p=0x5aab60) at src/vdbeaux.c:528
#2  0x0000000000438737 in sqlite3_step (pStmt=0x13000a6023d0064) at src/vdbeapi.c:207
#3  0x0000000000416a80 in test_step (clientData=0x13000a6023d0064, interp=0x55a450, objc=0, objv=0x4) at src/test1.c:2070
#4  0x0000002a956978fe in TclEvalObjvInternal () from /usr/lib/libtcl8.4.so
#5  0x0000002a956ba181 in TclCompEvalObj () from /usr/lib/libtcl8.4.so
#6  0x0000002a956b9648 in TclCompEvalObj () from /usr/lib/libtcl8.4.so
#7  0x0000002a956e4f66 in TclObjInterpProc () from /usr/lib/libtcl8.4.so
#8  0x0000002a956978fe in TclEvalObjvInternal () from /usr/lib/libtcl8.4.so
#9  0x0000002a956982d8 in Tcl_EvalEx () from /usr/lib/libtcl8.4.so
#10 0x0000002a95698097 in Tcl_EvalTokensStandard () from /usr/lib/libtcl8.4.so
#11 0x0000002a95698273 in Tcl_EvalEx () from /usr/lib/libtcl8.4.so
#12 0x0000002a95698767 in Tcl_EvalObjEx () from /usr/lib/libtcl8.4.so
#13 0x0000002a956e4a93 in Tcl_UplevelObjCmd () from /usr/lib/libtcl8.4.so
#14 0x0000002a956978fe in TclEvalObjvInternal () from /usr/lib/libtcl8.4.so
#15 0x0000002a956ba181 in TclCompEvalObj () from /usr/lib/libtcl8.4.so
#16 0x0000002a956b9648 in TclCompEvalObj () from /usr/lib/libtcl8.4.so
#17 0x0000002a956e4f66 in TclObjInterpProc () from /usr/lib/libtcl8.4.so
#18 0x0000002a956978fe in TclEvalObjvInternal () from /usr/lib/libtcl8.4.so
#19 0x0000002a956982d8 in Tcl_EvalEx () from /usr/lib/libtcl8.4.so
#20 0x0000002a95698767 in Tcl_EvalObjEx () from /usr/lib/libtcl8.4.so
#21 0x0000002a956e4a93 in Tcl_UplevelObjCmd () from /usr/lib/libtcl8.4.so
#22 0x0000002a956978fe in TclEvalObjvInternal () from /usr/lib/libtcl8.4.so
#23 0x0000002a956ba181 in TclCompEvalObj () from /usr/lib/libtcl8.4.so
#24 0x0000002a956b9648 in TclCompEvalObj () from /usr/lib/libtcl8.4.so
#25 0x0000002a95698815 in Tcl_EvalObjEx () from /usr/lib/libtcl8.4.so
#26 0x0000002a9569c486 in Tcl_CatchObjCmd () from /usr/lib/libtcl8.4.so
#27 0x0000002a956978fe in TclEvalObjvInternal () from /usr/lib/libtcl8.4.so
#28 0x0000002a956ba181 in TclCompEvalObj () from /usr/lib/libtcl8.4.so
#29 0x0000002a956b9648 in TclCompEvalObj () from /usr/lib/libtcl8.4.so
#30 0x0000002a95698815 in Tcl_EvalObjEx () from /usr/lib/libtcl8.4.so
#31 0x0000002a9569ef7e in Tcl_IfObjCmd () from /usr/lib/libtcl8.4.so
#32 0x0000002a956978fe in TclEvalObjvInternal () from /usr/lib/libtcl8.4.so
#33 0x0000002a956ba181 in TclCompEvalObj () from /usr/lib/libtcl8.4.so
#34 0x0000002a956b9648 in TclCompEvalObj () from /usr/lib/libtcl8.4.so
#35 0x0000002a956e4f66 in TclObjInterpProc () from /usr/lib/libtcl8.4.so
#36 0x0000002a956978fe in TclEvalObjvInternal () from /usr/lib/libtcl8.4.so
#37 0x0000002a956982d8 in Tcl_EvalEx () from /usr/lib/libtcl8.4.so
#38 0x0000002a956d1e41 in Tcl_FSEvalFile () from /usr/lib/libtcl8.4.so
#39 0x0000002a956d120e in Tcl_EvalFile () from /usr/lib/libtcl8.4.so
#40 0x0000000000424641 in main (argc=2, argv=0x7fbffff138) at src/tclsqlite.c:1744
(gdb)
_2005-Feb-04 23:11:02 by anonymous:_ {linebreak} Some more information from a gdb session (I added the printf at line 527 and recompiled; the line numbers below will be off by one from the stack trace above):
(gdb) up
#1  0x000000000043996d in sqlite3VdbeList (p=0x5aab60) at src/vdbeaux.c:529
529       pMem->n = strlen(pMem->z);
(gdb) list 525,535
525
526       pMem->flags = MEM_Static|MEM_Str|MEM_Term;
527       printf("src/vdbeaux.c sqlite3VdbeList pOp->opcode= %d\n", pOp->opcode);
528       pMem->z = sqlite3OpcodeNames[pOp->opcode];  /* Opcode */
529       pMem->n = strlen(pMem->z);
530       pMem->type = SQLITE_TEXT;
531       pMem->enc = SQLITE_UTF8;
532       pMem++;
533
534       pMem->flags = MEM_Int;
535       pMem->i = pOp->p1;  /* P1 */
(gdb) p pOp->opcode
$1 = 40 '('
(gdb) p pMem->z
$2 = 0x175002100200173
#f2dcdc 1111 active 2005 Feb anonymous Unknown Pending 3 1 no such column error with a subselect as a table Execute the following script. In 3.0.8, you get the following results:
line 0
line 2
In 3.1.1 beta, you get the following error:
SQL error: no such column: bb.a
SCRIPT:
create table test1 (a int);
insert into test1 values (0);
insert into test1 select max(a)+1 from test1;
insert into test1 select max(a)+1 from test1;
create table test2 (a int, b text);
insert into test2 select a,'line ' || a from test1;
select test2.b from (select test1.a from test1 where a%2 = 0) as bb join test2 on bb.a = test2.a;
_2005-Feb-21 17:48:20 by anonymous:_ {linebreak} Tested in 3.1.3, issue still exists ---- _2005-Jun-12 22:09:08 by drh:_ {linebreak} Workaround:
select test2.b from (select test1.a as a from test1 where a%2=0) as bb join test2 on bb.a=test2.a;
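The workaround can be checked with Python's stdlib sqlite3 module (against a modern SQLite, where the original query also works; the script is the one from the ticket):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table test1 (a int);
    insert into test1 values (0);
    insert into test1 select max(a)+1 from test1;
    insert into test1 select max(a)+1 from test1;
    create table test2 (a int, b text);
    insert into test2 select a, 'line ' || a from test1;
""")
# The workaround: alias test1.a explicitly inside the subselect.
rows = con.execute(
    "select test2.b from (select test1.a as a from test1 where a%2=0) as bb"
    " join test2 on bb.a = test2.a"
).fetchall()
```

With a%2 = 0 matching rows 0 and 2, the query returns 'line 0' and 'line 2', matching the 3.0.8 output quoted in the ticket.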
#cfe8bd 1134 active 2005 Feb anonymous Pending drh 3 3 select command with a view and a condition gets no result In a view joining more than one table, it is no longer possible to use that view within a select command; the columns of the view are not found.
#f2dcdc 1142 active 2005 Feb anonymous Parser Pending danielk1977 1 1 Column names
create table a (id, x);
create table b (id, y);
insert into a values (1,1);
insert into b values (1,2);
select * from a inner join b;
Column names returned: id,x,id,y. How am I supposed to use such column names? Ouwey. _2005-Mar-15 07:28:43 by anonymous:_ {linebreak} This bug breaks existing applications where 'SELECT rowid, * FROM table' was used to open any table and then precompiled statements were used to update the changed record by using 'UPDATE table SET =?, =? ... WHERE rowid=?'. We cannot update to sqlite v3.1.x because of this. ---- _2005-Mar-15 08:36:31 by drh:_ {linebreak} There are lots of people telling me the current behavior is wrong. But nobody has yet suggested what the correct behavior should be. Until I know what the correct behavior should be, there is little I can do to fix the problem. Rather than just telling me the current behavior is wrong, please offer an explanation of what the correct behavior should be. What do Oracle and PostgreSQL do in the same situation? ---- _2005-Mar-16 09:04:30 by anonymous:_ {linebreak} Dr. Hipp, very sorry for the confusion I've caused! My comments were actually targeted at bug #1141! I've included the same remarks there. As for this bug: I can only tell you how SQL Server reacts:
select * from a inner join b
Line 1: Incorrect syntax near 'b'.
select * from a inner join b on a.id = b.id
| id | x | id | y |
-------------------
|  1 | 1 |  1 | 2 |
So SQL Server behaves like sqlite in returning column names, although it doesn't accept the syntax 'select * from a inner join b'. I guess this behavior is normal and it's the programmer's responsibility to use aliases in the SQL statement. Just my personal opinion, though.
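The duplicated names are easy to see from any wrapper that exposes result-column names; a sketch with Python's stdlib sqlite3 module (illustrative aliases, not from the ticket) also shows the usual fix of aliasing the columns explicitly:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table a (id, x);
    create table b (id, y);
    insert into a values (1,1);
    insert into b values (1,2);
""")

cur = con.execute("select * from a inner join b")
ambiguous = [d[0] for d in cur.description]    # both tables contribute "id"

# Explicit aliases give every result column a distinct, usable name.
cur = con.execute(
    "select a.id as a_id, a.x, b.id as b_id, b.y from a inner join b")
explicit = [d[0] for d in cur.description]
```

This matches the comment above: the engine reports whatever names the SELECT list produces, and disambiguating them is left to the programmer.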
#f2dcdc 1149 active 2005 Feb anonymous Pending 2 1 VACUUM, DUMP/RESTORE fail in certain cases When a database has interacting views, such as the following:
CREATE VIEW test1 AS SELECT * FROM tableA;
CREATE VIEW test AS SELECT COUNT(*) FROM test1;
then a dump and restore fails, because the views will be created in the alphabetical order of their names, rather than in the order of their dependence. Thus, the creation of view test will fail. Because the script otherwise runs to completion, the restore will usually be adequate except that the views are not recreated. VACUUM appears to fail the same way, probably for the same reason. In this case, however, the VACUUM fails without clearly alerting the user. I ran into this problem trying to VACUUM a large file, which produced an otherwise inexplicable error message about the syntax of a dependent view definition. Users can work around the problem, once it is understood, either by renaming views so that their names sort alphabetically in an order consistent with their dependency, or by dropping them prior to a vacuum or dump/restore and recreating them afterwards. Of course, the view order can be repaired in the dump file, but this usually requires patience with a powerful text editor, and some capacity to understand the problem.
#f2dcdc 1158 active 2005 Mar anonymous VDBE Pending 1 1 core dump on solaris Some sqlite functions dumped core on Solaris. I tracked this back to a byte-alignment problem in vdbeInt.h. I added the __attribute__ ((__aligned__(16))) to the zShort character string:
/*
** A single level of the stack or a single memory cell
** is an instance of the following structure.
*/
struct Mem {
  int i;         /* Integer value */
  int n;         /* Number of characters in string value, including '\0' */
  int flags;     /* Some combination of MEM_Null, MEM_Str, MEM_Dyn, etc. */
  double r;      /* Real value */
  char *z;       /* String value */
  char zShort[NBFS] __attribute__ ((__aligned__(16)));  /* Space for short strings */
};
typedef struct Mem Mem;
_2005-Mar-09 22:06:47 by anonymous:_ {linebreak} I think this is a duplicate of bug #700. Simply adding the attribute seems to resolve the bug. Select the sum of an integer column to reproduce:
sqlite> select sum(seqval) from tbl;
Bus Error(coredump)
#cfe8bd 1191 active 2005 Mar anonymous Unknown Pending 2 3 last_insert_rowid() Does Not Work after Insert on View If I have a view with an INSERT trigger attached to it, and then use the view to insert a record, last_insert_rowid() does not return the ID inserted. For example, given this DDL:
CREATE TABLE _simple (
  id INTEGER NOT NULL PRIMARY KEY,
  name TEXT
);
CREATE VIEW simple AS
  SELECT _simple.id AS id, _simple.name AS name FROM _simple;
CREATE TRIGGER insert_simple INSTEAD OF INSERT ON simple
FOR EACH ROW BEGIN
  INSERT INTO _simple (name) VALUES (NEW.name);
END;
This is what I get when I use the view to insert:
sqlite> insert into simple (name) values ('foo');
sqlite> select last_insert_rowid();
last_insert_rowid()
-------------------
0
It does work if I insert directly into the table, of course:
sqlite> insert into _simple (name) values ('foo');
sqlite> select last_insert_rowid();
last_insert_rowid()
-------------------
2
_2005-Mar-31 06:04:59 by anonymous:_ VIEWs are read-only so you can not INSERT into them. See: http://www.sqlite.org/omitted.html Regards, Bartosz. ---- _2005-Mar-31 18:12:10 by anonymous:_ {linebreak} Bartosz, please note that triggers have been applied to the view, so you can actually insert into them. I do it all the time. For example, given my previous examples, this insert will work:
sqlite> insert into simple (id, name) values (1, 'foo');
Pretty cool, eh? See: http://www.sqlite.org/lang_createtrigger.html --Theory ---- _2005-Mar-31 18:21:08 by drh:_ {linebreak} The last_insert_rowid() routine *does* return the correct insert rowid while you are still within the trigger. Once you leave the trigger, last_insert_rowid() returns the rowid of the most recently inserted row outside of any trigger. This is by design. If last_insert_rowid() were to be responsive to inserts done by triggers, then any AFTER INSERT trigger that happened to update a logfile would overwrite the last_insert_rowid() from the actual INSERT.
One could argue, I suppose, that last_insert_rowid() should work for inserts performed by an INSTEAD OF trigger but not by other kinds of triggers. I will ponder that notion and might implement it if I cannot think of any objections. ---- _2005-Mar-31 18:25:14 by anonymous:_ {linebreak} Allowing it to persist past the trigger in an INSTEAD OF trigger would certainly do what I need. I think that'd make a lot of sense. All of this should probably be well-documented somewhere. I'd be happy to add a page to the wiki once you've decided how to proceed with this issue. Thanks! Theory ---- _2005-Apr-27 02:21:21 by anonymous:_ {linebreak} Have you had a chance to think more on this issue? Thanks, Theory ---- _2005-Nov-08 03:06:51 by anonymous:_ {linebreak} Just checking in on this issue again. Do you think that it's something that will be resolved, one way or the other, soon? Thanks, Theory ---- _2005-Nov-08 17:44:14 by anonymous:_ {linebreak} The INSTEAD OF INSERT trigger can't, in general, update the last_insert_rowid value automatically and do it correctly, since a trigger may do multiple inserts into multiple tables. SQLite has no idea which rowid should be reported back outside the trigger. This is left to the user (i.e. the author of the trigger). You can get the last_insert_rowid after the appropriate insert in the trigger and then save that value into an auxiliary table that is visible outside the trigger. This is especially important when your view is created from entries in several joined tables. Your INSTEAD OF INSERT trigger must do inserts into the individual tables, and may need to use the last_insert_rowid function to link the records in the tables together correctly. It then needs to return the "master" rowid for the table. SQL doesn't have any syntax to specify which value is returned as the rowid of the INSTEAD OF trigger, so SQLite doesn't do anything automatically.
You need to create a separate last_rowid table, and inside the trigger you update the value of a row in that table (possibly the only row) with the value you want to return. Then the code that does the insert into the view needs to get the value of that row instead of calling last_insert_rowid. This type of behavior is needed since there is no limit to the number of nested INSTEAD OF triggers that could be executed by a single SQL statement (i.e. one INSTEAD OF trigger could do an insert into another view... and so on). The current behavior isn't quite as convenient in the simplest case, but it works correctly in the more complicated general case, where simply updating the last_insert_rowid value would be wrong. ---- _2005-Nov-08 22:51:56 by anonymous:_ {linebreak} I think that there's an argument to be made that last_insert_rowid() should return the last inserted row ID, even if a trigger inserted a bunch. It should simply return the last one entered. However, I agree with your analysis. Could there perhaps be a set_insert_id() function, or some such, that the trigger could use to tell last_insert_rowid() what to return?
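The auxiliary-table workaround described above can be sketched with Python's stdlib sqlite3 module (the last_rowid table and trigger names are illustrative, not part of any SQLite API):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE _simple (id INTEGER NOT NULL PRIMARY KEY, name TEXT);
    CREATE VIEW simple AS SELECT id, name FROM _simple;

    -- Auxiliary one-row table that the trigger keeps up to date.
    CREATE TABLE last_rowid (id INTEGER);
    INSERT INTO last_rowid VALUES (0);

    CREATE TRIGGER insert_simple INSTEAD OF INSERT ON simple
    FOR EACH ROW BEGIN
        INSERT INTO _simple (name) VALUES (NEW.name);
        -- Inside the trigger, last_insert_rowid() sees the insert above.
        UPDATE last_rowid SET id = last_insert_rowid();
    END;
""")

con.execute("INSERT INTO simple (name) VALUES ('foo')")
rowid = con.execute("SELECT id FROM last_rowid").fetchone()[0]
```

The caller reads last_rowid after inserting through the view, instead of calling last_insert_rowid(), exactly as the comment suggests.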
#cfe8bd 1200 active 2005 Apr anonymous Pending 2 3 new versions of SQLite return different (incorrect) results Newer versions of SQLite are returning different, and I believe incorrect, results compared to those returned by older versions. Using version 3.0.8 the following query returns the correct results given the attached database:
sqlite> select * from device_property_list where device_property_value = 0;
1|Station|3|Initial Volume|0
1|Station|1|Template|0
2|Station|3|Initial Volume|0
2|Station|1|Template|0
3|Station|3|Initial Volume|0
3|Station|1|Template|0
This same query does not return any results using versions 3.1.6 and 3.2. If the query is changed slightly, by quoting the zero in the where condition, then both of the newer versions return the same set of results as 3.0.8:
sqlite> select * from device_property_list where device_property_value = '0';
1|Station|3|Initial Volume|0
1|Station|1|Template|0
2|Station|3|Initial Volume|0
2|Station|1|Template|0
3|Station|3|Initial Volume|0
3|Station|1|Template|0
device_property_list is a view that hides a complex join of several tables, and a long case expression that selects the value to return for device_property_value, the field that is being tested by the condition. I have attached a sample database and the sql script used to create it for testing.
#f2dcdc 1201 active 2005 Apr anonymous CodeGen Pending danielk1977 1 1 wrongly defined type UINT8_TYPE In file sqliteInt.h, line 203:
typedef UINT8_TYPE i8;   /* 1-byte signed integer */
It should be INT8_TYPE, since i8 is a signed type.
#f2dcdc 1205 active 2005 Apr anonymous Unknown Pending 3 1 accent grave, accent aigu etc. in connection string Hi. Whenever I use an accent grave or accent aigu or any other accented character in the connection string, sqlite turns this into weird characters in the file name of the database. Since my database name depends on user input, I can't eliminate this situation... bart@arlanet.com _2005-Apr-13 12:16:08 by drh:_ {linebreak} What version of SQLite? What operating system? ---- _2005-Apr-14 12:22:32 by anonymous:_ {linebreak} Windows 2000 (Dutch version), SQLite version 3. The word privé in the connection string becomes privĂ©.db3 on the disk, which causes errors afterwards when opening the database. ---- _2005-Apr-14 12:23:06 by anonymous:_ {linebreak} I see something goes wrong with the accents in the previous remark as well. ---- _2005-Apr-14 12:29:41 by anonymous:_ {linebreak} It really sounds to me like you're taking an accented character encoded in ISO8859-1 or some similar single-byte encoding and feeding it to something that's expecting a UTF-8 encoded string. In UTF-8, any byte that has its high bit set *must* be part of a multi-byte sequence. ---- _2005-Apr-14 14:00:56 by anonymous:_ {linebreak} After converting my string to a UTF-8 encoded string, it still does the same thing... ---- _2005-Apr-14 14:06:41 by anonymous:_ {linebreak} using sqlite.net
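The garbling reported here is classic mojibake: the UTF-8 bytes of "privé" rendered through a single-byte Windows code page. A small sketch (cp1250 is an assumption on my part; it happens to reproduce the exact characters quoted in the report):

```python
# "privé" encoded as UTF-8 is b'priv\xc3\xa9'; decoding those two bytes
# with a single-byte code page yields two garbage characters instead of é.
utf8_bytes = "privé".encode("utf-8")
mojibake = utf8_bytes.decode("cp1250")
```

This supports the diagnosis in the thread: the accented name reaches the file-system layer in one encoding while something along the way interprets it in another.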
#e8e8bd 1213 active 2005 Apr anonymous Parser Pending 3 2 Problem with alias columns in subqueries The following query raises an SQL error while it seems to me to be SQL-compliant (in fact it works fine with a lot of database servers):
sqlite> select i from (select 0 as i union all select 1) as tmp;
SQL error: no such column: i
I found a workaround by writing the following SQL query (it works well if you make the implicit column i explicit):
select i from (select 0 as i union all select 1 as i) as tmp;
Thanks for your help. Jerome _2005-Apr-17 11:03:52 by anonymous:_ {linebreak} SQLite appears to take the column names from the last clause in a UNION (your original query works if you reverse the order of the clauses). IIRC, the standard says that the column names in a UNION of clauses with different column names are DBMS-dependent, and while many DBMSs take them from the first clause, this is not something to count on. ---- _2005-Apr-18 19:53:09 by anonymous:_ {linebreak} You're perfectly right, but it seems to me that it would be better if sqlite respected the "de facto" standard.
#cfe8bd 1214 active 2005 Apr anonymous Unknown Pending 2 3 sqlite3_column_bytes returns 0 on p3 column with EXPLAINed selects Both functions sqlite3_column_bytes() and sqlite3_column_bytes16() do not return the correct length for the p3 text column of EXPLAIN select queries. Both functions always return 0, even if the p3 column contains text. The bug can be easily reproduced with the following query:
EXPLAIN SELECT 'text';
The p3 column, row 2, contains the word 'text', but the functions return 0 regardless. I have not seen this bug with non-EXPLAIN queries, but it breaks code which relies on the fact that sqlite3_column_bytes always returns the correct length of the text and needs to preallocate memory accordingly.
#cfe8bd 1228 active 2005 Apr anonymous Unknown Pending 4 3 problem with select+union on a view with aliased columns
CREATE TABLE tbl1 (col1 VARCHAR PRIMARY KEY);
INSERT INTO tbl1 VALUES ('1');
INSERT INTO tbl1 VALUES ('2');
CREATE VIEW view1 AS SELECT col1 AS col1a FROM tbl1;
This creates a view with one column (col1a). Normal SELECTs on this view return results as expected, but the following:
SELECT col1a FROM view1 WHERE col1a = 1
UNION
SELECT col1a FROM view1 WHERE col1a = 2;
Produces:
col1
1
2
When the column name should in fact be col1a (this is the behaviour in postgres). You can work around this by doing:
SELECT col1a AS col1a FROM view1 WHERE col1a = 1
UNION
SELECT col1a AS col1a FROM view1 WHERE col1a = 2;
But that shouldn't be necessary. Thanks. --Sebastian Kun
#cfe8bd 1235 active 2005 May anonymous Unknown Pending drh 4 3 inconsistent pragma handling The pragmas user_version and schema_version are handled inconsistently in this respect: the result set returned contains a single column that has no name. All other pragmas return named columns. Since some high-level languages complain about db fields with no names (most wrappers will gag at this), I suggest that a simulated "column_name" is generated here as well, as with all other pragmas.
#f2dcdc 1248 active 2005 May anonymous Unknown Pending 3 1 sqlite3_get_table returns garbage for BLOB data BLOB data returned by sqlite3_get_table() is garbage when it contains data bytes holding '0' values. Code: Windows, 3.2.1, VC 6.0:
char data[128];
for(int i = 0; i < 128; i++){
  data[i] = i+1;
}
sqlite3_exec(thedb, "CREATE TABLE test (b BLOB)", NULL, NULL, NULL);
result = sqlite3_prepare(thedb, "INSERT INTO test (b) VALUES (?)", -1, &cstate, NULL);
sqlite3_bind_blob(cstate, 1, data, 128, SQLITE_STATIC);  /* bind the data to the statement */
result = sqlite3_step(cstate);
result = sqlite3_finalize(cstate);
sqlite3_get_table(thedb, "SELECT * FROM test", result, rows, columns, NULL);
This sqlite3_get_table() returns a properly laid out table. However, it looks like sqlite3_get_table() is converting blob data to another type of data (string?) as it processes the command. The BLOB data it returns using this call is completely corrupted, due to processing '0'-valued bytes as EOS characters. Instead it should be inferring BLOB data where appropriate, and returning a correct data block. As a workaround I've been using the prepare/step functions instead. However, sqlite3_get_table() is a necessity for many users of this library, as it allows a simple and elegant SQL query mechanism, and should be fixed ASAP to support BLOB data properly. _2005-Sep-27 01:44:39 by anonymous:_ {linebreak} sqlite3_get_table is considered legacy code, intended to make porting of sqlite2 applications (which never had to deal with BLOBs) easier. Its use in new applications is deprecated. If you need something like it that can handle BLOBs, you're best off writing a wrapper function for the prepare/step interface (you can use the sqlite3_get_table code as a template).
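The recommended workaround — the prepare/step (parameter-binding) interface — round-trips BLOBs with embedded zero bytes intact. A sketch with Python's stdlib sqlite3 module, which wraps exactly that interface:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test (b BLOB)")

data = bytes(range(256))                    # includes a 0x00 byte
con.execute("INSERT INTO test (b) VALUES (?)", (data,))   # bind + step
(blob,) = con.execute("SELECT b FROM test").fetchone()
```

Because the value travels as a length-counted blob rather than a NUL-terminated string, no byte is mistaken for an end-of-string marker.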
#cfe8bd 1255 active 2005 May anonymous Pending 4 3 Decrease number of warnings with Microsoft Visual C++ Add this to sqliteInt.h to decrease the number of warnings produced by sqlite:
#if defined(_MSC_VER)
#pragma warning (disable: 4018)  // signed/unsigned mismatch
#pragma warning (disable: 4244)  // conversion from 'unsigned __int64 ' to 'unsigned char ', possible loss of data
#pragma warning (disable: 4761)  // integral size mismatch in argument; conversion supplied
#endif
_2005-May-20 19:27:50 by drh:_ {linebreak} Is there no command-line option on microsoft to disable these warnings? ---- _2005-May-21 08:56:15 by anonymous:_ {linebreak} It's possible to lower the warning level, from say 3 to 2, using the command line. But this is not as selective and will remove more warnings than those #pragmas.
#cfe8bd 1264 active 2005 May anonymous Unknown Pending 3 3 access() undefined on MSVC
shell.c(1705) : warning C4013: 'access' undefined; assuming extern returning int
Add this:
#if defined(_WIN32) && defined(_MSC_VER)
# include <io.h>
#endif
#e8e8bd 1278 active 2005 Jun anonymous Pending 1 2 sqlite3_finalize doesn't clear previous error code or message A call to sqlite3_finalize(), after an error during sqlite3_prepare() of another statement, returns the correct result SQLITE_OK, but does not reset the error code or error message returned by sqlite3_errcode() and sqlite3_errmsg(). The error reporting functions still return the error code and message associated with the error that occurred during the previous prepare. The attached code demonstrates the problem. One statement is prepared successfully. Then a second statement is prepared. This one fails and returns an error result. The correct error code and message are retrieved using the error reporting API functions. Next, the first statement is finalized, which returns SQLITE_OK. Calling the sqlite3_errcode() function at this point still returns the error code from the previous error. I believe the error code and message should be cleared by the successful call to the sqlite3_finalize() API function.
#cfe8bd 1298 active 2005 Jun anonymous Shell Pending drh 4 3 sqlite3.exe not recognizing newline as comment terminator The sqlite3.exe program does not recognize comments that end in a newline if there is valid SQL before them on the same line. The statement: {linebreak} "SELECT 1; -- this is a comment" {linebreak} should be valid SQL, but sqlite3.exe still requires a semicolon after the comment. Please look at the test below:
C:\Temp\SQLite>sqlite3
SQLite version 3.2.1
Enter ".help" for instructions
sqlite> select 1;
1
sqlite> select 1; -- test comment
   ...> ;
1
sqlite> select 1; -- test;
1
sqlite>
_2005-Jun-22 18:20:37 by anonymous:_ {linebreak} The string "SELECT 1; -- this is a comment" is not valid SQL. SQL does not use the semicolon character inside SQL statements except to separate the constituent statements of a trigger, stored procedure, or an embedded program. The semicolon is used by sqlite3.exe (and standard SQL) as an end-of-statement marker that triggers it to parse and execute the input up to that point. From the SQL 2003 standard: 21 Direct invocation of SQL 21.1 Function Specify direct execution of SQL. Format ::= This SQL statement should be "SELECT 1 --this is a comment" followed by a semicolon to mark the end of the statement. Putting the semicolon in the middle terminates the first statement at that point, executes it, and then starts collecting input for a second statement (which in this case does not have a terminating semicolon). Note that sqlite3 also accepts the word GO (used by SQL Server) or the Oracle-compatible character "/" as the end-of-statement marker, but only when they appear on a line of input by themselves (i.e. at the continuation prompt in the shell).
sqlite> SELECT 1 --this is a comment
   ...> go
1
sqlite> SELECT 1 --this is a comment
   ...> /
1
sqlite>
#f2dcdc 1305 active 2005 Jun anonymous TclLib Pending 1 1 Tcl installs pkgIndex with wrong path Correct script below:

# This script attempts to install SQLite3 so that it can be used
# by TCL. Invoke this script with single argument which is the
# version number of SQLite. Example:
#
#     tclsh tclinstaller.tcl 3.0
#
set VERSION [lindex $argv 0]
set LIBFILE .libs/libtclsqlite3[info sharedlibextension]
if { ![info exists env(DESTDIR)] } {
  set DESTDIR ""
} else {
  set DESTDIR $env(DESTDIR)
}
set LIBDIR [lindex $auto_path 0]
set LIBNAME [file tail $LIBFILE]
set LIB $LIBDIR/sqlite3/$LIBNAME

file delete -force $DESTDIR$LIBDIR/sqlite3
file mkdir $DESTDIR$LIBDIR/sqlite3
set fd [open $DESTDIR$LIBDIR/sqlite3/pkgIndex.tcl w]
puts $fd "package ifneeded sqlite3 $VERSION \[list load $LIB sqlite3\]"
close $fd

# We cannot use [file copy] because that will just make a copy of
# a symbolic link. We have to open and copy the file for ourselves.
#
set in [open $LIBFILE]
fconfigure $in -translation binary
set out [open $DESTDIR$LIB w]
fconfigure $out -translation binary
puts -nonewline $out [read $in]
close $in
close $out
#f2dcdc 1312 active 2005 Jun anonymous Shell Pending 1 1 CSV file import / export is all wrong! Four problems: Importing a proper CSV file (which delimits strings within double-quotes) is impossible. The '.import' command just treats the double-quotes as ordinary text characters. So (1) the error "expected 2 columns of data but found 3" can occur if one of the strings contains a comma, (2) the delimiting double-quotes are NOT stripped off before inserting the data into the table, as they should be, (3) it doesn't understand the standard convention that to represent a double-quote character within a double-quoted string you use TWO double-quotes (e.g. "3.5"" Floppy Drive"), and (4) outputting data in CSV mode also doesn't use this standard convention.
--------------------------------------------
_Product.csv contains:
"A001","McVities"
"B001","Heinz"
"C001","Callard,Bowser"

sqlite> .mode csv
sqlite> .import _Product.csv Product
_Product.csv line 3: expected 2 columns of data but found 3

_2005-Aug-12 23:13:54 by anonymous:_ {linebreak} Commas are also not accounted for when using the sqlite3_mprintf() function(s). I'm guessing the %q flag should also escape these.
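For reference, the quoting conventions the ticket describes are exactly what a standard CSV parser implements; a quick check with Python's csv module (an illustration of the expected behavior, not the shell's .import code):

```python
import csv

# A comma inside a double-quoted field is data, not a separator,
# so this line parses as exactly two fields.
rows = list(csv.reader(['"C001","Callard,Bowser"']))

# A doubled double-quote inside a quoted field is one literal quote,
# per the usual CSV convention the ticket cites.
drive = list(csv.reader(['"B001","3.5"" Floppy Drive"']))
```

Both parses also strip the delimiting quotes, which is complaint (2) above.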
#f2dcdc 1323 active 2005 Jul anonymous Pending 1 1 misuse-4.4...gmake: *** [test] Segmentation Fault (core dumped) misuse-4.4...gmake: *** [test] Segmentation Fault (core dumped) on both solaris 8 and 9 build env export CPPFLAGS="-I/tps/include" export LDFLAGS="-L/tps/lib -R/tps/lib" export PKG_CONFIG_PATH=/tps/lib/pkgconfig CC=/tps/bin/gcc CXX=/tps/bin/g++ LD_LIBRARY_PATH=/tps/lib:/tps/lib/sparcv9:/lib:/usr/lib:/usr/local/lib:\ /usr/ccs/lib:/usr/dt/lib:/usr/ucblib:/usr/openwin/lib PATH=/tps/bin:/tps/java/bin:/dsw/source/bin:/dsw/depot-5.13/bin:\ /usr/ccs/bin:/usr/bin:/usr/openwin/bin:/bin:/usr/local/bin:/sbin:\ /usr/sbin:/usr/ucb:/etc:.:/sfoc/bin:/usr/dt/bin:\ /dsw/source/harvest/bin:/usr/afsws/bin:/dsw/pgp-2.6.2s/bin export CC CXX LD_LIBRARY_PATH PATH where /tps is my version of /usr/local where I put all the configuration controlled open source and licensed s/w for my network. [525]$ ../configure --prefix=/dsw/sqlite-3.2.2 --with-tcl=/tps/lib gmake gmake test then got error gcc -v Reading specs from /dsw/gcc-3.4.0/lib/gcc/sparc-sun-solaris2.9/3.4.0/specs Configured with: ../configure --prefix=/dsw/gcc-3.4.0 --disable-nls --enable-languages=c,c++,f77,objc --disable-libgcj --srcdir=/export/build/gcc-3.4.0 --with-ld=/usr/ccs/bin/ld Thread model: posix gcc version 3.4.0 using ActiveTcl8.4.5.0
#cfe8bd 1327 active 2005 Jul ghaering VDBE Pending 2 3 UNION causes sqlite3_column_decltype to always return NULL create table test(foo bar); select foo from test union select foo from test; Normally, sqlite3_column_decltype() will return 'bar', but with the UNION, it will always return NULL. This is quite annoying for pysqlite users, because it renders the type detection useless for UNION queries (and EXCEPT and INTERSECT).
#f2dcdc 1342 active 2005 Jul anonymous Pending 1 1 sqlite 3.2.2 will not load on Suse Linux 9.3 When trying to load the sqlite 3.2.2 .so lib with Tcl I get this problem: couldn't load file "/usr/lib/sqlite3/tclsqlite-3.2.2.so": /usr/lib/sqlite3/tclsqlite-3.2.2.so: undefined symbol: sqlite3_version Sqlite 3.2.1 does not give an error with the same script
#f2dcdc 1351 active 2005 Aug anonymous Shell Pending 1 1 Unable to parse UTF8 input I'm in the process of writing a program which parses in UTF8 data, then processes it and writes a UTF8 output into a text file. This text file needs to be imported into SQLite. However, the command-line SQLite program doesn't support UTF8 input text files for its ".read" command. Considering the database itself supports UTF8, would it be possible to allow UTF8 text file input? I can't progress much further on my program if this can't be fixed. _2005-Aug-08 13:56:24 by drh:_ {linebreak} Please attach an example UTF-8 script that ".read" is not reading correctly. ---- _2005-Aug-08 17:58:44 by anonymous:_ {linebreak} I can't seem to attach a text file, so instead I'll put it on my FTP, and it should be accessible from there. If you have trouble with that, I could email the text file. So if you have trouble, give me an email address to send it to. There will be a sample of a text file that won't ".read" available here: ftp://62.231.38.73/ in approximately 10 minutes. Thanks for the fast reply. I'd be surprised if it was a problem with my text file generation, but stranger things have happened :p ---- _2005-Aug-11 01:40:08 by drh:_ {linebreak} Attach files using the [Attach] hyperlink at the top-right of this page. Please do not send RAR files since that is an obscure archive format. If you want to use a compressed archive, make it either ZIP or GZIP. ---- _2005-Aug-11 09:46:23 by anonymous:_ {linebreak} I can only attach one, because neither WinZip nor gzip are great at compressing text files. The files keep ending up over 100kb, except for the artists.txt file; that's the only one that went below 100kb. I'm unwilling to try editing the file to remove lines of text from it, because I want you to have the exact output that I'm getting from my program, not the output that I'd get from Notepad if I edited it. That will help identify whether it is a problem in my program or a problem with SQLite.
It's always possible it's a problem with my source data and it's not actually proper UTF8, but I doubt that, as SQLite doesn't seem to read the normal text correctly (such as the first line, which is supposed to start a transaction; SQLite instead ignores the line and throws an error when it reaches the commit; at the end).
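One guess about the ticket above (not confirmed by the report): a UTF-8 byte-order mark at the start of the file would produce exactly this symptom, because the BOM bytes glue onto the first keyword and the opening BEGIN goes unrecognized. A minimal Python check for that:

```python
# Check whether a file begins with the UTF-8 byte-order mark (EF BB BF).
# A BOM in front of "BEGIN TRANSACTION;" makes that first token
# unrecognizable to a reader that does not skip it.
def starts_with_utf8_bom(path):
    with open(path, "rb") as f:
        return f.read(3) == b"\xef\xbb\xbf"
```

If the check is positive, re-saving the script without a BOM would be worth trying before blaming the shell.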
#cfe8bd 1365 active 2005 Aug anonymous Pending 3 3 64 bit types not completely overridable The current 64-bit types in sqlite3.h and sqliteInt.h do not allow the type to be overridden using a preprocessor definition, unlike all the other base types. The current 64-bit typedefs assume that a "long long" is 64 bits - this is not guaranteed (and on PS2 it is wrong; long long is 128 bits). Here are some minor patches that should allow these types to be overridden, but keep the old behavior if they are not:

==== //sqlite-3.2.2/src/sqlite3.h#1 - sqlite-3.2.2\src\sqlite3.h ====
81,83c81,83
< #if defined(_MSC_VER) || defined(__BORLANDC__)
<   typedef __int64 sqlite_int64;
<   typedef unsigned __int64 sqlite_uint64;
---
> #ifdef INT64_TYPE
>   typedef INT64_TYPE sqlite_int64;
>   typedef unsigned INT64_TYPE sqlite_uint64;
85,86c85,93
<   typedef long long int sqlite_int64;
<   typedef unsigned long long int sqlite_uint64;
---
> # if defined(_MSC_VER) || defined(__BORLANDC__)
>   typedef __int64 sqlite_int64;
>   typedef unsigned __int64 sqlite_uint64;
> # else
>   typedef long long int sqlite_int64;
>   typedef unsigned long long int sqlite_uint64;
> # endif
> # define INT64_TYPE sqlite_int64
> # define UINT64_TYPE sqlite_uint64

==== sqlite-3.2.2/src/sqliteInt.h#1 - sqlite-3.2.2\src\sqliteInt.h ====
157,163d156
< #ifndef UINT64_TYPE
< # if defined(_MSC_VER) || defined(__BORLANDC__)
< #  define UINT64_TYPE unsigned __int64
< # else
< #  define UINT64_TYPE unsigned long long int
< # endif
< #endif
183c176
< typedef UINT64_TYPE u64;    /* 8-byte unsigned integer */
---
> typedef sqlite_uint64 u64;  /* 8-byte unsigned integer */

_2005-Aug-27 16:45:09 by drh:_ {linebreak} Can someone suggest a suitable #ifdef that will automatically identify a PS2 and do the right thing to provide a 64-bit integer type, similar to what is done for Windows?
#f2dcdc 1382 active 2005 Aug anonymous Pending drh 1 1 Assert nErr==0 on corrupt db I'm working on an embedded filesystem where files can be randomly altered. Sometimes my .db files get messed up. I've attached an example db. I'd like to catch the asserts and return an error rather than crash.

$ sqlite corrupt-assert.db 'select count(*) from sensor'
sqlite: src/main.c:120: sqliteInitCallback: Assertion `nErr==0' failed.
Aborted
#f2dcdc 1397 new 2005 Aug anonymous Shell Pending 1 1 .mode csv creates ASCII output instead of UTF-8 Compiled from source. .mode csv mixes up the charset, IMHO. Example: "für" becomes "f\37777777703\37777777674r", which makes the output file unusable. _2005-Aug-31 13:54:57 by anonymous:_ {linebreak} It's better, but not UTF-8; utrac -p says ASCII. "Güssing" now is "G\303\274ssing". Hope it's OK to reopen the bug. ---- _2005-Sep-03 15:29:37 by anonymous:_ {linebreak} .mode tab, csv, and list all destroy the output now ---- _2005-Sep-20 08:10:34 by anonymous:_ {linebreak} same problem in 3.2.6 ---- _2005-Oct-06 20:14:37 by anonymous:_ {linebreak} csv of today: .mode cvs still does not work, .mode tab works fine now
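An editorial note on the 2005-Aug-31 remark above: the escape sequence shown there is actually correct UTF-8 rendered byte-by-byte. "ü" encodes as the two bytes 0xC3 0xBC, which are \303 \274 in octal, so "G\303\274ssing" is valid UTF-8 that merely got displayed as octal escapes (a quick check in Python):

```python
# "ü" in UTF-8 is the byte pair 0xC3 0xBC; printed as octal escapes
# that pair reads \303\274, matching the remark's G\303\274ssing.
encoded = "ü".encode("utf-8")
octal = "".join("\\%03o" % b for b in encoded)
```

The earlier \37777777703 form, by contrast, is a sign-extension artifact of printing the same bytes as negative chars.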
#f2dcdc 1415 active 2005 Sep anonymous Unknown Pending drh 1 1 Querying for BLOB type fields How do I query for BLOB type fields? I tried 1: field LIKE 'abc', 2: field LIKE quote('abc'), and 3: field LIKE X'616263', but nothing seems to return the record that I am interested in. _2005-Sep-13 08:00:28 by anonymous:_ {linebreak} Using version 3.2.5, you might use the quote function to convert a blob into a string which you can filter using the LIKE operator: select * from test where quote(text) like '%6263%'; is working and usable, but may not work as expected, because like '%26%' would find the same row even though no 0x26 byte is in the blob (the pattern matches across the hex-digit pairs), and that was not expected, was it? select * from test where like(quote(text),'%6263%'); doesn't work, and select * from test where like(quote(text),'%6263%','%'); doesn't work either. ---- _2005-Oct-04 05:44:08 by anonymous:_ {linebreak} This should really be taken to the mailing list, preferably with descriptions of how other DBMSs handle LIKE as applied to BLOB columns.
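The quote()+LIKE workaround from the first remark can be sketched with Python's built-in sqlite3 module (a digits-only blob is used so the hex output is the same regardless of letter case):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test(data BLOB)")
con.execute("INSERT INTO test VALUES (x'414243')")  # the blob 'ABC'

# quote() renders the blob as a hex literal such as X'414243' ...
quoted = con.execute("SELECT quote(data) FROM test").fetchone()[0]

# ... and LIKE can then filter that text representation.
hits = con.execute(
    "SELECT count(*) FROM test WHERE quote(data) LIKE '%4142%'"
).fetchone()[0]
```

As the remark warns, patterns that straddle hex-digit pairs can produce false positives, so this is a workaround rather than true blob matching.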
#cfe8bd 1428 active 2005 Sep anonymous TclLib Pending 3 3 tclinstaller.tcl script problems The pkgIndex.tcl file generated by the tclinstaller.tcl script contains absolute pathnames to the TCL extension library. This causes problems if the extension subdirectory is moved. A more portable solution is shown below. A second problem is that the shared library does not have executable permission. This is a problem on HPUX operating systems. Adding the 0755 permission mode to the open command solves the problem for HPPA and does not cause problems for the other platforms. Here's a diff of the changes I made to address these two problems:

cvs diff tclinstaller.tcl
Index: tclinstaller.tcl
===================================================================
RCS file: /sqlite/sqlite/tclinstaller.tcl,v
retrieving revision 1.2
diff -r1.2 tclinstaller.tcl
17c17
< puts $fd "package ifneeded sqlite3 $VERSION \[list load $LIB sqlite3\]"
---
> puts $fd "package ifneeded sqlite3 $VERSION \[list load \[file join \$dir $LIBNAME\]\]"
25c25,26
< set out [open $LIB w]
---
> # Some platforms such as the HP require that libraries have the executable bit set
> set out [open $LIB w 0755]
#cfe8bd 1445 active 2005 Sep anonymous Pending 3 3 Errors testing sqlite 3.2.6 (& v3.3.7) $ make test
[...]
conflict-6.0... Ok
conflict-6.1... Ok
conflict-6.2... Expected: [0 {7 6 9} 1 1] Got: [0 {7 6 9} 1 0]
conflict-6.3... Expected: [0 {6 7 3 9} 1 1] Got: [0 {6 7 3 9} 1 0]
conflict-6.4... Ok
conflict-6.5... Ok
conflict-6.6... Ok
conflict-6.7... Expected: [0 {6 7 3 9} 1 1] Got: [0 {6 7 3 9} 1 0]
conflict-6.8... Expected: [0 {7 6 9} 1 1] Got: [0 {7 6 9} 1 0]
conflict-6.9... Expected: [0 {6 7 3 9} 1 1] Got: [0 {6 7 3 9} 1 0]
conflict-6.10... Expected: [0 {7 6 9} 1 1] Got: [0 {7 6 9} 1 0]
conflict-6.11... Expected: [0 {6 7 3 9} 1 1] Got: [0 {6 7 3 9} 1 0]
conflict-6.12... Expected: [0 {6 7 3 9} 1 1] Got: [0 {6 7 3 9} 1 0]
conflict-6.13... Expected: [0 {7 6 9} 1 1] Got: [0 {7 6 9} 1 0]
conflict-6.14... Ok
conflict-6.15... Ok
conflict-6.16... Ok
[...]
date-3.12... Ok
date-3.13... Ok
date-3.14... Ok
date-3.15... Ok
date-3.16... Ok
date-3.17... Ok
/tmp/sqlite-3.2.6/.libs/lt-testfixture: invalid command name "clock"
    while executing
"clock seconds"
    invoked from within
"clock format [clock seconds] -format "%Y-%m-%d" -gmt 1"
    invoked from within
"set now [clock format [clock seconds] -format "%Y-%m-%d" -gmt 1]"
    (file "./test/date.test" line 142)
    invoked from within
"source $testfile"
    ("foreach" body line 4)
    invoked from within
"foreach testfile [lsort -dictionary [glob $testdir/*.test]] {
    set tail [file tail $testfile]
    if {[lsearch -exact $EXCLUDE $tail]>=0} continue
    so..."
    (file "./test/quick.test" line 45)
make: *** [test] Error 1

_2005-Sep-19 23:03:56 by drh:_ {linebreak} The test scripts do not (yet) work with Tcl 8.5. Use Tcl 8.4. ---- _2005-Sep-20 01:59:42 by anonymous:_ {linebreak} FYI, the conflict failures occur even when using tcl-8.4.
The problem was reported on the mailing list: http://www.mail-archive.com/sqlite-users%40sqlite.org/msg10203.html Curiously, the failures correspond exactly to the test cases that were changed by the following patch: http://www.sqlite.org/cvstrac/filediff?f=sqlite/test/conflict.test&v1=1.24&v2=1.25 ---- _2006-Aug-31 23:49:40 by anonymous:_ {linebreak} building v337 on OSX 10.4.7 w/ TCL8.5 installed as Framework, 'make test' still fails w/: date-3.16... Ok date-3.17... Ok /usr/ports/sqlite-3.3.7/build/.libs/testfixture: invalid command name "clock" while executing "clock seconds" invoked from within "clock format [clock seconds] -format "%Y-%m-%d" -gmt 1" invoked from within "set now [clock format [clock seconds] -format "%Y-%m-%d" -gmt 1]" (file "../test/date.test" line 142) invoked from within "source $testfile" ("foreach" body line 4) invoked from within "foreach testfile [lsort -dictionary [glob $testdir/*.test]] { set tail [file tail $testfile] if {[lsearch -exact $EXCLUDE $tail]>=0} continue so..." (file "../test/quick.test" line 66) make: *** [test] Error 1 any resolution for this, other than revert to TCL 8.4? ---- _2006-Sep-01 01:26:37 by anonymous:_ {linebreak} SQLite under Cygwin fails all tests that involve integers larger than 32 bits. Sqlite produces the correct 64 bit values, but Tcl as distributed with Cygwin cannot grok 64 bit ints, so the comparisons fail. Would it be possible to change Sqlite's test harness to compare SQL results as strings rather than as integers? Then it would not matter if Tcl worked in 64 bit or not. ---- _2006-Sep-01 15:50:48 by drh:_ {linebreak} The test suite has been revised so that it now works with Tcl8.5. But, no, it is not practical to rewrite the tests to compare the results using strings instead of integers in order to work with the (broken) tcl implementation that comes with cygwin. ---- _2006-Sep-06 02:39:24 by anonymous:_ updating to latest cvs-checkout to get the aforementioned fix for: date-3.17... 
Ok /usr/ports/sqlite-3.3.7/build/.libs/testfixture: invalid command name "clock" while executing i can verify that _that_ is now ok: ... date-3.14... Ok date-3.15... Ok date-3.16... Ok date-3.17... Ok date-4.1... Expected: [2006-09-01] Got: [2006-09-06] date-5.1... Ok date-5.2... Ok date-5.3... Ok ... but now, 'make test' fails next @: delete-8.4... Ok delete-8.5... Ok delete-8.6... Ok delete-8.7... Ok /usr/ports/sqlite-cvs/build/.libs/testfixture: error deleting "test.db": not owner while executing "file delete -force test.db" (file "../test/tester.tcl" line 62) invoked from within "source $testdir/tester.tcl" (file "../test/delete2.test" line 36) invoked from within "source $testfile" ("foreach" body line 4) invoked from within "foreach testfile [lsort -dictionary [glob $testdir/*.test]] { set tail [file tail $testfile] if {[lsearch -exact $EXCLUDE $tail]>=0} continue so..." (file "../test/quick.test" line 66) make: *** [test] Error 1 ---- _2006-Sep-06 11:11:19 by drh:_ {linebreak} Run the build starting from an empty directory as a non-root user. ---- _2006-Sep-06 13:27:18 by anonymous:_ {linebreak} per INSTALL instructions, i did: cvs -d :pserver:anonymous@www.sqlite.org:/sqlite checkout -d sqlite-cvs sqlite cd /usr/ports/sqlite-cvs mkdir build cd build ../configure \ ... make chown -R myuser:wheel /usr/ports/sqlite-cvs sudo -u myuser make test and, as reported, the error was the result. ---- _2006-Sep-30 21:43:45 by anonymous:_ {linebreak} bump. anyone? ---- _2006-Sep-30 22:19:24 by anonymous:_ {linebreak} If you don't happen to be testing on Linux/gcc or Windows/VC++ I find that the Tcl test results have more than a few failures. It is not always easy to discern which failures are due to some odd quirk of Tcl or whether it is a legitimate SQLite issue on a given platform. Be prepared to change test scripts and tinker with the code.
#f2dcdc 1447 active 2005 Sep anonymous BTree Pending 1 1 Abnormal program termination in src/btree.c line 1339 In some circumstances (after having used wxGrid ..) a call to sqlite gives a strange assertion failure: Assertion Failed: pCur->idx>=0 && pCur->idx < pCur->pPage->nCell, file src/btree.c line 1339, followed by abnormal program termination. There seems to be no way of making a traceback ... any idea? Thanks, Doriaqn Tessore _2005-Sep-23 14:32:27 by drh:_ {linebreak} Not much to go on. What version of SQLite is being used? ("SQLite 2" is kind of vague.)
#cfe8bd 1451 active 2005 Sep anonymous Shell Pending 4 3 .mode insert does not output BLOBs in an usable way

sqlite> CREATE TABLE a(b);
sqlite> INSERT INTO a VALUES (X'41424300500051');
sqlite> .dump
BEGIN TRANSACTION;
CREATE TABLE a(b);
INSERT INTO "a" VALUES(X'41424300500051');
COMMIT;
sqlite> .mode insert
sqlite> SELECT * FROM a;
INSERT INTO table VALUES('ABC');
It would be nice for ".mode insert" to print a command that would actually re-create the same data, the same as ".dump" (the obvious difference is that .dump can't filter data in any way, it just dumps it all) or, at least, it would be very nice if the already existing function that "prints binary data as X'-encoded-string" were reachable from SQL, so that one could use something like: SELECT xencode(b) FROM a;
and obtain X'41424300500051'
_2005-Sep-25 17:03:40 by anonymous:_ {linebreak} I am not the original ticket poster, but I noticed that this feature request is related to {link: http://lists.gnu.org/archive/html/monotone-devel/2005-09/msg00294.html Monotone} migrating away from Base64 encoding to using straight Sqlite blobs. ---- _2005-Sep-26 01:49:33 by drh:_ {linebreak} The built-in quote() function converts BLOBs into ASCII BLOB literals. Will it not serve for the requested xencode() function? SELECT quote(b) FROM a ---- _2005-Sep-26 08:57:47 by anonymous:_ {linebreak} Yes, I guess it can perfectly do. What about ".mode insert" output? Is it supposed to print raw data?
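The quote() function drh mentions does round-trip BLOBs: its output can be pasted straight into a new INSERT, which is essentially what .dump emits (sketched here with Python's built-in sqlite3 module, using the ticket's own blob):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a(b BLOB)")
con.execute("INSERT INTO a VALUES (x'41424300500051')")

# quote() yields a hex literal; splicing it into an INSERT recreates
# the row, embedded NUL bytes and all.
literal = con.execute("SELECT quote(b) FROM a").fetchone()[0]
con.execute("INSERT INTO a VALUES (%s)" % literal)

# Both rows now hold the identical blob.
blobs = [row[0] for row in con.execute("SELECT b FROM a")]
```

This is the behavior the reporter wants ".mode insert" to share with ".dump".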
#e8e8bd 1455 active 2005 Sep anonymous Shell Pending 3 2 .import error: comma inside a string is read as field separator In SQLite version 3, when you need to import data into a table you use .import. (You cannot do it with COPY.) Well, if you need to import data in 'csv' format, and there is a string in the input data that contains a comma inside itself, the reading is impossible, since the comma is interpreted as a field separator. Example error message: "Error: There are 10 fields in file, and 8 fields were expected." To me it's pretty much a nuisance, since csv format is the most usual format for the data I use, and I've got to change the separating character, or else locate and eliminate those extra commas. Hope this helps. Thanks, Antoni Francino afrancino@mesvilaweb.com
#f2dcdc 1457 active 2005 Sep anonymous Shell Pending 3 1 non-latin chars not recognized by CLI Hello, if I try to execute an SQL statement via the CLI of a self-compiled (Win via MinGW) or the prebuilt exe, all chars that are not ISO are converted to something ugly. For example, the German Umlaute or the è are not entered correctly into a table name, field value and so on. If the same commands are executed from the sqlite> prompt, everything works well. As I need to compile my own sqlite.exe, I need to be able to change that in code. Thank you for this great product, and for a solution to this problem. btw: This is my first request here, so please be patient with me if I've set the priority wrong... It's obviously highest priority to me ;) _2005-Sep-26 21:44:12 by anonymous:_ {linebreak} è should be the accented e in my post, sorry for that
#e8e8bd 1458 active 2005 Sep anonymous Shell Pending 2 2 Error at .importing in csv format (another) Sqlite3 version 3.2.6: When importing data in csv format the program adds commas when importing strings enclosed with commas in the source file. In particular it shouldn't add commas when field data is already enclosed in commas, but it does. Curiously enough, it correctly imports numeric data. Example: Take the demo database file, and take the table clients. If you try to add these records to the table with .import, it is impossible (the only workaround is deleting commas in the file and importing it in 504,"New Enterprise","Mr Smart","93-2275400"{linebreak} 505,"Another Enterprise","John Dongu","93-8765432"{linebreak} 506,"And here we are","Mr Strange","973-237131"{linebreak} This file would be impossible to import correctly with .import. If you set .mode list, it is imported incorrectly, since it keeps the commas around character fields in the table - which is what it should do anyway, since in this mode the program does not expect commas around field data. But when you set <.mode csv>, it imports them also incorrectly - it adds new commas around character data. Source data in csv format is important - probably the most general data format available, and often a last-resort format for difficult cases... Thanks in advance. I would like to help at fixing problems myself, but I do not understand a word of C. Antoni Francino afrancino@mesvilaweb.com _2005-Sep-27 23:07:43 by anonymous:_ {linebreak} You've got the names of your punctuation characters confused. What's happening is that the _double quote marks_ around the text fields are getting imported, whereas the usual understanding of "CSV" is that they should be stripped off. ---- _2005-Sep-27 23:11:27 by anonymous:_ {linebreak} On further inspection, this turns out to be the same problem as in ticket #1312.
#e8e8bd 1461 active 2005 Sep anonymous Unknown Pending drh 1 2 3.2.7 DLL cannot deal with file paths with international characters My platform: WinXP SP2 Chinese version; my database file is under a path with Chinese characters. With the 3.2.1 DLL, sqliteexplore works fine; if I change the DLL to 3.2.7, it shows SQLite error 14: can't open the file. If I then change the path to fully English, it works fine again, so I think maybe it relates to Check-in 2656. _2005-Sep-28 15:31:34 by anonymous:_ {linebreak} What's more, the source version has no such problem. ---- _2005-Sep-28 16:09:04 by anonymous:_ {linebreak} I notice that in os_win.c the function "sqlite3OsFileExists" uses GetFileAttributesA and GetFileAttributesW, but I tried GetFileAttributes and it works OK.
#f2dcdc 1465 active 2005 Oct anonymous CodeGen Pending 1 1 fdatasync not available and not yet fixed fdatasync is still there ... I downloaded the current configure files, which should check for it. Do I have to use something else too?

./libtool --mode=link cc -g -DOS_UNIX=1 -DHAVE_USLEEP=1 -I. -I./src -DNDEBUG -DTHREADSAFE=0 -DSQLITE_OMIT_CURSOR -DHAVE_READLINE=0 \
    -o sqlite3 ./src/shell.c libsqlite3.la -lcurses
cc -g -DOS_UNIX=1 -DHAVE_USLEEP=1 -I. -I./src -DNDEBUG -DTHREADSAFE=0 -DSQLITE_OMIT_CURSOR -DHAVE_READLINE=0 -o .libs/sqlite3 ./src/shell.c ./.libs/libsqlite3.so -lcurses
"./src/shell.c", line 355: warning: argument #1 is incompatible with prototype:
    prototype: pointer to const unsigned char : "./src/shell.c", line 84
    argument : pointer to const char
"./src/shell.c", line 523: warning: argument #1 is incompatible with prototype:
    prototype: pointer to const unsigned char : "./src/shell.c", line 84
    argument : pointer to char
"./src/shell.c", line 694: warning: argument #2 is incompatible with prototype:
    prototype: pointer to const char : "./src/shell.c", line 583
    argument : pointer to const unsigned char
Undefined       first referenced
 symbol             in file
fdatasync           ./.libs/libsqlite3.so
ld: fatal: Symbol referencing errors. No output written to .libs/sqlite3
make: *** [sqlite3] Error 1

_2005-Oct-04 21:42:26 by drh:_ {linebreak} Clear out your build directory (or start a new one) and rerun configure from scratch. Save the output of configure. Then rerun make. If you still have a problem, attach the output of configure to this ticket.
#f2dcdc 1485 active 2005 Oct anonymous Unknown Pending jshen 3 1 Cyrillic problem (I suppose Unicode as a whole, for PPC) I think there is a problem in the utf.c file. _2005-Oct-14 06:18:45 by anonymous:_ {linebreak} Pocket PC support isn't provided in the default SQLite provider. The code necessary to support the Pocket PC is at http://sourceforge.net/projects/sqlite-wince and it's there that a bug report should be filed.
#cfe8bd 1487 active 2005 Oct anonymous BTree Pending 1 3 Corrupt database causes indefinite loop in sqlite3_step() I had a database become corrupt (no idea why; 57 other databases of similar information are fine). When attempting to work with the database and execute a SELECT query, the application froze in an endless loop inside sqlite3_step(). Upon further investigation (which is when I found the db was corrupt), it seems to be stuck inside the btree code (as reported by Sample). I tested the same query and alternates from the sqlite3 CLI and got the same results. The exact query causes an infinite loop. Leaving off part of the WHERE clause (and making it broader) or removing one of the reporting columns simply causes a corruption error. I have the original database as-is and the SQL query that can be run to cause the problem. OS: Mac OS X 10.4.2
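When corruption like the above is suspected, PRAGMA integrity_check is the usual first diagnostic: it walks every btree page and index entry and reports problems as rows instead of crashing. A sketch using Python's sqlite3 module (an in-memory stand-in; whether it also catches this ticket's particular corruption is not established):

```python
import sqlite3

# Stand-in database; the ticket's corrupt file is not available here.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sensor(id INTEGER PRIMARY KEY, value REAL)")

# integrity_check returns the single row 'ok' for a healthy database,
# or a list of problem descriptions for a damaged one.
report = con.execute("PRAGMA integrity_check").fetchone()[0]
```

Running this against the damaged file before each query would let an application refuse to use it rather than risk a hang.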
#f2dcdc 1488 active 2005 Oct anonymous Pending 1 1 Collate Reverse does not exist? When I execute the SQL below{linebreak} CREATE Unique INDEX index10 On Test2 ({linebreak} F1 Collate BINARY ,{linebreak} F2 Collate REVERSE DESC){linebreak} there is an error message:{linebreak} no such collation sequence: REVERSE {linebreak} but the latest documentation says that binary, nocase, and reverse are common collation functions. What is wrong with my SQL?
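An editorial note on the question above: REVERSE is not a built-in collation (the built-ins of this era are BINARY and NOCASE); the documentation appears to use "reverse" only as an example of a collation the application registers itself via sqlite3_create_collation(). A sketch of doing that from Python's sqlite3 module (the collation name and comparison are illustrative):

```python
import sqlite3

# A collation is just a comparison callback returning <0, 0, or >0;
# this one inverts the default (BINARY) order.
def reverse_order(a, b):
    return -((a > b) - (a < b))

con = sqlite3.connect(":memory:")
con.create_collation("reverse", reverse_order)  # register the name used in SQL
con.execute("CREATE TABLE t2(f1 TEXT)")
con.executemany("INSERT INTO t2 VALUES (?)", [("a",), ("c",), ("b",)])

descending = [r[0] for r in
              con.execute("SELECT f1 FROM t2 ORDER BY f1 COLLATE reverse")]
```

Once registered on the connection, the name is usable anywhere a collation is accepted, including the CREATE INDEX from the ticket.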
#e8e8bd 1489 active 2005 Oct anonymous Pending 3 2 Bad permissions on install-sh prevent 'make install' from completing It's a trivial problem - install-sh has incorrect permissions via CVS, resulting in 'make install' failing. Error:

make install
tclsh ../sqlite/tclinstaller.tcl 3.2
../sqlite/install-sh -c -d /usr/local/lib
make: execvp: ../sqlite/install-sh: Permission denied
make: *** [install] Error 127

Permissions:

-rw-r--r-- 1 cat other 5598 Sep 28 2001 ../sqlite/install-sh

Fix:

chmod 755 ../sqlite/install-sh
#cfe8bd 1493 active 2005 Oct anonymous Parser Pending 3 3 lemon: pathsearch uses wrong directory separator under Win32 The pathsearch function in lemon.c uses a colon (:) to separate the directories in the path. Under Win32 systems this should be a semicolon (;). _2005-Oct-18 09:37:10 by anonymous:_ {linebreak}

--- lemon.c.orig    2005-10-18 11:27:55.753467000 +0200
+++ lemon.c         2005-10-18 11:29:11.897825400 +0200
@@ -2791,13 +2791,16 @@
 {
   char *pathlist;
   char *path,*cp;
+  char ds;
   char c;
   extern int access();
 #ifdef __WIN32__
   cp = strrchr(argv0,'\\');
+  ds = ';';
 #else
   cp = strrchr(argv0,'/');
+  ds = ':';
 #endif
   if( cp ){
     c = *cp;
@@ -2812,7 +2815,7 @@
   path = (char *)malloc( strlen(pathlist)+strlen(name)+2 );
   if( path!=0 ){
     while( *pathlist ){
-      cp = strchr(pathlist,':');
+      cp = strchr(pathlist,ds);
       if( cp==0 ) cp = &pathlist[strlen(pathlist)];
       c = *cp;
       *cp = 0;

---- _2005-Oct-18 09:39:00 by anonymous:_ {linebreak} More readable version of the patch:

--- lemon.c.orig    2005-10-18 11:27:55.753467000 +0200
+++ lemon.c         2005-10-18 11:29:11.897825400 +0200
@@ -2791,13 +2791,16 @@
 {
   char *pathlist;
   char *path,*cp;
+  char ds;
   char c;
   extern int access();
 #ifdef __WIN32__
   cp = strrchr(argv0,'\\');
+  ds = ';';
 #else
   cp = strrchr(argv0,'/');
+  ds = ':';
 #endif
   if( cp ){
     c = *cp;
@@ -2812,7 +2815,7 @@
   path = (char *)malloc( strlen(pathlist)+strlen(name)+2 );
   if( path!=0 ){
     while( *pathlist ){
-      cp = strchr(pathlist,':');
+      cp = strchr(pathlist,ds);
       if( cp==0 ) cp = &pathlist[strlen(pathlist)];
       c = *cp;
       *cp = 0;

---- _2005-Oct-19 14:05:27 by anonymous:_ {linebreak} Cygwin and other Posix emulation layers on Windows require ':' for path separators, so you cannot blindly rely on __WIN32__ to make this determination.
#e8e8bd 1494 active 2005 Oct anonymous Unknown Pending 1 2 intermittent null reference exception in sqlite3_open Doing a c# (.net 1.1 / visual studio 2003 / winxp) project with a (so far) small db (5 tables, max 1000 lines per table). About 10% of startups will fail with a NullReferenceException in sqlite3_open(). This is on a dell optiplex, win xp pro sp 2, 1G ram, P4@2.6GHz, using c# in visual studio 2003 version 7.1.3088 with .net framework 1.1.4322 sp 1. Methods in Sqlite3.dll are imported with dllimport like this: [DllImport("Sqlite3.Dll", EntryPoint="sqlite3_open")] public static extern int sqlite3_open( string filename, out IntPtr dbhandle ); This would seem to be an error on my part or .net, except that this is the first sqlite call in the entire application and it fails only sometimes and the file name parameter is hard-coded. 90% of the time the application works fine. Of course, it could be .net messing things up. Anyway I can't find this on the web, sorry for taking up your time if it's not a bug. And no, I haven't got a program specifically for testing this thing. Below is the stack trace for the System.NullReferenceException, the top line is the call into sqlite3.dll. The filename parameter is not null, it is "C:\DOCUMENTS AND SETTINGS\MARTIN.WANGEL\MY DOCUMENTS\VISUAL STUDIO PROJECTS\SOLUTION\BIN\DEBUG\SOLUTIONDATA.SQLITE" and it is thoroughly checked for null and for emptiness. 
at Solution.mwDatabaseSQLite.sqlite3_open(String filename, IntPtr& dbhandle)
at Solution.mwDatabaseSQLite.Open() in c:\documents and settings\martin\my documents\visual studio projects\solution\mwdatabaseclasses.cs:line 149
at Solution.DBUtils.VerifyDatabase(Type t, String connstr, dbtype dbtyp) in c:\documents and settings\martin\my documents\visual studio projects\solution\dbutils.cs:line 287
at Solution.FrmMain..ctor() in c:\documents and settings\martin\my documents\visual studio projects\solution\form1.cs:line 1499
at Solution.FrmMain.Main() in c:\documents and settings\martin\my documents\visual studio projects\solution\form1.cs:line 1285

/Martin

_2005-Oct-27 21:12:22 by anonymous:_ {linebreak} Maybe you should look at "Wiki", "SQLite wrappers", ".NET Framework" for another way to do this. ---- _2005-Oct-28 00:39:27 by anonymous:_ {linebreak} The calling convention of a standard-built DLL is __cdecl, not __stdcall (or WINAPI)... check if the .NET Framework is able to call __cdecl import functions... in my mind it will only call STDCALL routines (which is the default in the WIN32 API).
#cfe8bd 1502 active 2005 Oct anonymous Unknown Pending anonymous 2 3 When selected in a union, view column names are incorrect. When used in a union, a view transfers an underlying table's column names into the result set. The expected result is that column names in the second half (or further) of the union needn't match those in the first. Problem is also visible in the TCL binding.

.mode columns
.headers on
select 'Create two tables, with nasty column names.' as remark;
create table t_a (c_a integer);
create table t_b (c_a integer);
select 'Create two views which each alias the column names of the above tables.' as remark;
create view v_a as select c_a as pretty from t_a;
create view v_b as select c_a as pretty from t_b;
select 'Insert some data' as remark;
insert into t_a values (1);
insert into t_b values (2);
select 'Notice that the views work fine by themselves.' as remark;
select 'The column names are both as we asked.' as remark;
select pretty from v_a;
select pretty from v_b;
select 'Notice that used in concert, with a join, the column name is now wrong.' as remark;
select pretty from v_a union select pretty from v_b;
select 'Aliasing the name of the column in the first half of the join is no help.' as remark;
select pretty as pretty from v_a union select pretty from v_b;
select 'Alias the name of the column in the second half of the join "fixes" the result.' as remark;
select pretty from v_a union select pretty as pretty from v_b;
_2005-Oct-25 03:05:27 by anonymous:_ {linebreak} same as ticket 1228 ---- _2005-Oct-26 07:59:17 by anonymous:_ {linebreak} see also #1327
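The schema above can be reproduced with Python's sqlite3 module to inspect the column name the engine actually reports (via cursor.description). This sketch is not from the ticket; it demonstrates the robust workaround of aliasing the column in *both* arms of the union, which pins the result name regardless of which arm the engine takes it from.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t_a (c_a INTEGER);
CREATE TABLE t_b (c_a INTEGER);
CREATE VIEW v_a AS SELECT c_a AS pretty FROM t_a;
CREATE VIEW v_b AS SELECT c_a AS pretty FROM t_b;
INSERT INTO t_a VALUES (1);
INSERT INTO t_b VALUES (2);
""")
# Alias on both sides of the union, so the name cannot come out wrong.
cur = con.execute(
    "SELECT pretty AS pretty FROM v_a UNION SELECT pretty AS pretty FROM v_b")
print(cur.description[0][0])   # the column name the engine reports
```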
#cfe8bd 1504 active 2005 Nov anonymous Pager Pending 1 3 Multithreaded DB lock not released using Begin/Commit between threads When using transaction-based insertion of rows in v3.2.7 in a multi-threaded environment, one thread (apparently the thread issuing the "COMMIT") appears not to release the database lock, so no competing thread can issue an "INSERT" statement. This problem does not appear in v2.8.16. The problem also appears on Linux (RH-9). I'm attaching "testsqlite.c", a test application (compiled in a WIN32 environment) that will duplicate the issue (define SQLITE_2_8_16 or SQLITE_3_2_7 depending on the version of the SQLite library to test against). Direct any questions to Erik -> lonepenguin@hotmail.com Thank you.
#cfe8bd 1508 active 2005 Nov anonymous BTree Pending 1 3 sqlite 2.8.16 crashes on 64-bit / strict memory alignment archs A few months ago, sqlite3 was fixed on 64-bit / strict memory alignment architectures. Would it be possible for those fixes to be backported to the version_2 code? I have an OpenBSD/sparc64 machine which I can provide ssh access to (as I did before to drh). I know sqlite2 is mostly unsupported, but as php5 uses sqlite2, it would be nice to have these fixes backported. _2006-Jan-05 02:28:23 by anonymous:_ {linebreak} with the attached patch, sqlite 2.8.17 passes all the regressions tests on openbsd/amd64 and openbsd/sparc64.
#cfe8bd 1515 active 2005 Nov anonymous Unknown Pending 4 3 Quotes incorrectly included in column names. The column names returned by sqlite_exec keep the quotes that were written in the query. I will try to show this with examples in the sqlite command line.

create table test("Full Name" varchar(30), "Login" varchar(15), Age integer);
insert into test ("Full Name", "Login", Age) values ("Enrique Esquivel", "the_kique", 24);
.headers on
select * from test;

SQLite returns:
Full Name|Login|Age
Enrique Esquivel|the_kique|24

But when I write:
select "Full Name", "Login", Age from test;
it returns:
"Full Name"|"Login"|Age
Enrique Esquivel|the_kique|24

Moreover, when I quote all fields:
select "Full Name", "Login", "Age" from test;
it returns:
"Full Name"|"Login"|"Age"
Enrique Esquivel|the_kique|24

Also:
select [Full Name], [Login], [Age] from test;
SQLite wrongly returns:
[Full Name]|[Login]|[Age]
Enrique Esquivel|the_kique|24

SQLite should use the quotes only to parse the identifiers; the fields in the result must be unquoted. I have tested with other DBMSs and none of them behaves this way.
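For what it's worth, current SQLite 3 builds strip the quotes from result column names, as the reporter expected. A quick check (invented here, not part of the ticket) using Python's sqlite3 module, which surfaces the engine's reported names through cursor.description:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE test("Full Name" VARCHAR(30), "Login" VARCHAR(15), Age INTEGER)')
con.execute("INSERT INTO test VALUES ('Enrique Esquivel', 'the_kique', 24)")
cur = con.execute('SELECT "Full Name", "Login", Age FROM test')
print([d[0] for d in cur.description])   # quotes are not part of the names
```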
#e8e8bd 1521 active 2005 Nov anonymous Unknown Pending 3 2 ORDER BY sorts incorrectly with aliased fields When executing a SELECT call and aliasing fields that already exist in the table, sorting does not work correctly on the aliased fields. Here's a quick example:

CREATE TABLE sort_table (name, name_alt);
INSERT INTO sort_table (name, name_alt) VALUES ("a", "z");
INSERT INTO sort_table (name, name_alt) VALUES ("b", "y");
INSERT INTO sort_table (name, name_alt) VALUES ("c", "x");

This simple query works correctly:

sqlite> SELECT name_alt FROM sort_table ORDER BY name_alt;
name_alt
x
y
z

Aliasing name_alt as name throws off the sorter:

sqlite> SELECT name_alt AS name FROM sort_table ORDER BY name;
name
z
y
x

The results should be the same as in the first query, and it works correctly in MySQL. I'm trying to use this kind of query for a translation library. The only workaround I can think of is something like this:

SELECT name_alt AS name, name_alt FROM sort_table ORDER BY name_alt;

This works, but is not ideal. _2005-Nov-12 17:28:20 by anonymous:_ {linebreak} In cases like this you can use ordinal numbers as arguments to ORDER BY; replacing the last 'name' with '1' (no quotes) returns the correct results. ---- _2005-Nov-12 19:20:24 by anonymous:_ {linebreak} Yes, this does indeed work, and it makes for a simpler hack than my original one. This is still not the correct behavior though, and it should be fixed so that a hack is not required at all. ---- _2005-Nov-13 15:36:22 by anonymous:_ {linebreak} Citing one database's behavior is interesting, but can you quote the paragraph in the SQL standard that shows the behavior you seek is correct? ---- _2005-Nov-13 17:19:37 by anonymous:_ {linebreak} Whether or not it's in the SQL standard is somewhat irrelevant if you keep up to date with the mailing list. "Expected behavior", "how do all the other databases do it" and "makes sense" are often the governing factors in implementing changes to SQLite. 
In this case, proposing this change meets all 3 criteria. ---- _2005-Nov-13 18:03:49 by anonymous:_ {linebreak} SQLite does it one way. MySQL does it another way. Both behaviors could be argued to be correct. The SQL standard is certainly a good way to decide on a valid behavior. Besides, the way MySQL does it is not necessarily representative of "all other databases". At least list the output of a few major databases before making such an assumption. ---- _2005-Nov-13 20:22:40 by anonymous:_ {linebreak} I'm the guy who originally filed the ticket. I'm just an average guy, without the time or resources to run this on lots of different databases. The sql standards are not free, and I don't have a copy of any of them. I've also had very little luck finding information on the internet. All that said, I believe it would be difficult to argue that the current behavior in SQLite is correct. In all other cases, the aliased column is accepted as an ORDER BY field. If I didn't want to override the original field for the purposes of the query, why would I explicitly do so? On the other hand, there are good reasons for wanting to explicitly override the column, for goals that cannot otherwise be achieved without hacks. If I didn't want to override the field, I could get the desired effect by doing nothing. ---- _2005-Nov-14 00:08:30 by anonymous:_ {linebreak} This should really be taken to the mailing list; there are people there with access to many different databases and some of them own copies of the SQL standard. This issue of name resolution also touches on the concerns expressed in #1111, #1213, and #1228. ---- _2005-Nov-14 03:40:28 by anonymous:_ {linebreak} What should the following query return? SELECT name_alt AS name, * FROM sort_table ORDER BY name; Who knows - it is ambiguous. ---- _2005-Nov-14 03:57:44 by anonymous:_ {linebreak} I don't understand your point. Your example is ambiguous because there are two columns with the same name in the result set. 
The example in the original ticket is not ambiguous and should give the expected result. ---- _2005-Nov-14 05:08:06 by anonymous:_ {linebreak} The original query is ambiguous because you can refer to columns not explicitly mentioned in the SELECT to ORDER BY. No different from: select a, b, c from foo order by d; ---- _2005-Nov-14 13:41:32 by anonymous:_ {linebreak} Not true: SELECT name_alt AS name FROM sort_table ORDER BY name. The field referenced by ORDER BY does appear in the SELECT clause, after the AS. This is perfectly legitimate and the preferred way of sorting expressions. ---- _2005-Nov-14 14:33:21 by anonymous:_ {linebreak} "ORDER BY name" is ambiguous because it can refer to the name_alt column via the alias *OR* the original name column from sort_table. You can ORDER BY things not mentioned in the SELECT. SELECT name_alt AS name FROM sort_table ORDER BY name ---- _2005-Nov-14 16:26:42 by anonymous:_ {linebreak} Yeah, true -- but it's pretty obvious which one we're referring to, since we *explicitly* aliased the original field for the purposes of this query. ---- _2005-Nov-14 16:48:43 by anonymous:_ {linebreak} It's only obvious that you explicitly created an ambiguous column, and that is undefined behaviour. Whether it happens to appear in the SELECT is not relevant. If you can demonstrate that the majority of databases support MySQL's behaviour, then that's another matter. ---- _2005-Nov-14 19:35:25 by anonymous:_ {linebreak} I just verified that this works as I expected on MySQL, PostgreSQL, and MS SQL Server. I don't have access to Oracle.
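The ordinal-position workaround suggested in the comments can be sketched with Python's sqlite3 module (the snippet is invented here for illustration): ORDER BY 1 refers unambiguously to the first result column, sidestepping the alias-vs-table-column name resolution question entirely.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sort_table (name, name_alt);
INSERT INTO sort_table VALUES ('a', 'z'), ('b', 'y'), ('c', 'x');
""")
# ORDER BY 1 always means "the first output column", i.e. name_alt here.
rows = [r[0] for r in con.execute(
    "SELECT name_alt AS name FROM sort_table ORDER BY 1")]
print(rows)   # ['x', 'y', 'z']
```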
#cfe8bd 1522 active 2005 Nov anonymous Unknown Pending 1 3 Make test fails in manydb 1.82-3.299 mac os x 10.4.3 ppc OS: Mac OS X 10.4.3 ppc Compiler: powerpc-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 (Apple Computer, Inc. build 5247) While running "make test" from a cvs checkout on Sun Nov 13 19:01:54 PST 2005, I get these errors: 653 errors out of 23390 tests Failures on these tests: manydb-1.82 manydb-1.83 manydb-1.84 ................. manydb-3.296 manydb-3.297 manydb-3.298 manydb-3.299 _2005-Nov-15 01:13:13 by drh:_ {linebreak} These failures likely result from running out of file descriptors. The manydb tests need about 1000 file descriptors. Linux provides this many (on most distributions). But perhaps Mac OS X does not. Does anybody know?
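Following up on drh's question: a quick way to check and (for the current shell session) raise the open-file limit before running "make test". This is a generic sketch, not from the ticket; the default soft limit varies by OS (Mac OS X has historically used a low value such as 256, Linux commonly 1024).

```shell
ulimit -n                        # print current soft limit
ulimit -S -n 1200 2>/dev/null \
  || echo "cannot raise soft limit above hard limit: $(ulimit -Hn)"
ulimit -n                        # limit inherited by subsequently run tests
```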
#f2dcdc 1541 active 2005 Nov anonymous Unknown Pending 1 1 Ticket #924 not fixed in the 2.8 branch Ticket 924: http://www.sqlite.org/cvstrac/tktview?tn=924 This problem seems to have been fixed on the 3.2 branch but is still present in 2.8.16.
#f2dcdc 1546 active 2005 Nov anonymous Pending 1 1 Creating unique index on non-unique column leads to corr. on SQLite2 Like ticket #1115, SQLite version 2 also suffers from the bug where:

BEGIN;
CREATE TABLE t1(a);
INSERT INTO t1 VALUES(1);
INSERT INTO t1 VALUES(1);
CREATE UNIQUE INDEX i1 ON t1(a);
COMMIT;

PRAGMA integrity_check fails. When "CREATE UNIQUE INDEX" fails within a transaction (and within a transaction only), the index is still created.
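For contrast, the same sequence run against SQLite 3 through Python's sqlite3 module (a sketch, not part of the ticket) shows the expected behavior: the failed CREATE UNIQUE INDEX is rolled back at statement level, the index does not survive, and integrity_check stays clean.

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
cur = con.cursor()
cur.execute("BEGIN")
cur.execute("CREATE TABLE t1(a)")
cur.execute("INSERT INTO t1 VALUES(1)")
cur.execute("INSERT INTO t1 VALUES(1)")
try:
    cur.execute("CREATE UNIQUE INDEX i1 ON t1(a)")
except sqlite3.DatabaseError as exc:
    print("index creation failed:", exc)   # duplicate values in t1.a
cur.execute("COMMIT")                      # transaction itself survives
print(cur.execute("PRAGMA integrity_check").fetchone()[0])
```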
#cfe8bd 1573 active 2005 Dec anonymous Shell Pending 3 3 Bad CSV output when data contains double quotes SQLite 3.2.7 emits invalid CSV when a field value contains double quotes (or at least, CSV that Gnumeric cannot parse). Here's an example:

sqlite> create table foo (a text, b text);
sqlite> insert into foo values ("hello", "world");
sqlite> insert into foo values ("mr.", "o'reilly");
sqlite> insert into foo values ('12" EP', 'blah');
sqlite> .mode csv
sqlite> .output foo.csv
sqlite> select * from foo;

Here's what foo.csv looks like:

$ cat foo.csv
"hello","world"
"mr.","o'reilly"
"12" EP","blah"

Note the ambiguous quoting on line 3. If I load this file into Gnumeric, it parses the first two lines just fine, but the last line confuses it. It appears that doubling the quote works -- at least for Gnumeric's CSV parser. That is, if I edit foo.csv to

"hello","world"
"mr.","o'reilly"
"12"" EP","blah"

then it's OK. This is basically a duplicate of [1312] and related shell problem reports, though it provides a better test case than the previous reports.
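The quote-doubling the reporter discovered is the standard CSV escaping rule (later codified in RFC 4180). Python's csv module applies it automatically, which makes a handy cross-check (this snippet is illustrative, not from the ticket) of what the shell's .mode csv output ought to look like:

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_ALL)  # doublequote=True by default
writer.writerow(['12" EP', "blah"])
print(buf.getvalue().strip())   # "12"" EP","blah"
```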
#c8c8c8 1598 review 2006 Jan anonymous Pending 3 2 Incorrect case-insensitive comparison of non-latin UTF-8 characters SQLite incorrectly compares non-Latin UTF-8 characters case-insensitively. I created a patch that fixes this problem and posted it to the mailing list. I wonder if someone could review my patch and eventually include it in the main project. Regards Stanislav Nikolov _2006-Jan-10 22:49:08 by drh:_ {linebreak} The sqlite3_create_collation() and sqlite3_create_function() APIs exist for the purpose of allowing users to define comparisons and any other operations in any way they see fit. There is no need to make changes to the SQLite core to accommodate Cyrillic comparisons. Indeed, there are good reasons not to, namely if we correctly compare Cyrillic, we would also need to correctly compare Chinese, Japanese, and Korean, to name but a few. Very quickly the comparison functions can grow to be many many times larger than the rest of SQLite. We conclude, therefore, that this is all best left to the discretion of the programmer who uses SQLite in their project. Hence we provide the aforementioned sqlite3_create_collation() and sqlite3_create_function() APIs. ---- _2006-Jan-11 00:24:50 by anonymous:_ {linebreak} Okay, let me try that again. First, the patch I created does not correspond only to Cyrillic letters, but also to the Greek and the accented characters up to U+044F. According to UNICODE.ORG there are actually five alphabets in the world (of which one does not use cases anymore) that have different cases: (from http://www.unicode.org/reports/tr21/tr21-5.html#Introduction) Case is a normative property of characters in specific alphabets (Latin, Greek, Cyrillic, Armenian, and archaic Georgian) whereby characters are considered to be variants of a single letter. These variants, which may differ markedly in shape and size, are called the uppercase letter (also known as capital or majuscule) and the lowercase letter (also known as small or minuscule). 
The uppercase letter is generally larger than the lowercase letter. Alphabets with case differences are called bicameral; those without are called unicameral. Therefore, I don't think anyone will need support for case-insensitive comparison of Japanese, Chinese or Korean characters, and I guess that adding support for the remaining Armenian alphabet is a matter of minutes and will not add much to the complexity of the code. Of course, perhaps it is possible for every project and/or developer to design their own "collation schemes", but I don't find it very practical. I can't really see the reason behind rejecting the patch. Perhaps you could actually look at it? Regards, Stanislav Nikolov ---- _2006-Jan-11 00:33:19 by drh:_ {linebreak} Please attach the patch to this ticket. ---- _2006-Jan-11 00:52:38 by drh:_ {linebreak} OK, I was able to reconstruct the patch from the mailing list. I observe that as written it increases the size of the SQLite library by a little over 4KiB. That might not seem like much, but embedded device manufacturers (that is to say, most of my paying customers) are _very_ sensitive to this kind of library size growth. I will look into reducing the size somewhat and getting it into a future release as a compile-time option. ---- _2006-Jan-11 03:10:09 by drh:_ {linebreak} Based on what I can glean from http://www.unicode.org/Public/UNIDATA/CaseFolding.txt, the case folding table in the patch seems to be incomplete. A full Unicode case folding table would need to be much larger. Perhaps somebody with more experience in Unicode case folding can comment. ---- _2006-Jan-11 09:52:38 by anonymous:_ {linebreak} I think that the size of the library could be effectively shrunk if we don't use an array, because over 50% of the information is redundant. 
I think that the same effect could be achieved by changing sqlite3UpperToLower[] to a function, and in there checking for the ranges of the capital letters (that is, we need to check for 4-5 different regions and return x+20 or x+1 for capital letters, for example). I can try to do that. ---- _2006-Jun-08 10:32:06 by anonymous:_ {linebreak} Has anybody worked on this lately? This is quite an issue if you happen to use non-Latin chars. Keep up the good work. Anze ---- _2006-Oct-11 10:58:15 by anonymous:_ {linebreak} Could you tell me how to apply the patch on Windows XP? Thanks a lot.
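The route drh recommends above, registering a custom collation instead of patching the core, can be sketched with Python's sqlite3 module, which exposes sqlite3_create_collation() as Connection.create_collation(); str.casefold() supplies full Unicode case folding. The collation name "nocase_unicode" is made up for this example.

```python
import sqlite3

def casefold_cmp(a, b):
    # Compare after full Unicode case folding; return -1/0/1 as SQLite expects.
    a, b = a.casefold(), b.casefold()
    return (a > b) - (a < b)

con = sqlite3.connect(":memory:")
con.create_collation("nocase_unicode", casefold_cmp)
con.execute("CREATE TABLE t(s TEXT)")
con.executemany("INSERT INTO t VALUES(?)",
                [("\u0411",), ("\u0430",)])   # Cyrillic 'Б' and 'а'
rows = [r[0] for r in con.execute(
    "SELECT s FROM t ORDER BY s COLLATE nocase_unicode")]
print(rows)   # 'Б' folds to 'б', which sorts after 'а'
```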
#f2dcdc 1622 active 2006 Jan danielk1977 Pending 1 1 Compiling with OMIT_PRAGMA causes an error in the test suite Compiling with OMIT_PRAGMA causes an error in the test suite. The error is a Tcl level error thrown by a [db eval] command when it encounters the unknown SQL keyword "PRAGMA".
#cfe8bd 1633 active 2006 Jan anonymous VDBE Pending 3 3 sqlite3_step returns SQLITE_ERROR when interrupted When interrupting an execution of sqlite3_step, sqlite3_step will return a generic SQLITE_ERROR instead of the SQLITE_INTERRUPT error code I'd expect. This is because although in vdbe.c:4427 the SQLITE_INTERRUPT result code is set to the internal 'rc' field of the database connection, a plain SQLITE_ERROR is returned:

vdbe_halt:
  if( rc ){
    p->rc = rc;
    rc = SQLITE_ERROR;
  }else{
    rc = SQLITE_DONE;
  }

The SQLITE_ERROR returned is then also stored as the errCode inside the db handle by the calling function, so there's no way to find out whether the error occurred due to an interrupt or some other error (since sqlite3_error() will return SQLITE_ERROR as well). I'd like to ignore interrupt errors where I call sqlite3_step, since those are not of importance to my users, but with the current scheme, I have no way of finding out whether an sqlite3_interrupt caused the error or whether it's a serious error... You get a more specific error code (for example SQLITE_INTERRUPT) when you call sqlite3_finalize() or sqlite3_reset() on the statement. There shouldn't be any reason not to call one of these functions after sqlite3_step() returns SQLITE_ERROR, as you cannot do anything else with the statement handle at that point anyway. ---- _2006-Apr-26 11:58:01 by anonymous:_ {linebreak} Well, I get SQLITE_ERROR from sqlite3_finalize(), too, after I interrupted the query, so I can't even find out at finalize time. However, it might be interesting to see the real error when stepping: in our C++ wrapper, the finalize occurs very late and we usually raise database exceptions when errors occur while stepping. I would really like to raise the proper exception (some sqlite_execution_interrupted exception) when the query was interrupted instead of raising some generic "sql error" exception... 
---- _2006-Apr-26 12:24:02 by anonymous:_ {linebreak} Sorry, my previous comment was a bit too quick: I have to admit that _most_ of the time, I do in fact get SQLITE_INTERRUPT as the result from sqlite3_finalize, but every now and then, I do get SQLITE_ERROR. Maybe this happens when I try to interrupt just before the actual statement execution has started, or when execution has just finished? ---- _2006-Jul-26 13:26:19 by anonymous:_ {linebreak} This might, of course, be related to my incorrect usage of =sqlite3_interrupt= from a different thread (Ticket #1897)?
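The caller-side view of an interrupt can be sketched with Python's sqlite3 module (an illustration, not from the ticket): a timer thread calls Connection.interrupt() while a deliberately slow query runs, and the binding surfaces the condition as an OperationalError whose message is "interrupted".

```python
import sqlite3
import threading

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x)")
con.executemany("INSERT INTO t VALUES(?)", [(i,) for i in range(500)])

# interrupt() is documented as safe to call from another thread.
threading.Timer(0.2, con.interrupt).start()
try:
    con.execute("SELECT count(*) FROM t a, t b, t c").fetchone()  # slow cartesian join
    result = "completed"
except sqlite3.OperationalError as exc:
    result = str(exc)
print(result)
```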
#e8e8bd 1700 active 2006 Mar anonymous Parser Pending 2 2 Handling column names for aliased queries is broken The following query does not work,

SELECT DISTINCT *
FROM (SELECT t1.ID FROM GR_ADDRESS t1 WHERE t1.ID > 1
      UNION ALL
      SELECT t1.ID FROM PERSON t1) t1
ORDER BY t1.ID DESC

but this one does,

SELECT DISTINCT *
FROM (SELECT t1.ID FROM GR_ADDRESS t1 WHERE t1.ID > 1
      UNION ALL
      SELECT t1.ID FROM PERSON t1 ORDER BY t1.ID DESC)

Dennis Cote responded with: I think you have found another example of the problems SQLite has handling column names. The following log first shows what SQLite thinks the column name is for the query without the order by clause (i.e. t1.ID). Then we try to order by that column name, with or without the table alias. Both cases result in an error. Finally there is a workaround that you could use that applies an alias to the selected columns in the two tables that are combined by the union operation.

SQLite version 3.3.2
Enter ".help" for instructions
sqlite> create table GR_ADDRESS(id, data);
sqlite> create table PERSON(id, data);
sqlite> .mode column
sqlite> .header on
sqlite> insert into gr_address values(1, 10);
sqlite> insert into person values(2, 20);
sqlite> insert into gr_address values(3, 30);
sqlite> SELECT DISTINCT *
   ...> FROM
   ...> (SELECT t1.ID
   ...> FROM GR_ADDRESS t1
   ...> WHERE t1.ID > 1
   ...> UNION ALL
   ...> SELECT t1.ID
   ...> FROM PERSON t1)
   ...> t1;
t1.ID
----------
3
2
sqlite> SELECT DISTINCT *
   ...> FROM
   ...> (SELECT t1.ID
   ...> FROM GR_ADDRESS t1
   ...> WHERE t1.ID > 1
   ...> UNION ALL
   ...> SELECT t1.ID
   ...> FROM PERSON t1)
   ...> t1 ORDER BY t1.ID DESC;
SQL error: no such column: t1.ID
sqlite> SELECT DISTINCT *
   ...> FROM
   ...> (SELECT t1.ID
   ...> FROM GR_ADDRESS t1
   ...> WHERE t1.ID > 1
   ...> UNION ALL
   ...> SELECT t1.ID
   ...> FROM PERSON t1)
   ...> t1 ORDER BY ID DESC;
SQL error: no such column: ID
sqlite> SELECT DISTINCT *
   ...> FROM
   ...> (SELECT t1.ID as ID
   ...> FROM GR_ADDRESS t1
   ...> WHERE t1.ID > 1
   ...> UNION ALL
   ...> SELECT t1.ID as ID
   ...> FROM PERSON t1)
   ...> t1 ORDER BY t1.ID DESC;
ID
----------
3
2

You may also be interested in the discussion of a similar problem under ticket #1688.
#cfe8bd 1733 active 2006 Mar anonymous VDBE Pending drh 4 3 Unaligned Access on ia64: aggregate_context ptr isn't 16-bytes aligned There is a problem on ia64 with the pointer returned by the sqlite3_aggregate_context function. If the size requested is less than NBFS bytes, then the pointer returned is 8-byte aligned, while every pointer returned by an allocator function must be 16-byte aligned (the specification requires that the pointer is aligned so that every basic type can be stored there, and long double is 16 bytes on Itanium). So if a user allocates, say, 24 bytes for his context, and the first member in his context happens to be a long double, he will get an unaligned access exception. This will lead to a performance hit on Linux and to a crash on HP-UX, since no default SIGBUS handler is present on HP-UX (IIRC). _2006-Mar-27 10:37:37 by anonymous:_ {linebreak} Additional details can be found in this mailing list thread: http://thread.gmane.org/gmane.comp.db.sqlite.general/18144
#cfe8bd 1735 active 2006 Mar anonymous Unknown Pending 1 3 Encoding problem I use latin2 (iso-8859-2) encoding in my system. When operating on sqlite 3 I can insert data that contains national characters into a database (for example using the sqlite3 console) and then, when I select them back, I get the proper result. But when I use the sqlite driver from Qt4, which uses sqlite3_column_text16() to fetch data from the database, I don't get the expected result (meaning the conversion to UTF-16 probably messed things up). Now the problem can be in one of two places -- either the sqlite3 console application doesn't use a proper conversion from my locale encoding into its internal encoding, or the database's internal mechanisms mess some things up. In short: sqlite3(somelatin2string) ==> SQLITE DBMS ==> sqlite3_column_text16() ==> garbage != somelatin2string At first I thought this was a Qt problem, as data stored through the sqlite console and retrieved from it was correct, and data stored by Qt and retrieved by Qt was also correct, whereas data stored by Qt and retrieved by the sqlite3 console, or stored by the console and retrieved by Qt, was not correct. I contacted the Qt support guys @ Trolltech and talked about it, and it looks like the Qt side is fine -- it expects UTF-16 encoded data (because it uses the function mentioned earlier) and it converts from UTF-16 to whatever encoding it needs (and vice versa). So the error is probably somewhere in the line between the console and the database itself, or in the database internally. It could be that sqlite3 expects UTF-8 (or UTF-16) encoded data on input but is given ISO-8859-2 data (entered manually by me at the console). _2006-Mar-27 16:36:26 by anonymous:_ {linebreak} The console app doesn't convert from your local code page to UTF-8 (or UTF-16). 
---- _2006-Mar-27 22:45:21 by anonymous:_ {linebreak} It probably should. In the documentation of sqlite, a suggested method of converting databases between versions 2 and 3 is:

sqlite OLD.DB .dump | sqlite3 NEW.DB

Now =sqlite= outputs the data in the "local" format, and if =sqlite3= doesn't encode it properly, such a conversion will be invalid because the incoming data won't be UTF encoded. A solution could be to do (with the source charset matching the local encoding, here ISO-8859-2):

sqlite OLD.DB .dump | iconv -f ISO-8859-2 -t UTF-8 | sqlite3 NEW.DB

But it is the console which should be responsible for the conversion. Also because otherwise using the =sqlite3= console on a non-UTF system with a perfectly well UTF-8 encoded database will result in improper output too.
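The discipline the comments point at, converting locally encoded bytes to Unicode before they reach SQLite, can be sketched in Python (an illustration with an invented sample string): the engine only converts between UTF-8 and UTF-16, never from ISO-8859-2, so the decode must happen in the application or shell.

```python
import sqlite3

raw = b"za\xbf\xf3\xb3\xe6"            # "zażółć" encoded in ISO-8859-2
text = raw.decode("iso-8859-2")        # decode up front: now proper Unicode
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(s TEXT)")
con.execute("INSERT INTO t VALUES(?)", (text,))
stored = con.execute("SELECT s FROM t").fetchone()[0]
print(stored == text)   # True: round-trips cleanly once decoded
```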
#cfe8bd 1742 active 2006 Mar anonymous Unknown Pending drh 2 3 ORDER BY on more than one column causes a big slowdown Put simply, any query which contains an ORDER BY clause that sorts on more than one column incurs a strange slowdown. Running SQLite 3.3.4 on Windows XP SP2 and on OS X 10.4.5, the behavior is similar; if the ORDER BY clause contains one column, the query is very fast; on two or more columns, it is terribly slow. _2006-Mar-28 23:58:15 by anonymous:_ {linebreak} Also worth noting that this behavior seems to start with SQLite 3.3.x; earlier versions of SQLite handle multiple ORDER BY columns much faster. ---- _2006-Mar-29 01:22:11 by anonymous:_ {linebreak} Note also that this behavior is being exhibited when sorting on *indexed* columns ---- _2006-Mar-29 01:50:38 by drh:_ {linebreak} Some examples would be helpful. ---- _2006-Mar-29 18:11:43 by anonymous:_ {linebreak} Most definitely! I will attach a sample 3.3.4 database dump that displays this behavior.
#cfe8bd 1743 active 2006 Mar anonymous Parser Pending 3 3 A very very deep IN statement failure Ok the problem is simple. I need to create a VERY VERY large IN statement. The problem is SQLite seems to have a limit on either query length or depth of an IN statement. Here is my example: see attached 1. That would be a 2-levels-deep IN statement. I can only get up to 9 with SQLite, but I need to get to 20. Since it works for 9, I can only assume that my 10 is correct even though the error is a syntax error. Below is the code that creates the select statement. See attached 4. Attachments 2 and 3 show a 9 and 10 level respectively. Thanks for your help _2006-Mar-30 21:30:51 by anonymous:_ Select "Wow !!" from "Wow !!" :-) Maybe VIEWs could help ?? ---- _2006-Mar-30 21:37:50 by anonymous:_ {linebreak} This may be a workaround for your problem. From looking at your sample SQL:

SELECT * FROM xs
where classname like '%Bonus_Pay_Weight_Entry%'
   or classname in (
        select parentname from xs where classname in (
          select parentname from xs where classname in (
            select parentname from xs where classname like '%Bonus_Pay_Weight_Entry%'
          )
        )
      )
   or classname in (
        select parentname from xs where classname in (
          select parentname from xs where classname like '%Bonus_Pay_Weight_Entry%'
        )
      )
   or classname in (
        select parentname from xs where classname like '%Bonus_Pay_Weight_Entry%'
      );

It seems you are trying to find all the parent classes of all the classes with this magic string in their name. If so, I think there is another way to do this. Instead of using a C program to build a huge SQL statement and then collect the results, use a different C program to execute a series of small SQL commands that generate the same result set. The following series of SQL statements should generate the same set of results. 
create temp table xt as
  select classname from xs where classname like '%Bonus_Pay_Weight_Entry%';
insert into xt select parentname from xs where classname in xt and parentname not in xt;
select changes();
insert into xt select parentname from xs where classname in xt and parentname not in xt;
select changes();
insert into xt select parentname from xs where classname in xt and parentname not in xt;
select changes();
... repeat until changes returns zero
select * from xs where classname in xt;
drop table xt;

This can be executed by code that looks something like the following pseudo-C code.

string sql;
sql = "create temp table xt as select classname from xs where classname like '%Bonus_Pay_Weight_Entry%'";
sqlite3_exec(db, sql);
sql = "insert into xt select parentname from xs where classname in xt and parentname not in xt";
sqlite3_stmt* extend = sqlite3_prepare(db, sql);
sql = "select changes()";
sqlite3_stmt* check = sqlite3_prepare(db, sql);
int changes = 0;
do {
  sqlite3_step(extend);
  sqlite3_reset(extend);
  sqlite3_step(check);
  changes = sqlite3_column_int(check, 0);
  sqlite3_reset(check);
} while (changes > 0);
sqlite3_finalize(extend);
sqlite3_finalize(check);
sql = "select * from xs where classname in xt";
sqlite3_stmt* get = sqlite3_prepare(db, sql);
int rc;
do {
  rc = sqlite3_step(get);
  if (rc == SQLITE_DONE) break;
  // process a result row
} while (1);
sqlite3_finalize(get);
sql = "drop table xt";
sqlite3_exec(db, sql);

---- _2006-Apr-05 17:25:30 by anonymous:_ {linebreak} Where did you find the select changes(); function? I would like to find all the functions that SQLite has and their uses. (And no, I don't want the C API. I found that.) ---- _2006-Apr-05 18:53:08 by anonymous:_ {linebreak} There is no complete listing of the functions in the documentation that I am aware of. Most are documented on this page http://www.sqlite.org/lang_expr.html but some are missing. 
The ultimate list of the predefined functions is the source file func.c which implements all the functions. You can view it here http://www.sqlite.org/cvstrac/rlog?f=sqlite/src/func.c
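The iterative workaround described above can be sketched in Python's sqlite3 module (table and column names follow the ticket's example; the sample rows are invented). It relies on SQLite accepting a bare table name on the right-hand side of IN, which keeps each statement tiny no matter how deep the class hierarchy is.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE xs(classname TEXT, parentname TEXT);
INSERT INTO xs VALUES('Bonus_Pay_Weight_Entry', 'PayEntry');
INSERT INTO xs VALUES('PayEntry', 'Entry');
INSERT INTO xs VALUES('Entry', 'Base');
INSERT INTO xs VALUES('Unrelated', 'Base');
CREATE TEMP TABLE xt AS
  SELECT classname FROM xs WHERE classname LIKE '%Bonus_Pay_Weight_Entry%';
""")
while True:
    # Add one more generation of parents; stop when nothing new appears.
    con.execute("""INSERT INTO xt
                   SELECT parentname FROM xs
                   WHERE classname IN xt AND parentname NOT IN xt""")
    if con.execute("SELECT changes()").fetchone()[0] == 0:
        break
rows = sorted(r[0] for r in con.execute(
    "SELECT classname FROM xs WHERE classname IN xt"))
print(rows)   # the class plus every ancestor that appears as a classname
```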
#e8e8bd 1754 active 2006 Apr anonymous Pager Pending anonymous 2 2 Version 3.3.5 error if SQLITE_OMIT_MEMORYDB is defined My solution is to move the following code up to the point where syncJournal is forward-declared, at line 1809.

#ifndef SQLITE_OMIT_MEMORYDB
/*
** Clear a PgHistory block
*/
static void clearHistory(PgHistory *pHist){
  sqliteFree(pHist->pOrig);
  sqliteFree(pHist->pStmt);
  pHist->pOrig = 0;
  pHist->pStmt = 0;
}
#else
#define clearHistory(x)
#endif
#f2dcdc 1775 active 2006 Apr anonymous Unknown Pending 1 1 strftime() not working in Windows Mobile 2005 strftime() is not working in Windows Mobile 2005 Pocket PC. I am using the beta version of Visual Studio 2005. _2006-Apr-17 13:10:20 by anonymous:_ {linebreak} Is it a compile/link problem or a runtime problem? ---- _2006-Apr-18 07:26:08 by anonymous:_ {linebreak} It's a runtime problem. I am not able to get dates formatted using strftime(). datetime(), date() and time() are working properly.
#f2dcdc 1782 active 2006 Apr anonymous Unknown Pending 1 1 journal file exclusion I have a long-running process which opens a connection to a sqlite3 database, let's call it a.rdb. The connection is never closed during the lifetime of the process, and it's set to auto-commit mode. Now, if I delete a.rdb and re-create it (the long-running process is still holding a fd to the deleted file at this point), the long-running process still creates a.rdb-journal from time to time. To make it worse, if at that time I use the sqlite3 command line to modify the database while the file a.rdb-journal exists, the contents of a.rdb-journal are also played back into the new a.rdb file, which doesn't seem to be the correct behavior. Is this the intended design? Thanks, John
#e8e8bd 1783 active 2006 Apr anonymous Unknown Pending 3 2 insert times increase with growing table size (when indexed) The time needed to insert (or update) entries in a table with an index on one of the fields increases with the size of the table. For large databases inserts become very slow (which I suppose is likely the problem in ticket #1547). sqlite2 does not have this scaling problem on inserts. (Some of our queries do not scale on sqlite2 however, making its use also impossible.)

----- example code -----

package require dbi
package require dbi_sqlite3
dbi_sqlite3 db
db create /tmp/test.db
db open /tmp/test.db
db exec {create table "region" (
  "id" integer not null primary key,
  "start" integer,
  "end" integer
)}
db exec {create index "region_index" on "region"("start")}
set num 1
for {set j 1} {$j < 20} {incr j} {
  puts [lindex [time {
    db begin
    for {set i 1} {$i < 100000} {incr i} {
      set s [expr {round(rand()*1000000)}]
      set e [expr {round(rand()*1000000)}]
      db exec {
        insert into "region"("id","start","end") values(?,?,?)
      } $num $s $e
      incr num
    }
    db commit
  }] 0]
}

----- timings -----

5712186
6621934
9492997
13234978
14881322
19119044
25296162
26670866
35378986
35877042
44383517
54576510
53317621
63516664
76587973
73791188
88460462
101650099
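A well-known mitigation for this pattern (not a fix for the scaling itself) is to bulk-load first and create the index afterwards, so the index B-tree is built once from sorted data rather than maintained across millions of random-order inserts. A sketch in Python mirroring the Tcl example's schema (the benchmark itself is invented here):

```python
import random
import sqlite3
import time

def load(index_first, n=50_000):
    """Insert n random rows; build the index before or after the load."""
    con = sqlite3.connect(":memory:")
    con.execute('CREATE TABLE region(id INTEGER PRIMARY KEY, start INTEGER, "end" INTEGER)')
    t0 = time.perf_counter()
    if index_first:
        con.execute("CREATE INDEX region_index ON region(start)")
    with con:   # one transaction for the whole load
        con.executemany("INSERT INTO region VALUES(?,?,?)",
                        ((i, random.randrange(10**6), random.randrange(10**6))
                         for i in range(n)))
    if not index_first:
        con.execute("CREATE INDEX region_index ON region(start)")
    elapsed = time.perf_counter() - t0
    con.close()
    return elapsed

print("index before load: %.3fs" % load(True))
print("index after load:  %.3fs" % load(False))
```

Absolute timings depend on the machine, so no ordering is asserted here; the point is the technique of deferring index creation.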
#cfe8bd 1790 active 2006 May anonymous Pager Pending 3 3 :memory: performance difference between v2 and v3 Please see the following link for details: http://www.mail-archive.com/sqlite-users%40sqlite.org/msg14937.html Possible fix?

RCS file: /sqlite/sqlite/src/pager.c,v
retrieving revision 1.266
diff -u -r1.266 pager.c
--- pager.c	7 Apr 2006 13:54:47 -0000	1.266
+++ pager.c	3 May 2006 19:02:17 -0000
@@ -1663,7 +1663,7 @@
   pPager->memDb = memDb;
   pPager->readOnly = readOnly;
   /* pPager->needSync = 0; */
-  pPager->noSync = pPager->tempFile || !useJournal;
+  pPager->noSync = pPager->tempFile || !pPager->useJournal;
   pPager->fullSync = (pPager->noSync?0:1);
   /* pPager->pFirst = 0; */
   /* pPager->pFirstSynced = 0; */

_2006-May-03 19:32:12 by drh:_ {linebreak} The suggested change makes no difference in performance when I try it. ---- _2006-May-03 21:41:24 by anonymous:_ {linebreak} If transactions are not used, 85% of the time of this memory database benchmark is spent in pager_get_all_dirty_pages(). Each sample counts as 0.01 seconds. 
 %   cumulative   self              self     total
time   seconds   seconds    calls  ms/call  ms/call  name
85.25    31.20    31.20    100002    0.31     0.31   pager_get_all_dirty_pages
 1.39    31.71     0.51    100011    0.01     0.20   sqlite3VdbeExec
 1.17    32.14     0.43  10487713    0.00     0.00   parseCellPtr
 0.63    32.37     0.23  12943618    0.00     0.00   sqlite3VdbeSerialGet
 0.61    32.59     0.23   3432951    0.00     0.00   pager_lookup
 0.52    32.78     0.19   4849544    0.00     0.00   sqlite3VdbeRecordCompare
 0.44    32.95     0.16    400006    0.00     0.00   sqlite3BtreeMoveto
 0.41    33.09     0.15   2064924    0.00     0.00   sqlite3pager_get
 0.40    33.24     0.14   6471807    0.00     0.00   sqlite3MemCompare

                0.06   31.25  100002/100002     sqlite3BtreeCommit [4]
[5]     85.6    0.06   31.25  100002            sqlite3pager_commit [5]
               31.20    0.00  100002/100002     pager_get_all_dirty_pages [6]
                0.05    0.00  389365/389365     clearHistory [65]
-----------------------------------------------
               31.20    0.00  100002/100002     sqlite3pager_commit [5]
[6]     85.2   31.20    0.00  100002            pager_get_all_dirty_pages [6]

----

_2006-May-03 21:51:30 by anonymous:_ {linebreak} Stats with BEGIN/COMMIT enabled: Each sample counts as 0.01 seconds.
% cumulative self self total time seconds seconds calls ms/call ms/call name 11.88 0.34 0.34 4849544 0.00 0.00 sqlite3VdbeRecordCompare 8.16 0.56 0.23 10487713 0.00 0.00 parseCellPtr 7.80 0.79 0.22 12943618 0.00 0.00 sqlite3VdbeSerialGet 6.38 0.96 0.18 100013 0.00 0.03 sqlite3VdbeExec 4.26 1.08 0.12 29816 0.00 0.02 balance_nonroot 3.90 1.20 0.11 6471807 0.00 0.00 sqlite3MemCompare 3.19 1.28 0.09 1964925 0.00 0.00 sqlite3pager_get 3.19 1.38 0.09 400006 0.00 0.00 sqlite3BtreeMoveto 2.84 1.46 0.08 19170231 0.00 0.00 get2byte 2.66 1.53 0.07 700015 0.00 0.00 sqlite3VdbeSerialPut 2.13 1.59 0.06 600993 0.00 0.00 sqlite3Malloc 1.77 1.64 0.05 4400155 0.00 0.00 sqlite3pager_unref 1.77 1.69 0.05 3332952 0.00 0.00 pager_lookup 1.77 1.74 0.05 1418379 0.00 0.00 decodeFlags 1.77 1.79 0.05 1332302 0.00 0.00 initPage 1.60 1.83 0.04 5270826 0.00 0.00 findOverflowCell 1.42 1.88 0.04 12181181 0.00 0.00 findCell 1.42 1.92 0.04 4849549 0.00 0.00 fetchPayload 1.42 1.96 0.04 359548 0.00 0.00 insertCell 1.24 1.99 0.04 4896877 0.00 0.00 parseCell 1.06 2.02 0.03 5284245 0.00 0.00 cellSizePtr 1.06 2.05 0.03 3227291 0.00 0.00 binCollFunc 1.06 2.08 0.03 2616113 0.00 0.00 _page_ref 1.06 2.11 0.03 1368027 0.00 0.00 reparentPage 1.06 2.14 0.03 934205 0.00 0.00 sqlite3GenericMalloc 1.06 2.17 0.03 300010 0.00 0.00 sqlite3BtreeCursor 0.89 2.19 0.03 2536689 0.00 0.00 get4byte 0.89 2.22 0.03 1864920 0.00 0.00 getPage ... 
 0.00     2.82     0.00         3    0.00     0.00   pager_get_all_dirty_pages

                0.00    0.00       3/3          sqlite3BtreeCommit [116]
[119]    0.0    0.00    0.00       3            sqlite3pager_commit [119]
                0.00    0.00    6551/6551       clearHistory [118]
                0.00    0.00       3/3          pager_get_all_dirty_pages [370]
-----------------------------------------------
                0.00    0.00       3/3          sqlite3pager_commit [119]
[370]    0.0    0.00    0.00       3            pager_get_all_dirty_pages [370]

----

_2006-May-03 22:27:35 by anonymous:_ {linebreak} With the outer BEGIN/COMMIT disabled, the memory database benchmark stats:

static PgHdr *pager_get_all_dirty_pages(Pager *pPager){  // this point is reached 100,002 times
  PgHdr *p, *pList;
  pList = 0;
  for(p=pPager->pAll; p; p=p->pNextAll){                 // this point is reached 322,956,271 times
    if( p->dirty ){                                      // this point is reached 389,365 times
      p->pDirty = pList;
      pList = p;
    }
  }
  return pList;
}

----

_2006-May-04 05:23:08 by anonymous:_ {linebreak} This patch makes the test (with transaction) run 7% faster for gcc 3.4.4 with -O2. At -O3, gcc performs the inlining of these functions even without the inline hint, so this patch has no effect.

RCS file: /sqlite/sqlite/src/btree.c,v
retrieving revision 1.324
diff -u -3 -p -r1.324 btree.c
--- btree.c     4 Apr 2006 01:54:55 -0000      1.324
+++ btree.c     4 May 2006 05:12:35 -0000
@@ -439,17 +439,17 @@ static int checkReadLocks(BtShared*,Pgno
 /*
 ** Read or write a two- and four-byte big-endian integer values.
 */
-static u32 get2byte(unsigned char *p){
+inline static u32 get2byte(unsigned char *p){
   return (p[0]<<8) | p[1];
 }
-static u32 get4byte(unsigned char *p){
+inline static u32 get4byte(unsigned char *p){
   return (p[0]<<24) | (p[1]<<16) | (p[2]<<8) | p[3];
 }
-static void put2byte(unsigned char *p, u32 v){
+inline static void put2byte(unsigned char *p, u32 v){
   p[0] = v>>8;
   p[1] = v;
 }
-static void put4byte(unsigned char *p, u32 v){
+inline static void put4byte(unsigned char *p, u32 v){
   p[0] = v>>24;
   p[1] = v>>16;
   p[2] = v>>8;

----

_2006-May-04 19:44:57 by anonymous:_ {linebreak} I just want to confirm that a _file database is faster_ than a memory database for 3.3.5+. Are these numbers correct? 43,478 inserts/second best case for file for 3.3.5+ and 40,000 inserts/second best case for memory? Even with the OS caching the database file entirely in RAM, this finding is quite surprising.

Test  DB    IDX  TX   RC       3.3.5+  3.3.5  2.8.17
1     mem   n    y    1000000  40000   33333  76923
2     mem   y    y    1000000  27027   22727  58824
3     mem   n    n    1000000  35714   5263   83333
4     mem   y    n    1000000  24390   2778   62500
5     file  n    y    1000000  43478   35714  40000
6     file  y    y    1000000  28571   24390  23256
7     file  n    n    1000     11      11     13
8     file  y    n    1000     9       10     13

http://www.sqlite.org/cvstrac/attach_get/256/sqlite_speed.txt

----

_2006-May-04 20:19:18 by anonymous:_ {linebreak} I'm seeing slightly different results. The memory database using a transaction is (slightly) faster than the file-based database using a transaction.
Timings on 3.3.5+ on Windows XP, gcc 3.4.4 -O3 -fomit-frame-pointer

DB    IDX  TX   # inserts  wall time  inserts/sec
----  ---  ---  ---------  ---------  -----------
mem   no   no   100,000    4.8s       20,833
mem   no   yes  100,000    4.3s       23,255
file  no   yes  100,000    4.7s       21,276
file  no   no   1,000      99.8s      10

...things get worse for :memory: as you increase the number of inserts, while the file database numbers remain constant:

DB    IDX  TX   # inserts  wall time  inserts/sec
----  ---  ---  ---------  ---------  -----------
mem   no   yes  1,000,000  48.5s      20,638
mem   no   yes  2,000,000  118.6s     16,863
mem   no   yes  4,000,000  364.7s     10,967
file  no   yes  1,000,000  46.8s      21,354
file  no   yes  2,000,000  93.8s      21,321
file  no   yes  4,000,000  187.5s     21,333

Do Linux users get similar results? Considering I have 512K of CPU L2 cache, I wonder if there's some CPU cache effect going on here with the way the :memory: db is allocated.

----

_2006-May-04 21:35:07 by anonymous:_ {linebreak} It seems there is some quadratic behavior in pager_lookup (latest CVS). 52% of the time is spent in that function. Profile data from :memory: db, TX on, no IDX, 4 million inserts:

/*
** Find a page in the hash table given its page number. Return
** a pointer to the page or NULL if not found.
*/ static PgHdr *pager_lookup(Pager *pPager, Pgno pgno){ PgHdr *p = pPager->aHash[pager_hash(pgno)]; while( p && p->pgno!=pgno ){ p = p->pNextHash; } return p; } % cumulative self self total time seconds seconds calls ms/call ms/call name 51.97 118.31 118.31 119658386 0.00 0.00 pager_lookup 4.36 128.25 9.94 4000009 0.00 0.06 sqlite3VdbeExec 3.06 135.21 6.96 315629923 0.00 0.00 parseCellPtr 3.05 142.16 6.95 171797186 0.00 0.00 sqlite3VdbeRecordCompare 2.67 148.22 6.07 12000005 0.00 0.01 sqlite3BtreeMoveto 2.14 153.10 4.88 343594380 0.00 0.00 sqlite3VdbeSerialGet 1.68 156.93 3.83 171797188 0.00 0.00 sqlite3MemCompare 1.63 160.65 3.72 77995781 0.00 0.00 sqlite3pager_get 1.60 164.29 3.65 169734946 0.00 0.00 sqlite3pager_unref 1.58 167.88 3.59 654100795 0.00 0.00 get2byte 1.30 170.84 2.95 973877 0.00 0.07 balance_nonroot 1.27 173.74 2.90 56939555 0.00 0.00 initPage 1.24 176.56 2.83 171797188 0.00 0.00 binCollFunc 0.93 178.69 2.12 386371475 0.00 0.00 findCell 0.86 180.65 1.96 96207437 0.00 0.00 pageDestructor 0.83 182.53 1.89 95976540 0.00 0.00 _page_ref 0.80 184.36 1.83 2708031 0.00 0.00 assemblePage 0.80 186.19 1.82 41662605 0.00 0.00 reparentPage 0.74 187.88 1.70 171797188 0.00 0.00 fetchPayload 0.73 189.55 1.67 73995778 0.00 0.00 getPage 0.67 191.07 1.52 59647596 0.00 0.00 decodeFlags 0.63 192.51 1.44 132945443 0.00 0.00 findOverflowCell 0.62 193.93 1.41 40148167 0.00 0.00 sqlite3PutVarint 0.59 195.27 1.34 134687272 0.00 0.00 releasePage 0.59 196.62 1.34 73764879 0.00 0.00 getAndInitPage 0.54 197.84 1.22 8000003 0.00 0.02 sqlite3BtreeInsert 0.52 199.01 1.18 60000030 0.00 0.00 sqlite3VdbeSerialType 0.52 200.19 1.18 24000011 0.00 0.00 moveToRoot 0.49 201.30 1.10 179797130 0.00 0.00 getCellInfo 0.43 202.28 0.98 9882306 0.00 0.00 insertCell 0.42 203.22 0.94 47288132 0.00 0.00 moveToChild 0.40 204.15 0.92 173434760 0.00 0.00 parseCell 0.40 205.06 0.91 95806930 0.00 0.00 get4byte 0.34 205.83 0.78 41662605 0.00 0.00 sqlite3pager_lookup 0.33 206.57 0.74 165099370 0.00 0.00 
sqlite3MallocFailed 0.32 207.31 0.73 20000010 0.00 0.00 sqlite3VdbeSerialPut 0.31 208.02 0.71 8000015 0.00 0.00 sqlite3VdbeHalt 0.30 208.70 0.68 27052986 0.00 0.00 sqlite3GetVarint 0.28 209.33 0.63 174637767 0.00 0.00 put2byte 0.27 209.96 0.62 8000006 0.00 0.00 sqlite3BtreeCursor 0.26 210.54 0.59 8148152 0.00 0.00 fillInCell 0.25 211.12 0.57 3385610 0.00 0.01 reparentChildPages 0.25 211.69 0.57 16000006 0.00 0.00 checkReadLocks 0.22 212.19 0.51 48000093 0.00 0.00 sqlite3VdbeFreeCursor 0.22 212.69 0.50 133898861 0.00 0.00 cellSizePtr 0.22 213.19 0.50 24000010 0.00 0.00 popStack 0.20 213.65 0.46 50076560 0.00 0.00 sqlite3pager_ref 0.20 214.10 0.45 pager_reset 0.19 214.54 0.44 8000024 0.00 0.00 closeAllCursors 0.19 214.97 0.42 12000024 0.00 0.00 sqlite3VdbeMemMakeWriteable 0.18 215.38 0.41 32000052 0.00 0.00 sqlite3VdbeMemSetStr 0.18 215.78 0.40 11616158 0.00 0.00 allocateSpace 0.17 216.16 0.39 8000000 0.00 0.00 bindText 0.16 216.51 0.35 25098767 0.00 0.00 sqlite3MallocRaw 0.16 216.87 0.35 8000005 0.00 0.00 sqlite3BtreeCloseCursor 0.15 217.22 0.34 45560699 0.00 0.00 sqlite3FreeX 0.15 217.56 0.34 36000014 0.00 0.00 sqlite3VarintLen 0.15 217.90 0.34 36000009 0.00 0.00 sqlite3VdbeMemShallowCopy 0.14 218.22 0.33 47999969 0.00 0.00 sqlite3VdbeSerialTypeLen 0.14 218.54 0.33 4000008 0.00 0.00 sqlite3VdbeMakeReady ----------------------------------------------- 41.19 0.00 41662605/119658386 sqlite3pager_lookup [15] 77.12 0.00 77995781/119658386 sqlite3pager_get [8] [5] 52.0 118.31 0.00 119658386 pager_lookup [5] ----------------------------------------------- 0.19 4.02 4000003/77995781 sqlite3BtreeGetMeta [28] 3.53 74.30 73995778/77995781 getPage [9] [8] 36.0 3.72 78.31 77995781 sqlite3pager_get [8] 77.12 0.00 77995781/119658386 pager_lookup [5] 1.12 0.00 56939550/95976540 _page_ref [40] 0.03 0.00 230897/230897 page_remove_from_stmt_list [139] 0.03 0.00 230897/230897 makeClean [138] 0.01 0.00 230897/461804 sqlite3pager_pagecount [150] 0.00 0.00 230897/25098767 
sqlite3MallocRaw [58]
-----------------------------------------------
                1.82   44.94  41662605/41662605     reparentChildPages [13]
[14]    20.5    1.82   44.94  41662605             reparentPage [14]
                0.78   41.96  41662605/41662605     sqlite3pager_lookup [15]
                2.21    0.00  41672966/131189801    sqlite3pager_unref [31]
                0.00    0.00     93099/50076560     sqlite3pager_ref [75]
-----------------------------------------------
                0.78   41.96  41662605/41662605     reparentPage [14]
[15]    18.8    0.78   41.96  41662605             sqlite3pager_lookup [15]
               41.19    0.00  41662605/119658386    pager_lookup [5]
                0.77    0.00  39036990/95976540     _page_ref [40]
-----------------------------------------------
                0.77    0.00  39036990/95976540     sqlite3pager_lookup [15]
                1.12    0.00  56939550/95976540     sqlite3pager_get [8]
[40]     0.8    1.89    0.00  95976540             _page_ref [40]

----

_2006-May-04 21:41:37 by anonymous:_ {linebreak} I guess increasing this array size is in order:

PgHdr *aHash[N_PG_HASH];  /* Hash table to map page number to PgHdr */

Too many hash collisions are leading to long linked lists in the buckets. Or perhaps pager_hash needs to be replaced with a better hash function.

----

_2006-May-04 22:04:47 by anonymous:_ {linebreak} Increasing N_PG_HASH to 8192 seems to help the "4 million inserts in a transaction into a memory database" benchmark. It now runs in 203.5 seconds (19,656 inserts/sec), as opposed to 364.7 seconds (10,967 inserts/sec) previously. This is closer to the 187.5 seconds for the file-based database timing.

----

_2006-May-04 22:13:16 by anonymous:_ {linebreak} Increasing N_PG_HASH to 16384 yields 21,052 inserts/second for the 4-million-insert, single-transaction, no-index :memory: database run. This is very close to the file database figure of 21,333 inserts/second.

----

_2006-May-04 22:23:19 by anonymous:_ {linebreak} Setting N_PG_HASH to 32768 yields 21,621 inserts/second in the same 4-million-insert single-transaction memory db test. This is marginally faster than the file-based database timing. Increasing N_PG_HASH has diminishing returns after 16384.
----

_2006-May-05 15:33:31 by anonymous:_ {linebreak} You should get the same effect if you increase the page size instead of increasing the size of the hash table. With a larger page size there will be fewer pages to be managed by the hash table. This might be a better solution for many applications. A hash table with 32K entries occupies 128K of RAM, whether it is used or not.

----

_2006-May-05 19:37:51 by anonymous:_ {linebreak} 128K of RAM when dealing with a 230M :memory: database is not terribly significant. Here are the timings for various N_PG_HASH and SQLITE_DEFAULT_PAGE_SIZE values for 4 million inserts into a :memory: database in a single transaction:

N_PG_HASH  SQLITE_DEFAULT_PAGE_SIZE  inserts/sec
---------  ------------------------  -----------
16384      4096                      21,622
32768      1024                      21,621
8192       8192                      20,513
4096       4096                      20,101
4096       8192                      19,417
2048       4096                      16,878
2048       8192                      16,598
2048       16384                     15,038
2048       32768                     13,937
2048       1024                      10,782

So it seems the default values of N_PG_HASH and SQLITE_DEFAULT_PAGE_SIZE should be raised.

----

_2006-May-05 21:34:01 by anonymous:_ {linebreak} My point was that most users do not have 230 MB memory databases, so having a large hash table which is fixed at that size may be a burden. 128K for the hash table is a lot if you only have 128K in your memory database. I agree that increasing these values would seem to provide a substantial performance increase at little cost. I would suggest using the 4K hash table and the 4K page size. These values are close to the current values. Many users have reported a general speed improvement using a page size of 4K, which matches the value used by WinXP (and, I think, many other OSes as well) for disk I/O blocks. These values nearly double the insert rate over the current default values, and the fixed-size hash table only takes twice the space.

----

_2006-May-06 14:54:32 by anonymous:_ {linebreak} Memory page speed should be as fast as possible, as it affects the general performance of SQLite.
Perhaps a static hash table is not the best data structure here. Don't temp tables and intermediate select results on file-based tables use memory-based pages? Making memory page speed as fast as possible will improve overall SQLite performance whether you are using a file- or memory-based database. For example, when ordering result sets from a file-based database SELECT, this routine is used to generate the code:

static void pushOntoSorter(
  Parse *pParse,         /* Parser context */
  ExprList *pOrderBy,    /* The ORDER BY clause */
  Select *pSelect        /* The whole SELECT statement */
){
  Vdbe *v = pParse->pVdbe;
  sqlite3ExprCodeExprList(pParse, pOrderBy);
  sqlite3VdbeAddOp(v, OP_Sequence, pOrderBy->iECursor, 0);
  sqlite3VdbeAddOp(v, OP_Pull, pOrderBy->nExpr + 1, 0);
  sqlite3VdbeAddOp(v, OP_MakeRecord, pOrderBy->nExpr + 2, 0);
  sqlite3VdbeAddOp(v, OP_IdxInsert, pOrderBy->iECursor, 0);

For those of us who have very complicated nested sub-selects of file-based tables in many queries, or even ORDER BYs on huge result sets, speeding up the memory page performance should be a performance win for SQLite in general.

----

_2006-May-06 17:37:32 by anonymous:_ {linebreak} The following test demonstrates that this memory page issue can greatly affect the performance of queries against file-based tables if temp_store is set to MEMORY. "big" is a file-based table in foo.db with 10 million rows. It was created with "create table big(x,y)".
# unmodified stock SQLite built from May 5 2006 CVS (after check-in [3178])
# compiled with default settings for SQLITE_DEFAULT_PAGE_SIZE and N_PG_HASH
$ time ./may5-sqlite/sqlite3 foo.db "PRAGMA temp_store = MEMORY; select x, y from big order by y, x" >/dev/null

real    13m23.828s
user    13m18.452s
sys     0m0.811s

# SQLite built from May 5 2006 CVS, but compiled with proposed change of
# SQLITE_DEFAULT_PAGE_SIZE set to 4096, and N_PG_HASH set to 16384
$ time ./may5-sqlite-hash-opt/sqlite3 foo.db "PRAGMA temp_store = MEMORY; select x, y from big order by y, x" >/dev/null

real    6m16.031s
user    6m13.108s
sys     0m0.811s

This is not even what I would consider to be a big table. I should mention that compiling with SQLITE_DEFAULT_PAGE_SIZE = 1024 and N_PG_HASH = 32768 resulted in the same timing as the may5-sqlite-hash-opt test run above. A pretty good return for an extra 126K.

----

_2006-May-08 04:07:25 by anonymous:_ {linebreak} You now get 20,725 inserts/second as of the latest check-in [3180] for 4 million inserts into a :memory: database in a single transaction (using the default SQLITE_DEFAULT_PAGE_SIZE of 1K). This is nearly twice as fast as SQLite prior to check-in [3180] (10,782 inserts/second). However, it is 4% slower than the best timing prior to [3180] when compiled with N_PG_HASH=32768 and SQLITE_DEFAULT_PAGE_SIZE=1024, which got 21,622 inserts/second (see table above). Increasing SQLITE_DEFAULT_PAGE_SIZE with the latest CVS either has no effect or makes the memory insert benchmark timings slightly worse.
#f2dcdc 1791 active 2006 May anonymous Unknown Pending 1 1 Native threads support for BeOS The BeOS port lacks native thread support. BeOS has a very powerful but lightweight threading system and is thoroughly multithreaded, but it differs from the POSIX-thread model, so our pthreads implementation currently looks more like a flaky workaround. Ideally there would be a separate thread-support implementation, as there is for the Win16/32 versions. At the moment this problem has broken the BeOS Mozilla port, https://bugzilla.mozilla.org/show_bug.cgi?id=330340 The nearest workaround might be to use pthreads, in spite of their flakiness, but that also makes a mess of Mozilla's build/configure system, because other parts of Mozilla use NSPR threads, which on BeOS use the native implementation.

_2006-Oct-27 05:48:51 by anonymous:_ {linebreak} BeOS locking extensions (using native bthreads) have been written and are included in the SQLite3 built into Mozilla Firefox. Is there some process by which these changes might be incorporated into the SQLite tree?

----

_2006-Oct-27 12:48:11 by anonymous:_ {linebreak} Follow the example of OS/2 and propose a patch against the latest SQLite CVS that has proper #ifdef's around the BeOS code so it won't break other platforms. Since you're probably the only one interested in this patch, you'll have to do the diffing/merging/testing work yourself.

----

_2006-Nov-07 03:55:36 by anonymous:_ {linebreak} Thanks for the advice. We've completed updates to the code so it works with the sqlite 3.3.8 patches proposed for Firefox. The current implementation is a parallel OS-specific file (os_beos.c). However, with the latest round of locking enhancements to os_unix.c, we're now wondering if it makes more sense to simply enhance that file to support BeOS locking. (Yes, we. Surprisingly, there is more than one BeOS user left on the planet.) :)
#f2dcdc 1797 active 2006 May anonymous TclLib Pending drh 1 1 COPY command doesn't work in tclsqlite 3.3.5 The COPY command doesn't seem to work in the tcl sqlite lib. This same script and datafile works in version 3.2.7. load ./lib/libtclsqlite[info sharedlibextension] sqlite MEMORY_DB :memory: MEMORY_DB onecolumn "PRAGMA empty_result_callbacks=1" puts [MEMORY_DB version] MEMORY_DB eval "create table xyz (col1,col2)" MEMORY_DB copy ignore win_pol /home/centadm/win_pol4.csv \t MEMORY_DB eval "select * from xyz" sqlite_array { puts "Here in the callback" foreach sqlite_value $sqlite_array(*) { puts "$sqlite_value $sqlite_array($sqlite_value)" } } The data file win_pol4.csv consists of two columns, tab seperated. DATA1 DATA2 And the output: -bash-3.00$ tclsh test_sqlite.tcl 3.3.5 while executing "MEMORY_DB copy ignore win_pol /home/centadm/win_pol4.csv \t" (file "test_sqlite.tcl" line 5) -bash-3.00$ pwd /home/centadm -bash-3.00$ ls -l /home/centadm/win_pol4.csv -rw-r--r-- 1 centadm centadm 12 May 5 14:21 /home/centadm/win_pol4.csv -bash-3.00$ more /home/centadm/win_pol4.csv DATA1 DATA2 A TCL Error is returned from the copy command, no message tho. I have used catch to capture the command and verified that there is no data going into the table. Also, PRAGMA empty_result_callbacks=1 still doesn't seem to work in the tcllib. If you catch the COPY command above, you still never see the "Here in the callback" message. 
_2006-May-05 17:57:42 by anonymous:_ {linebreak} Clarification: the line

MEMORY_DB copy ignore win_pol /home/centadm/win_pol4.csv \t

should read

MEMORY_DB copy ignore xyz /home/centadm/win_pol4.csv \t

However, the result is the same:

-bash-3.00$ tclsh test_sqlite.tcl
3.3.5
while executing
"MEMORY_DB copy ignore xyz /home/centadm/win_pol4.csv \t"
(file "test_sqlite.tcl" line 7)
-bash-3.00$

----

_2006-May-05 19:46:56 by anonymous:_ {linebreak} I have narrowed it down to this code in tclsqlite.c:

zSql = sqlite3_mprintf("SELECT * FROM '%q'", zTable);
if( zSql==0 ){
  Tcl_AppendResult(interp, "Error: no such table: ", zTable, 0);
  return TCL_ERROR;
}
nByte = strlen(zSql);
rc = sqlite3_prepare(pDb->db, zSql, 0, &pStmt, 0);
sqlite3_free(zSql);
if( rc ){
  Tcl_AppendResult(interp, "Error: ", sqlite3_errmsg(pDb->db), 0);
  nCol = 0;
}else{
  nCol = sqlite3_column_count(pStmt);  /* <-- RETURNING 0 FOR COLUMN COUNT;
                                          I HAVE VERIFIED THE TABLE HAS
                                          TWO COLUMNS */
}
sqlite3_finalize(pStmt);
if( nCol==0 ){
  return TCL_ERROR;                    /* <-- NO ERROR MESSAGE RETURNED */
}

----

_2006-May-16 17:51:28 by anonymous:_ {linebreak} I found the problem. The first sqlite3_prepare under DB_COPY should have -1 as its third argument. When this was changed from 0 to -1, the copy command works in tclsqlite.

rc = sqlite3_prepare(pDb->db, zSql, 0, &pStmt, 0);

should be

rc = sqlite3_prepare(pDb->db, zSql, -1, &pStmt, 0);

----

_2006-May-16 18:01:11 by anonymous:_ {linebreak} There is also another sqlite3_prepare call (for the insert statement) under DB_COPY that needs its third argument changed from 0 to -1.

----

_2006-Sep-27 16:24:53 by anonymous:_ {linebreak} The same problem is present in version 3.3.7 over here. However, the indicated patch seems to work.
#cfe8bd 1799 active 2006 May anonymous Pager Pending 2 3 temp_store=MEMORY slower than FILE for large intermediate result sets (This ticket was split off from #1790 because that ticket was becoming too broad.) When temp_store=MEMORY, it can negatively affect the performance of queries with large intermediate result sets generated from SELECTs of either file-based tables or memory-based tables. This is true even when sufficient RAM is available to the SQLite process to hold the intermediate results completely in memory without swapping to disk. In the example below, "big" is a file-based table in foo.db with 10 million rows. It was created with "create table big(x,y)".

# unmodified stock SQLite built from May 5 2006 CVS (after check-in [3178])
# compiled with default settings for SQLITE_DEFAULT_PAGE_SIZE and N_PG_HASH
$ time ./may5-sqlite/sqlite3 foo.db "PRAGMA temp_store = MEMORY; select x, y from big order by y, x" >/dev/null

real    13m23.828s
user    13m18.452s
sys     0m0.811s

# SQLite built from May 5 2006 CVS, but compiled with proposed change of
# SQLITE_DEFAULT_PAGE_SIZE set to 4096, and N_PG_HASH set to 16384
$ time ./may5-sqlite-hash-opt/sqlite3 foo.db "PRAGMA temp_store = MEMORY; select x, y from big order by y, x" >/dev/null

real    6m16.031s
user    6m13.108s
sys     0m0.811s

Compiling with SQLITE_DEFAULT_PAGE_SIZE = 1024 and N_PG_HASH = 32768 resulted in the same timing as the may5-sqlite-hash-opt test run above. If temp_store=FILE (with default SQLite values for SQLITE_DEFAULT_PAGE_SIZE and N_PG_HASH), the timings are comparable to temp_store=MEMORY with SQLITE_DEFAULT_PAGE_SIZE=4096 and N_PG_HASH=16384. Large intermediate result sets can cause SQLite to spend more than half of its CPU time in the function pager_lookup(). By increasing the values of N_PG_HASH and SQLITE_DEFAULT_PAGE_SIZE, the time spent in pager_lookup() can be reduced to near zero, thus doubling performance in such cases.
 %   cumulative   self              self     total
time   seconds   seconds     calls  ms/call  ms/call  name
51.97   118.31   118.31  119658386    0.00     0.00   pager_lookup
 4.36   128.25     9.94    4000009    0.00     0.06   sqlite3VdbeExec
 3.06   135.21     6.96  315629923    0.00     0.00   parseCellPtr
 3.05   142.16     6.95  171797186    0.00     0.00   sqlite3VdbeRecordCompare
 2.67   148.22     6.07   12000005    0.00     0.01   sqlite3BtreeMoveto
 2.14   153.10     4.88  343594380    0.00     0.00   sqlite3VdbeSerialGet
 1.68   156.93     3.83  171797188    0.00     0.00   sqlite3MemCompare
 1.63   160.65     3.72   77995781    0.00     0.00   sqlite3pager_get
 1.60   164.29     3.65  169734946    0.00     0.00   sqlite3pager_unref
 1.58   167.88     3.59  654100795    0.00     0.00   get2byte

_2006-May-07 18:37:50 by anonymous:_ {linebreak} Timings on the same Windows machine with check-in [3180] applied:

# FILE
$ time ./may7-sqlite/sqlite3 foo.db "PRAGMA temp_store = FILE; select x, y from big order by y, x" >/dev/null

real    5m7.157s
user    4m19.905s
sys     0m20.827s

# MEMORY
$ time ./may7-sqlite/sqlite3 foo.db "PRAGMA temp_store = MEMORY; select x, y from big order by y, x" >/dev/null

real    5m12.328s
user    5m9.781s
sys     0m0.984s

Much better. temp_store=MEMORY is now competitive with FILE, although temp_store=FILE (when the OS is able to cache the file entirely in memory) is marginally faster. I still think the MEMORY time can be reduced further by another 20 seconds, judging by the sys time of 20.827s in the FILE test. The MEMORY subsystem of SQLite ought to have an advantage over the FILE subsystem because it does not incur any system call overhead. I'll see if a profile turns up anything obvious.
#cfe8bd 1804 active 2006 May anonymous Unknown Pending 4 3 Inconsistent value type returned by SUM when using a GROUP BY Using a schema with test table:

CREATE TABLE Tbl1 (Key1 INTEGER, Num1 REAL)

And test data:

INSERT INTO Tbl1 (Key1,Num1) VALUES (1,5.0)

The query:

SELECT SUM(Tbl1.Num1) AS Num1Sum FROM Tbl1

returns a column with the value type correctly reported as FLOAT (2). However, the query:

SELECT Tbl1.Key1, SUM(Tbl1.Num1) AS Num1Sum FROM Tbl1 GROUP BY Tbl1.Key1

returns two columns with value types INT (1) and INT (1). The SUM function is returning a different value type for these two queries when both should return FLOAT (2). This problem does not occur when a SUMmed value is not a whole number, in which case both queries return a value type of FLOAT for the SUM column. I have applied the patch from check-in [3169] (relating to #1726 and #1755) to select.c, but this does not resolve the problem.

_2006-May-10 09:34:11 by anonymous:_ {linebreak} I should have added that this problem was seen in a Windows CE build running on Pocket PC 2003, built using eMbedded Visual C++ 4.0.

----

_2006-May-10 11:24:47 by anonymous:_ {linebreak} I can confirm that exactly the same behaviour is exhibited when built under Windows XP (32-bit).

----

_2006-May-10 12:40:21 by drh:_ {linebreak} The answer you are getting back is exactly correct. Why do you care what its datatype is? If you don't like the datatype, cast it.

----

_2006-May-10 13:42:14 by anonymous:_ {linebreak} For maximum compatibility with other SQL databases, both

SELECT SUM(field2) FROM table

and

SELECT field1, SUM(field2) FROM table GROUP BY field1

should return the same data type for the SUM column. All other databases I have worked with do this. I understand that SQLite uses manifest typing but believe that it should be consistent.
The problem I have is that in my query function (which takes an SQL string and returns a page of results as a 2D array of objects), I don't know whether to use sqlite3_column_double or sqlite3_column_int because I don't know what the calling function requires this column to be returned as. I am currently using the sqlite3_column_decltype call to switch which sqlite3_column_* function I use (and falling back on sqlite3_column_type when the declared type is not known e.g. for aggregate functions like SUM). If the return type of SUM is unpredictable, my calling functions can't assume returned values will be of the same type as the field that is being SUMmed (as is the case with other SQL databases). If you don't consider this to be a problem with SQLite, then I think my only option will be for calling functions to pass in an array of return types so that I always return objects of the correct type.
#cfe8bd 1809 active 2006 May anonymous CodeGen Pending 1 3 Huge slowdown/increased memory use when using GROUP BY on big dataset This seemingly nonsensical query is a greatly reduced test case taken from several queries I use with SQLite 3.2.1. The real example joins various huge tables and much more complicated views. I'd like to upgrade beyond SQLite 3.2.1, but this is a showstopper. It takes 13 seconds to run on SQLite 3.2.1 and uses just 1.2M of memory. With 3.3.5+ from CVS it takes 185 seconds and uses 230M of memory.

PRAGMA temp_store=MEMORY;
CREATE TABLE n1(a integer primary key);
INSERT INTO "n1" VALUES(1);
INSERT INTO "n1" VALUES(2);
INSERT INTO "n1" VALUES(3);
INSERT INTO "n1" VALUES(4);
INSERT INTO "n1" VALUES(5);
INSERT INTO "n1" VALUES(6);
INSERT INTO "n1" VALUES(7);
INSERT INTO "n1" VALUES(8);
INSERT INTO "n1" VALUES(9);
INSERT INTO "n1" VALUES(10);
INSERT INTO "n1" VALUES(11);
INSERT INTO "n1" VALUES(12);
INSERT INTO "n1" VALUES(13);
INSERT INTO "n1" VALUES(14);
INSERT INTO "n1" VALUES(15);
CREATE VIEW vu as select v3.a a, v5.a-v2.a*v7.a b
  from n1 v1,n1 v2,n1 v3,n1 v4,n1 v5,n1 v6,n1 v7;
select a a, sum(b) T from vu where a=7 group by a;

It seems that SQLite 3.2.1 had a much more efficient GROUP BY algorithm that discarded unnecessary data as the view was traversed.

_2006-May-13 03:01:28 by anonymous:_ {linebreak} Seeing as this ticket concerns the GROUP BY statement, it would make more sense to have an example like this:

select a a, sum(b) T from vu where a<4 group by a;

But both queries exhibit the same slowdown and memory increase, in any event.

----

_2006-May-13 15:09:39 by anonymous:_ {linebreak} This GROUP BY slowdown/memory increase is not specific to VIEWs. I repeated the test against a comparably sized table with the same results. You'll see this effect for any SELECT operating on a large number of rows using GROUP BY.

----

_2006-May-13 16:44:04 by anonymous:_ {linebreak} The slowdown first appears in SQLite 3.2.6 in check-in [2662].
----

_2006-May-24 13:19:29 by anonymous:_ {linebreak} Here's an actual real-life use of GROUP BY in SQLite <= 3.2.5: performing mathematical operations on every combination of rows in several large tables for statistical analysis. The GROUP BY algorithm change in 3.2.6 makes GROUP BY on huge cross joins unusable for this purpose, because it creates an intermediate result set the size of the product of the cross joins - several times larger than the (already huge) database itself. Indexing is not useful in this case because, by design, there is nothing to index: all table rows must be traversed. Older versions of SQLite performed this operation extremely efficiently because grouping took place in the main traversal loop. I would think that the old algorithm could still be used, but with the intermediate results kept in an index and a table in temp store instead of in memory.
#cfe8bd 1815 active 2006 May anonymous Parser Pending 3 3 Support of W3C-DTF (ISO 8601 subset) is incomplete The "Z" time-zone designator is ignored. Reference: http://www.w3.org/TR/NOTE-datetime

CREATE TABLE test(dt);
INSERT INTO "test" VALUES('2006-05-20T01:10:20+00:00');
INSERT INTO "test" VALUES('2006-05-20T01:10:20Z');
INSERT INTO "test" VALUES('2006-05-20T10:10:20+09:00');
SELECT datetime(dt) FROM test;

2006-05-20 01:10:20
2006-05-20 01:10:20
#e8e8bd 1816 active 2006 May anonymous VDBE Pending 1 2 Database corruption with pragma auto_vacuum We had a database created with PRAGMA auto_vacuum=1 that started returning the following message on a DELETE statement:

SQL error: database disk image is malformed

Running the VACUUM command and then running the same DELETE statement succeeds. Running PRAGMA integrity_check on the database (before the VACUUM command is issued) results in the following output:

sqlite> PRAGMA integrity_check;
*** in database main ***
Page 3393 is never used
Page 3398 is never used
Page 3400 is never used
Page 3401 is never used
Page 3402 is never used
Page 3405 is never used
Page 3406 is never used
sqlite> VACUUM;
sqlite> PRAGMA integrity_check;
ok

As a temporary workaround, we tried running PRAGMA integrity_check and, based on the result, deciding whether or not to run VACUUM, but this can consume too much time. If needed, I can send a small database that exhibits this problem.

_2006-May-22 21:45:47 by drh:_ {linebreak} The database is probably not helpful. What I need to know is:

*: What sequence of SQL statements do you issue to cause this to occur?
*: What operating system are you using?
*: Is the application multi-threaded?
*: Is the problem reproducible?
*: Are you using a precompiled binary or did you compile it yourself?
*: Does the problem go away if you turn off autovacuum?

----

_2006-May-22 22:11:09 by anonymous:_ {linebreak}

*: What sequence of SQL statements do you issue to cause this to occur? It is unknown exactly what all of the statements leading up to the corruption are. I can send the possible statements via private e-mail.
*: What operating system are you using? Windows XP Professional w/ Service Pack 2.
*: Is the application multi-threaded? Yes.
*: Is the problem reproducible? The corruption happens on occasion -- so far it is not known to be easily reproducible in a finite number of steps.
*: Are you using a precompiled binary or did you compile it yourself?
Self-compiled library. When we use the database in our application, it is contained in abstracted classes with concurrency control. *: Does the problem go away if you turn off autovacuum? We have not seen database corruption if auto_vacuum is off when the database is initially created. Is it possible to turn off auto vacuum after the database tables have been created (not via PRAGMA auto_vacuum, according to the docs)? ---- _2006-May-22 22:28:46 by anonymous:_ {linebreak} Rather than relying on trial and error, one technique the bug reporter might try to reproduce the problem is to take a snapshot of the database when it is in a known good state and save it somewhere, and then have every process that comes into contact with the database file log every SQLite command (and pragma) complete with millisecond-resolution timestamp and process/thread ID as follows: SELECT * FROM WHATEVER; -- 2006-05-23 14:44:45.237 PID 345 Thread 0 insert into blah values(3,4,5); -- 2006-05-23 14:50:15.345 PID 345 Thread 0 update foo set v=5 where y>4; -- 2006-05-23 15:05:12.930 PID 239 Thread 0 Should the problem happen again, each command could easily be replayed in an appropriate thread in the same order from the last known "good" state, greatly increasing the chances of repeating the bug. If repeating these commands does not lead to database corruption, it is fairly likely that the bug is in your multithreaded code, and not in SQLite. Perhaps SQLite already has such a command-tracing facility. I don't know. ---- _2006-May-22 22:42:04 by anonymous:_ sqlite3_trace(); It passes all the caller-generated SQL statements to a callback (although it doesn't fill in bindings). It also outputs a lot of "internal" SQL statements (VACUUM, for example, is a collection of operations on a temp table), but you should be able to recognize that stuff as something your app would never generate.
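The logging approach suggested above maps onto the sqlite3_trace() callback mentioned in the last remark. As a sketch from Python (the `log` list and timestamp format are illustrative, not from the ticket), each connection registers a trace callback that tags every executed statement with a timestamp and thread id so a corruption could later be replayed from a known-good snapshot:

```python
import sqlite3
import threading
from datetime import datetime, timezone

log = []

def trace(statement):
    # Called once per executed statement; record it with a
    # millisecond timestamp and the executing thread's id.
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S.%f")[:-3]
    log.append("%s -- %s Thread %d" % (statement, stamp, threading.get_ident()))

conn = sqlite3.connect(":memory:")
conn.set_trace_callback(trace)

conn.execute("CREATE TABLE blah(a, b, c)")
conn.execute("INSERT INTO blah VALUES (3, 4, 5)")

for line in log:
    print(line)
```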
#cfe8bd 1822 active 2006 May anonymous Duplicate 3 3 Table alias together with subquery seems not to work properly SELECT * FROM auth AS a LEFT JOIN (SELECT tm.team FROM teammbs AS tm) AS tr ON a.ateam=tr.team; Error message: No such column tr.team But if I run the sub-query by itself, it works fine. Of course, this example can be expressed differently, so no subquery is required. But the complete expression looks like this: SELECT a.auth, a.avalue FROM auth a LEFT JOIN (SELECT tm.member, tm.team FROM teammbs tm, team t WHERE tm.team=t.teamid AND (t._state<64 or (t._state>120 AND t._state<192)) AND (tm._state<64 or (tm._state>120 AND tm._state<192))) AS tr ON a.ateam=tr.team WHERE (a._state<64 or (a._state>120 AND a._state<192)) AND (a.auser='test' OR tr.member='test') ORDER BY a.auth; It works fine with MySQL 5, but produces the same error on SQLite 3: No such column tr.team. Any idea?
#f2dcdc 1850 active 2006 Jun anonymous Unknown Pending 2 1 NUMERIC data type error when read on uClinux I updated some data in a table's NUMERIC-type column on Windows or Linux, but when I read it back with "select *......." on uClinux 2.4.24 I get the wrong value. For example, the value I wrote is 12.5, but the value read back is 2.3534826093695e -18.5 (using the sqlite3_column_text API). I tried to get the value using the sqlite3_column_double API, but the result is also wrong. However, when I update data in this column on uClinux itself, I can read it back correctly! _2006-Jun-16 01:53:24 by drh:_ {linebreak} What CPU is this happening on? SQLite assumes that floating point values are stored as IEEE 64-bit floats in the same byte order as a 64-bit integer. If your chip does not match this expectation, then floating point won't work.
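drh's point about the on-disk format can be illustrated with a sketch: SQLite serializes a REAL value as the eight bytes of its IEEE 754 binary64 representation, big-endian, so a platform whose native doubles differ in byte order or format will misread values written elsewhere:

```python
import struct

# 12.5 as SQLite stores it on disk: IEEE 754 binary64, big-endian.
on_disk = struct.pack(">d", 12.5)
print(on_disk.hex())                  # 4029000000000000

# A conforming platform decodes the same bytes back to 12.5 ...
print(struct.unpack(">d", on_disk)[0])

# ... while interpreting them with the wrong byte order gives garbage,
# which is the kind of mismatch the uClinux reader may be hitting.
print(struct.unpack("<d", on_disk)[0])
```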
#f2dcdc 1851 active 2006 Jun anonymous Unknown Pending 2 1 "ORDER BY" error on uClinux When I use ORDER BY in a SELECT on uClinux 2.4.24, I get the error "SQL error or missing database", but the same program runs fine on Windows or Linux. _2006-Jun-16 11:20:15 by drh:_ {linebreak} This is certainly a strange error. Combined with #1850, it suggests a problem with your build, not a problem in SQLite. I have no ability to use or run uCLinux. So if the error cannot be reproduced on a desktop system, there is not much I can do to address the problem. I am afraid you are on your own on this one. ---- _2006-Jun-19 03:12:31 by anonymous:_ {linebreak} Note that SQLite 3.2.8 runs fine on this uClinux system, but SQLite 3.3.5 gives this error.
#cfe8bd 1856 active 2006 Jun anonymous Pending 2 3 SQLITE_OMIT_UTF16 breaks 'make test' When compiling sqlite 3.3.6 with -DSQLITE_OMIT_UTF16 and you say 'make test' it fails: make test ./libtool --mode=link gcc -g -O2 -DOS_UNIX=1 -DHAVE_USLEEP=1 -DHAVE_FDATASYNC=1 -I. -I./src -DSQLITE_DEBUG=2 -DSQLITE_MEMDEBUG=2 -DSQLITE_OMIT_UTF16 -I/usr/include -DTHREADSAFE=1 -DSQLITE_THREAD_OVERRIDE_LOCK=-1 -DSQLITE_OMIT_CURSOR -DTCLSH=1 -DSQLITE_TEST=1 -DSQLITE_CRASH_TEST=1 \ -DTEMP_STORE=1 -o testfixture ./src/btree.c ./src/date.c ./src/func.c ./src/os.c ./src/os_unix.c ./src/os_win.c ./src/os_os2.c ./src/pager.c ./src/pragma.c ./src/printf.c ./src/test1.c ./src/test2.c ./src/test3.c ./src/test4.c ./src/test5.c ./src/test6.c ./src/test7.c ./src/test_async.c ./src/test_md5.c ./src/test_server.c ./src/utf.c ./src/util.c ./src/vdbe.c ./src/where.c ./src/tclsqlite.c \ libsqlite3.la -L/usr/lib -ltcl8.4 -ldl -lpthread -lieee -lm gcc -g -O2 -DOS_UNIX=1 -DHAVE_USLEEP=1 -DHAVE_FDATASYNC=1 -I. -I./src -DSQLITE_DEBUG=2 -DSQLITE_MEMDEBUG=2 -DSQLITE_OMIT_UTF16 -I/usr/include -DTHREADSAFE=1 -DSQLITE_THREAD_OVERRIDE_LOCK=-1 -DSQLITE_OMIT_CURSOR -DTCLSH=1 -DSQLITE_TEST=1 -DSQLITE_CRASH_TEST=1 -DTEMP_STORE=1 -o .libs/testfixture ./src/btree.c ./src/date.c ./src/func.c ./src/os.c ./src/os_unix.c ./src/os_win.c ./src/os_os2.c ./src/pager.c ./src/pragma.c ./src/printf.c ./src/test1.c ./src/test2.c ./src/test3.c ./src/test4.c ./src/test5.c ./src/test6.c ./src/test7.c ./src/test_async.c ./src/test_md5.c ./src/test_server.c ./src/utf.c ./src/util.c ./src/vdbe.c ./src/where.c ./src/tclsqlite.c ./.libs/libsqlite3.so -L/usr/lib -ltcl8.4 -ldl -lpthread -lieee -lm -Wl,--rpath -Wl,/home/cla/proj/caissadb/sqlite/sqlite/lib ./src/test1.c: In function 'Sqlitetest1_Init': ./src/test1.c:3742: error: 'unaligned_string_counter' undeclared (first use in this function) ./src/test1.c:3742: error: (Each undeclared identifier is reported only once ./src/test1.c:3742: error: for each function it appears in.) 
make: *** [testfixture] Error 1 Maybe there is a '#ifndef SQLITE_OMIT_UTF16' / '#endif' needed around Tcl_LinkVar(interp, "unaligned_string_counter", (char*)&unaligned_string_counter, TCL_LINK_INT); in Line 3742 in file src/test1.c? Regards.
#f2dcdc 1861 active 2006 Jun anonymous Pager Pending 1 1 Problem in using Triggers and multithreading I am using an SQLite3 database with triggers. The database is used by my processing engine, which has 10 threads accessing the same database. A trigger is used to update and insert records in a table, and that same table is also updated directly by the threads. The processing engine crashes whenever a trigger updates or inserts a record in the table. Can you tell me how to configure my existing engine to avoid crashing? Is it safe to use triggers?
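Triggers themselves are not the usual culprit in crashes like this; sharing one connection across threads without serialization is. As a minimal sketch (the `jobs`/`log` tables, trigger name, and worker counts are all illustrative, not from the ticket), one shared connection can be driven safely from many threads if every use of it is guarded by a lock:

```python
import sqlite3
import threading

conn = sqlite3.connect(":memory:", check_same_thread=False)
lock = threading.Lock()  # serialize every use of the shared connection

conn.execute("CREATE TABLE jobs(id INTEGER PRIMARY KEY, payload)")
conn.execute("CREATE TABLE log(msg)")
# The trigger writes into a table that threads also drive writes through.
conn.execute("""
    CREATE TRIGGER jobs_ai AFTER INSERT ON jobs
    BEGIN
        INSERT INTO log VALUES ('inserted ' || NEW.id);
    END
""")

def worker(n):
    for i in range(50):
        with lock:  # never let two threads use the connection at once
            conn.execute("INSERT INTO jobs(payload) VALUES (?)",
                         ("%d-%d" % (n, i),))

threads = [threading.Thread(target=worker, args=(n,)) for n in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

with lock:
    print(conn.execute("SELECT count(*) FROM jobs").fetchone()[0])  # 500
    print(conn.execute("SELECT count(*) FROM log").fetchone()[0])   # 500
```

Giving each thread its own connection (with a busy timeout) is the other common design; the point is that a single connection must never be used concurrently.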
#f2dcdc 1862 active 2006 Jun anonymous TclLib Pending tclguy 1 1 SQLite cannot load/import data from file I found the problem when I tried to load a data file into a table. To reproduce it, I put together a minimal test case. DATA FILE - test.dat --------------------------- 1 0 0 2 90000 0 3 366000 0 --------------------------- Log from SQLite: ------------------------------------------------------ khronos-yajun>sqlite3 test SQLite version 3.3.6 Enter ".help" for instructions sqlite> create table test (id INT, x1 INT, x2 INT); sqlite> .import test.dat test test.dat line 1: expected 3 columns of data but found 1 sqlite> .exit ------------------------------------------------------- The problem also exists when I use the tcl wrapper (sql copy abort test test.dat). I looked into the code in src/tclsqlite.c, at lines 1045 nByte = strlen(zSql); 1046 rc = sqlite3_prepare(pDb->db, zSql, 0, &pStmt, 0); 1047 sqlite3_free(zSql); Is the third argument of sqlite3_prepare supposed to be the length of zSql, hence nByte? Also at lines 1070 zSql[j++] = ')'; 1071 zSql[j] = 0; 1072 rc = sqlite3_prepare(pDb->db, zSql, 0, &pStmt, 0); 1073 free(zSql); If I change these two places to pass the length of zSql, I seem to succeed. Yajun _2006-Sep-27 16:25:47 by anonymous:_ {linebreak} This is a duplicate of #1797
#cfe8bd 1867 active 2006 Jun anonymous BTree Pending 1 3 Access Violation after setting a new page_size An access violation occurred on W2K when I tried to create a new table in an empty database. The following sequence of SQL commands was used: select count(*)==2 as cnt from sqlite_master where type='table' and tbl_name in ('tbl1', 'tbl2'); If cnt equals 0, I execute the command pragma page_size=4096; and then create a new table. I guess that some internal structures have already been initialized by this time, so when I try to create the new table the page_size is lower than needed. We overwrite memory in the function zeroPage in the instruction: memset(&data[hdr], 0, pBt->usableSize - hdr); The size of the data structure is less than pBt->usableSize. Below is the result after the memset: 0:000> dt MemPage 004c3cf0
+0x000 isInit : 0 ''
+0x001 idxShift : 0 ''
+0x002 nOverflow : 0 ''
+0x003 intKey : 0x1 ''
+0x004 leaf : 0x1 ''
+0x005 zeroData : 0 ''
+0x006 leafData : 0x1 ''
+0x007 hasData : 0 ''
+0x008 hdrOffset : 0 ''
+0x009 childPtrSize : 0 ''
+0x00a maxLocal : 0
+0x00c minLocal : 0
+0x00e cellOffset : 0
+0x010 idxParent : 0
+0x012 nFree : 0xf94
+0x014 nCell : 0
+0x018 aOvfl : [5] _OvflCell
+0x040 pBt : (null)
+0x044 aData : (null)
+0x048 pgno : 0
+0x04c pParent : (null)
0012ea50 10006861 004c3cf0 0000000d 00000064 dblited!decodeFlags+0x80 [D:\sqllite\sqlite-3.3.6\btree.c @ 1349]
0012ea70 10006710 004c3cf0 0000000d 004c3cf0 dblited!zeroPage+0xd0 [D:\sqllite\sqlite-3.3.6\btree.c @ 1466]
0012ea8c 10006215 002fd390 002fd390 00000000 dblited!newDatabase+0xf9 [D:\sqllite\sqlite-3.3.6\btree.c @ 2061]
0012eaa0 10052ba0 002f7c30 00000001 0012f0e4 dblited!sqlite3BtreeBeginTrans+0xd6 [D:\sqllite\sqlite-3.3.6\btree.c @ 2141]
0012f0a4 10057cf5 004c3d80 0012f13c 0012f478 dblited!sqlite3VdbeExec+0x2c6d [D:\sqllite\sqlite-3.3.6\vdbe.c @ 2386]
0012f0e4 00412801 004c3d80 0012f1d4 0012f478 dblited!sqlite3_step+0x1db [D:\sqllite\sqlite-3.3.6\vdbeapi.c @ 223]
#cfe8bd 1872 active 2006 Jun anonymous Pending 4 3 sqlite3_open doesn't support RFC1738 format for filename sqlite3_open only supports UTF-8 encoding as the format for its filename argument (http://www.sqlite.org/capi3ref.html#sqlite3_open). If your application receives an RFC 1738-encoded URL for the filename, it first has to be decoded to plain UTF-8 for use in SQLite. It would be nice if it could instead be passed directly to sqlite3_open. Is RFC 1738 URL decoding support planned for SQLite? (RFC1738 link: http://www.cse.ohio-state.edu/cgi-bin/rfc/rfc1738.html)
#cfe8bd 1878 active 2006 Jun anonymous CodeGen Pending 2 3 No index used when specifying alias name in ORDER BY clause Using an alias name in the ORDER BY clause prevents indices from being used in the query for sorting purposes: For this schema: CREATE TABLE t1 (c1, c2); CREATE TABLE t2 (c3, c4); CREATE INDEX t1_idx ON t1(c2); the following select query: EXPLAIN QUERY PLAN SELECT t1.c2 AS col2, t2.c4 AS col4 FROM t1 LEFT JOIN t2 ON t1.c1=t2.c3 ORDER BY t1.c2; will indeed use index t1_idx: sqlite> EXPLAIN QUERY PLAN SELECT t1.c2 AS col2, t2.c4 AS col4 FROM t1 LEFT JOIN t2 ON t1.c1=t2.c3 ORDER BY t1.c2; 0|0|TABLE t1 WITH INDEX t1_idx 1|1|TABLE t2 However, when using the alias name =col2= in the =ORDER BY= clause, the index won't be used: sqlite> EXPLAIN QUERY PLAN SELECT t1.c2 AS col2, t2.c4 AS col4 FROM t1 LEFT JOIN t2 ON t1.c1=t2.c3 ORDER BY col2; 0|0|TABLE t1 1|1|TABLE t2 IMHO, the same index should be used in both queries? _2006-Jun-30 13:54:10 by anonymous:_ {linebreak} Not sure whether it's a different issue, but when using a second column in the ORDER BY clause, no index will be used either: sqlite> EXPLAIN QUERY PLAN SELECT t1.c2 AS col2, t2.c4 AS col4 FROM t1 LEFT JOIN t2 ON t1.c1=t2.c3 ORDER BY t1.c2, t2.c4; 0|0|TABLE t1 1|1|TABLE t2 Personally, I'd expect sqlite to use the =t1_idx= index as well to fulfill the primary ordering? ---- _2006-Jun-30 16:04:31 by anonymous:_ {linebreak} As a workaround try "ORDER BY 1" ---- _2006-Jul-03 08:41:01 by anonymous:_ {linebreak} Sorry, I'm not sure how "ORDER BY 1" would be a workaround, when I really need the results to be sorted by table column data... (I don't want to start a discussion in the bug tracker, so you're welcome to take any suggestions/answers to the sqlite-user mailing list, which I also monitor.) ---- _2006-Jul-03 15:51:11 by anonymous:_ {linebreak} I'm not the poster of the previous comment, but ORDER BY n orders by result-column index. 
In your case, using ORDER BY 1, the result will be ordered by the first result column. ---- _2006-Jul-04 07:33:30 by anonymous:_ {linebreak} Thanks for the clarification. This would be a workaround for the first problem mentioned, but when sorting by two columns still no index will be used, even when using =ORDER BY 1,2= ---- _2006-Jul-04 21:34:24 by anonymous:_ {linebreak} SQLite really needs a way to explicitly state which index(es) to use. Perhaps something similar to Oracle's comment hints.
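For reference, the =ORDER BY 1= form discussed above sorts by result-column position, which for the ticket's query is the aliased =col2=; a sketch (with a few illustrative rows added to the ticket's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (c1, c2);
    CREATE TABLE t2 (c3, c4);
    CREATE INDEX t1_idx ON t1(c2);
    INSERT INTO t1 VALUES (1, 'b'), (2, 'a'), (3, 'c');
""")

# ORDER BY 1 means "order by the first result column" (col2 here),
# so it is equivalent to ORDER BY t1.c2 for this query.
rows = conn.execute("""
    SELECT t1.c2 AS col2, t2.c4 AS col4
    FROM t1 LEFT JOIN t2 ON t1.c1 = t2.c3
    ORDER BY 1
""").fetchall()
print(rows)   # [('a', None), ('b', None), ('c', None)]
```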
#f2dcdc 1882 active 2006 Jul anonymous Pending 1 1 Wrong algorithm of SQLITE_VERSION_NUMBER calculation The sqlite3.h comment describing how the numeric version number is calculated is as follows: "The SQLITE_VERSION_NUMBER is an integer with the value (X*100000 + Y*1000 + Z). For example, for version "3.1.1beta", SQLITE_VERSION_NUMBER is set to 3001001." But the value of SQLITE_VERSION_NUMBER is greater than the equation above suggests. The value X*100000 should be changed to X*1000000 (one million).
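The corrected packing is X*1000000 + Y*1000 + Z, which does yield the 3001001 quoted in the comment for version "3.1.1"; a quick sketch:

```python
def version_number(x, y, z):
    # SQLITE_VERSION_NUMBER packing: major*1000000 + minor*1000 + patch
    return x * 1000000 + y * 1000 + z

print(version_number(3, 1, 1))   # 3001001  (version "3.1.1")
print(version_number(3, 3, 6))   # 3003006  (version "3.3.6")
```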
#e8e8bd 1884 active 2006 Jul anonymous Pending 3 2 pragma table_info caches results from previous query This problem is observed with pysqlite's latest Windows build 2.3.2 and others. It does not occur on unix-based builds, which is why I suspect the issue is in sqlite, since pysqlite's code is platform-neutral. If you get a result from a "pragma table_info()" call and do not consume all the results, then a subsequent call to the same statement does not return up-to-date results, i.e. if the table had been dropped in between. It behaves as though the results of "pragma table_info" are globally cached somewhere, ignoring the fact that it was executed again. This test program illustrates the problem:
from pysqlite2 import dbapi2 as sqlite
connection = sqlite.connect(':memory:')
# check for a nonexistent table
c = connection.execute("pragma table_info(users)")
row = c.fetchone()
assert row is None  # its good.
# now create the table
connection.execute("""
create table users (
    foo VARCHAR(10),
    name VARCHAR(40)
)
""")
# do the table_info pragma. returns two rows
c = connection.execute("pragma table_info(users)")
# get the first row
row = c.fetchone()
print row
# but then dont get the second, close out the cursor instead.
#row2 = c.fetchone()  # uncomment to fully consume both rows, then it works
c.close()
c = None
# rollback too.
connection.rollback()
# now drop the table
connection.execute("DROP TABLE users")
print "dropped"
# now it should be gone, right? well it is, but the pragma
# call starts off with the former result set
c = connection.execute("pragma table_info(users)")
row = c.fetchone()
print row
assert row is None  # fails.
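A minimal recheck of the same sequence against the stdlib sqlite3 module (Python 3, modern SQLite, where this has not been observed to misbehave) looks like this; on a build with the caching bug, the stale result set would reappear after the DROP:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
assert conn.execute("pragma table_info(users)").fetchone() is None  # no table yet

conn.execute("create table users (foo VARCHAR(10), name VARCHAR(40))")

c = conn.execute("pragma table_info(users)")
first = c.fetchone()   # read only the first of the two rows
print(first[1])        # foo
c.close()              # abandon the cursor without consuming row 2

conn.execute("DROP TABLE users")

# With the caching bug the old result set would reappear here;
# a correct build returns no rows for the dropped table.
after = conn.execute("pragma table_info(users)").fetchone()
print(after)           # None
```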
#cfe8bd 1885 active 2006 Jul anonymous Shell Pending 2 3 sqlite3 .mode insert and .dump do not list column names for selects In sqlite3, .mode insert does not list column names for SELECTs - it should. This makes it problematic to dump selected columns from tables when intending to add or delete columns. .dump doesn't list column names either; IMHO it should. Consider sqlite> .mode tabs{linebreak} sqlite> select * from users;{linebreak} ed 2006-07-05 52{linebreak} sqlite> .mode insert{linebreak} sqlite> select abs_tgt from users;{linebreak} INSERT INTO table VALUES(52);{linebreak} sqlite> Obviously the workaround is to hand-edit the output SQL. _2006-Jul-11 10:20:08 by anonymous:_ {linebreak} I've just noticed it doesn't include the table name in the INSERT statements either.
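Until the shell emits column names itself, the desired output can be produced by hand; a sketch (the `dump_inserts` helper is hypothetical, and `repr()` stands in for proper SQL quoting) that writes INSERT statements with an explicit table name and column list:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name, date, abs_tgt)")
conn.execute("INSERT INTO users VALUES ('ed', '2006-07-05', 52)")

def dump_inserts(conn, table, columns):
    # Emit one INSERT per row, naming the table and the selected columns
    # so the output survives later column additions or removals.
    # Note: repr() is only a stand-in for real SQL literal quoting.
    cur = conn.execute("SELECT %s FROM %s" % (", ".join(columns), table))
    for row in cur:
        values = ", ".join(repr(v) for v in row)
        yield "INSERT INTO %s(%s) VALUES(%s);" % (table, ", ".join(columns), values)

for stmt in dump_inserts(conn, "users", ["abs_tgt"]):
    print(stmt)   # INSERT INTO users(abs_tgt) VALUES(52);
```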
#cfe8bd 1893 active 2006 Jul anonymous Pending 3 3 sqlite doesn't use indexes containing primary key in prim. key selects I have this table: CREATE TABLE IF NOT EXISTS 'customers' ( 'rowid' INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, 'fname' CHAR(40) NOT NULL, 'sname' CHAR(40) NOT NULL, 'birthno' CHAR(11) NULL) And this index: CREATE UNIQUE INDEX IF NOT EXISTS 'idx_customers_sname' ON 'customers' ( 'sname' ASC, 'fname' ASC, 'rowid' ASC ); The command SELECT * FROM customers ORDER BY sname ASC, fname ASC, rowid ASC; doesn't use the created index. The command SELECT * FROM customers ORDER BY sname ASC, fname ASC; uses index idx_customers_sname. I think this is a bug, but maybe (I don't know) it is by design. If I don't specify rowid in ORDER BY, is the resultset ordered by rowid anyway? _2006-Jul-24 16:02:29 by anonymous:_ {linebreak} In SQL, single quotes are used around string literals, and double quotes are used around identifiers where required to enclose keywords and/or embedded spaces. In your case no quotes are required at all, because your table and column identifiers are contiguous (i.e. do not contain embedded spaces) non-keyword names. If you are going to include unnecessary quotes then you should at least use the correct ones. CREATE TABLE IF NOT EXISTS "customers" ( "rowid" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, "fname" CHAR(40) NOT NULL, "sname" CHAR(40) NOT NULL, "birthno" CHAR(11) NULL); CREATE UNIQUE INDEX IF NOT EXISTS "idx_customers_sname" ON "customers" ( "sname" ASC, "fname" ASC, "rowid" ASC ); Aside from that, this does look like a bug. SQLite is doing an unnecessary sort for the first query, and correctly using the index for the second. I suspected that it might be related to handling of the special column name rowid, but it does the same thing if rowid is replaced with a more generic name like id, as shown below. 
SQLite version 3.3.6 Enter ".help" for instructions sqlite> CREATE TABLE IF NOT EXISTS "customers" ( ...> "id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, ...> "fname" CHAR(40) NOT NULL, ...> "sname" CHAR(40) NOT NULL, ...> "birthno" CHAR(11) NULL); sqlite> sqlite> CREATE UNIQUE INDEX IF NOT EXISTS "idx_customers_sname" ...> ON "customers" ( "sname" ASC, "fname" ASC, "id" ASC ); sqlite> sqlite> sqlite> explain query plan SELECT * FROM customers ORDER BY sname ASC, fname ASC , id ASC; 0|0|TABLE customers sqlite> explain query plan SELECT * FROM customers ORDER BY sname ASC, fname ASC ; 0|0|TABLE customers WITH INDEX idx_customers_sname ORDER BY sqlite> sqlite> .explain on sqlite> sqlite> explain SELECT * FROM customers ORDER BY sname ASC, fname ASC, id ASC; addr opcode p1 p2 p3 ---- -------------- ---------- ---------- --------------------------------- 0 OpenVirtual 1 5 keyinfo(3,BINARY,BINARY) 1 Goto 0 34 2 Integer 0 0 3 OpenRead 0 2 4 SetNumColumns 0 4 5 Rewind 0 19 6 Rowid 0 0 7 Column 0 1 8 Column 0 2 9 Column 0 3 10 MakeRecord 4 0 11 Column 0 2 12 Column 0 1 13 Rowid 0 0 14 Sequence 1 0 15 Pull 4 0 16 MakeRecord 5 0 17 IdxInsert 1 0 18 Next 0 6 19 Close 0 0 20 OpenPseudo 2 0 21 SetNumColumns 2 4 22 Sort 1 32 23 Integer 1 0 24 Column 1 4 25 Insert 2 0 26 Column 2 0 27 Column 2 1 28 Column 2 2 29 Column 2 3 30 Callback 4 0 31 Next 1 23 32 Close 2 0 33 Halt 0 0 34 Transaction 0 0 35 VerifyCookie 0 2 36 Goto 0 2 37 Noop 0 0 sqlite> explain SELECT * FROM customers ORDER BY sname ASC, fname ASC; addr opcode p1 p2 p3 ---- -------------- ---------- ---------- --------------------------------- 0 Noop 0 0 1 Goto 0 21 2 Integer 0 0 3 OpenRead 0 2 4 SetNumColumns 0 4 5 Integer 0 0 6 OpenRead 2 4 keyinfo(3,BINARY,BINARY) 7 Rewind 2 18 8 RowKey 2 0 9 IdxIsNull 0 17 10 IdxRowid 2 0 11 MoveGe 0 0 12 Rowid 0 0 13 Column 0 1 14 Column 0 2 15 Column 0 3 16 Callback 4 0 17 Next 2 8 18 Close 0 0 19 Close 2 0 20 Halt 0 0 21 Transaction 0 0 22 VerifyCookie 0 2 23 Goto 0 2 24 Noop 0 0 
sqlite> ---- _2006-Aug-03 17:33:51 by anonymous:_ {linebreak} Thank you for the clarification of single- and double-quote usage. I will drop the quotes completely, since using double quotes is a little bit annoying inside C string literals... It seems that the problem with the index and sorting on the primary key is independent of the primary key column name. In fact, I was previously using "id" :-) and the result was the same as you mentioned.
#f2dcdc 1900 active 2006 Jul anonymous Unknown Pending a.rottmann 1 1 CURRENT_TIMESTAMP keyword not inserting UTC date in column This is the schema for my table. create table char (player varchar(64) NOT NULL default '~', name varchar(64) NOT NULL default '~', date timestamp NOT NULL default current_timestamp) Whenever an insert is made to the table, the column 'date' doesn't get a UTC timestamp; it gets the string value 'current_timestamp'. Is my schema wrong? _2006-Jul-30 22:31:06 by anonymous:_ {linebreak} *doesn't get a UTC timestamp ---- _2006-Jul-31 00:38:49 by anonymous:_ {linebreak} Works fine for me. What's the exact syntax of your INSERT statement?
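For comparison, the declared default behaves as documented when current_timestamp is left unquoted in the schema and the column is omitted from the INSERT; a sketch (the table is renamed to `char_tbl` here purely for illustration, and the guess that the reporter's INSERT supplied a quoted 'current_timestamp' literal is an assumption, since the original INSERT was never shown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    create table char_tbl (
        player varchar(64) NOT NULL default '~',
        name   varchar(64) NOT NULL default '~',
        date   timestamp   NOT NULL default current_timestamp
    )
""")
# Omit the date column entirely so the declared default fires.
conn.execute("INSERT INTO char_tbl(player, name) VALUES ('p1', 'n1')")

(stamp,) = conn.execute("SELECT date FROM char_tbl").fetchone()
# A real UTC timestamp like "2006-07-30 22:31:06", not the
# literal string 'current_timestamp'.
print(stamp)
```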
#e8e8bd 1901 active 2006 Jul anonymous Unknown Pending adamd 2 2 problem in select request with an aliased table I have a table with 3 columns: c0, c1 and c2. My query is: select * from (select *, 'test' as new_col from table) as tmp inner join (select 'test' as new_col) as tmp1 on tmp.new_col = tmp1.new_col; The column names in the result of this query (sqlite 3-3.3.6) are: |tmp.table.c0|tmp.table.c1|tmp.table.c2|tmp.new_col|tmp1.new_col In sqlite 3-3.2.7, the column names were: |c0|c1|c2|collected|new_col|new_col Before this version, my query ran on mysql, postgresql and sqlite. Now I cannot use this query with the new sqlite version. _2006-Jul-31 10:11:59 by anonymous:_ {linebreak} sorry, in sqlite 3-3.2.7 the column names are: |c0|c1|c2|new_col|new_col ---- _2007-Jan-08 14:52:43 by anonymous:_ {linebreak} I had a similar problem with SQLite in PHP, see my bug report here: http://bugs.php.net/bug.php?id=40064
#f2dcdc 1941 active 2006 Aug anonymous Pending 1 1 Unresolved _sqlite3ExprCodeAndCache with SQLITE_OMIT_TRIGGER If =SQLITE_OMIT_TRIGGER= is set, the linker complains about an unresolved =_sqlite3ExprCodeAndCache= symbol. =sqlite3ExprCodeAndCache= is defined in =expr.c= and wrapped with =#ifndef SQLITE_OMIT_TRIGGER=. However, the references in insert.c, line 536, and update.c, lines 348 and 362, are not wrapped with #ifndef =SQLITE_OMIT_TRIGGER=. I followed the suggestion quoted below (posted earlier to this list) to no avail. Is it safe (or even required?) to change sqliteInt.h to #ifndef SQLITE_OMIT_TRIGGER void sqlite3ExprCodeAndCache(Parse*, Expr*); #else # define sqlite3ExprCodeAndCache(A,B) #endif In the mailing list, DRH argued that the above change will probably fail and suggested that a safer fix would be to remove the #ifndef SQLITE_OMIT_TRIGGER from around the sqlite3ExprCodeAndCache function. _2006-Oct-12 17:35:32 by anonymous:_ {linebreak} The problem is still present in 3.3.8. Removing the #ifndef SQLITE_OMIT_TRIGGER from around the sqlite3ExprCodeAndCache function seems to fix it. Could you commit this?
#e8e8bd 1946 new 2006 Aug anonymous Unknown New 2 2 .read file fails on blob fields with end-of-file char I have a table with a BLOB field. I put binary data there that contains the 0x1A (end-of-file) character. Everything is fine until I dump the table to a file and then try to import that file. sqlite3 my_db {linebreak}>.output my_file {linebreak}>.dump table_with_blob {linebreak}>.exit {linebreak}del my_db sqlite3 my_db {linebreak}>.read my_file This fails with "Incomplete SQL: ..."; the SQL breaks off just before the 0x1A character. I'm on Windows. Opening the file as a binary file would probably solve this.
#cfe8bd 1947 active 2006 Aug anonymous Shell Pending 3 3 ".mode insert" works badly with BLOBs .mode insert displays BLOBs as strings, which isn't very good for embedded NULs. Having output more like that of .dump would be better, IMO. sqlite> select * from t; INSERT INTO table VALUES(''); sqlite> .dump BEGIN TRANSACTION; CREATE TABLE t(f BLOB); INSERT INTO "t" VALUES(X'0041'); COMMIT;
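The .dump-style output shown above uses an X'..' blob literal, which round-trips embedded NULs exactly; a sketch of the rendering .mode insert would need:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(f BLOB)")
conn.execute("INSERT INTO t VALUES (?)", (b"\x00A",))  # embedded NUL byte

# Render the blob the way .dump does: an X'..' hex literal.
(blob,) = conn.execute("SELECT f FROM t").fetchone()
literal = "X'%s'" % blob.hex().upper()
print(literal)   # X'0041'

# The literal parses back to the identical blob.
(back,) = conn.execute("SELECT " + literal).fetchone()
print(back == blob)   # True
```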
#cfe8bd 1948 active 2006 Aug anonymous Shell Pending 2 3 Double quotes are not escaped in csv mode If text is exported using "csv" mode, double quotes in strings are not escaped. Generally, double quotes inside a quoted CSV field should be escaped by doubling them. I.e., 'This is a "test".' should come out as "This is a ""test""." This doesn't appear to be the behavior SQLite uses, so in the meantime I'll have to export my data using another method and then transform that data into CSV for my import script.
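The quote-doubling convention described above (later codified in RFC 4180) is what, for example, Python's csv module produces; a sketch for comparison:

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf)
# A field containing double quotes must itself be quoted, with each
# inner quote doubled: "This is a ""test""."
writer.writerow(['This is a "test".', "plain"])
print(buf.getvalue().strip())   # "This is a ""test"".",plain
```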
#cfe8bd 1953 active 2006 Sep anonymous TclLib Pending 4 3 Fix for false 64-bit comparisons "make test" failures on Cygwin The trivial patch below allows Cygwin to correctly pass all (two dozen or so) 64-bit integer-related tests in "make test". It does so by treating all 64-bit integer SQL results as strings. (Note: SQLite has always produced correct 64-bit integer results, it's just that the test harness on Cygwin produces false failures without this patch.) There is no impact to other platforms, and allows us unfortunate Windows users to be useful members of society. RCS file: /sqlite/sqlite/src/tclsqlite.c,v retrieving revision 1.172 diff -u -r1.172 tclsqlite.c --- src/tclsqlite.c 31 Aug 2006 15:07:15 -0000 1.172 +++ src/tclsqlite.c 1 Sep 2006 17:27:44 -0000 @@ -432,7 +432,12 @@ if( v>=-2147483647 && v<=2147483647 ){ pVal = Tcl_NewIntObj(v); }else{ +#ifndef __CYGWIN__ pVal = Tcl_NewWideIntObj(v); +#else + int bytes = sqlite3_value_bytes(pIn); + pVal = Tcl_NewStringObj((char *)sqlite3_value_text(pIn), bytes); +#endif } break; } @@ -1420,7 +1425,11 @@ if( v>=-2147483647 && v<=2147483647 ){ pVal = Tcl_NewIntObj(v); }else{ +#ifndef __CYGWIN__ pVal = Tcl_NewWideIntObj(v); +#else + pVal = dbTextToObj((char *)sqlite3_column_text(pStmt, i)); +#endif } break; } Example test failures before patch: $ ./testfixture.exe test/misc2.testmisc2-1.1... Ok misc2-1.2... Ok misc2-2.1... Ok misc2-2.2... Ok misc2-2.3... Ok misc2-3.1... Ok misc2-4.1... Expected: [4000000000] Got: [-294967296] misc2-4.2... Expected: [4000000000 2147483648] Got: [-294967296 -2147483648] misc2-4.3... Ok misc2-4.4... Expected: [1 2147483648 2147483647] Got: [1 -2147483648 2147483647] misc2-4.5... Expected: [1 4000000000 2147483648 2147483647] Got: [1 -294967296 -2147483648 2147483647] misc2-4.6... Expected: [1 2147483647 2147483648 4000000000] Got: [1 2147483647 -2147483648 -294967296] misc2-5.1... Ok misc2-6.1... Ok misc2-7.1... Ok misc2-7.2... Ok misc2-7.3... Ok misc2-7.4... Ok misc2-7.5... 
Ok misc2-7.6... Ok misc2-7.7... Ok misc2-7.8... Ok misc2-8.1... Ok misc2-9.1... Ok misc2-9.2... Ok misc2-9.3... Ok misc2-10.1... Ok Thread-specific data deallocated properly 5 errors out of 28 tests Failures on these tests: misc2-4.1 misc2-4.2 misc2-4.4 misc2-4.5 misc2-4.6 After patch applied: $ ./testfixture.exe test/misc2.testmisc2-1.1... Ok misc2-1.2... Ok misc2-2.1... Ok misc2-2.2... Ok misc2-2.3... Ok misc2-3.1... Ok misc2-4.1... Ok misc2-4.2... Ok misc2-4.3... Ok misc2-4.4... Ok misc2-4.5... Ok misc2-4.6... Ok misc2-5.1... Ok misc2-6.1... Ok misc2-7.1... Ok misc2-7.2... Ok misc2-7.3... Ok misc2-7.4... Ok misc2-7.5... Ok misc2-7.6... Ok misc2-7.7... Ok misc2-7.8... Ok misc2-8.1... Ok misc2-9.1... Ok misc2-9.2... Ok misc2-9.3... Ok misc2-10.1... Ok Thread-specific data deallocated properly 0 errors out of 28 tests Failures on these tests: The only new regression on Cygwin is this test, which is expected: types3-2.3... Expected: [wideInt] Got: [] _2006-Sep-01 18:55:25 by drh:_ {linebreak} The TCL interface is more than just part of the test harness. A lot of people use the TCL interface as part of their applications. I believe what this patch does is mask a real problem. I would prefer to fix the underlying problem, not just treat the symptom. ---- _2006-Sep-02 02:48:57 by anonymous:_ {linebreak} I have no interest in fixing bugs in Tcl itself on Cygwin. I just want to reliably build and test SQLite. The proposed fix is purely pragmatic and is intended only for the test harness. Indeed, when dealing with testing only, the fix is not Cygwin-specific and would work on any platform. The test harness under stock Cygwin as it stands simply does not work for 64 bit values. When you see such a failure you assume that SQLite is in error. Perhaps a compromise can be made and the code fix in question can be wrapped in #ifdef SQLITE_TESTFIXTURE or equivalent instead of #ifdef __CYGWIN__. I would hate to see someone else waste any time on this trivially fixable issue. 
---- _2006-Sep-02 13:27:08 by drh:_ {linebreak} Perhaps you could put an "if" statement in the test scripts that skips over the tests that do not work if running under cygwin. You can probably figure out if you are running under cygwin by looking at elements of the tcl_platform array. ---- _2006-Sep-02 13:48:56 by drh:_ {linebreak} I retract my previous suggestion. I do not want such patches in the SQLite source tree. I will resist any patches such as shown here because they are really hacks to work around a faulty Tcl build on Cygwin. The correct way to fix this is to fix the Tcl build for Cygwin. This is probably as simple as downloading a copy of Tcl and recompiling. I'm curious to know why the default Tcl build for Cygwin only supports 32-bit integers. Is there some problem with 64-bit integer support on Cygwin? The patch shown in the Description section above is not good because it presumes that Cygwin will always be broken. I think a better assumption is that Cygwin will get fixed. And I do not want to cripple the TCL interface to work around a bug that is not related to SQLite and which might not exist on every system. That is *so* wrong. I will be willing to put in a test that checks for the cygwin brokenness and prints a warning to the user. Perhaps something like this: if {"*[db eval {SELECT 10000000000}]*"!="*10000000000*"} { puts "*********** WARNING *************" puts "Your build of TCL only supports 32-bit integers." puts "This will cause many test failures. To run these" puts "tests you must install a version of TCL that supports" puts "64-bit integers." exit } The question is, does that test correctly detect the broken Cygwin? Since I have no ready access to a windows machine, I have no way of testing it. ---- _2006-Sep-02 14:06:28 by anonymous:_ {linebreak} Then you would have an on-going maintenance issue with future tests. If'ing out valid tests just masks the problem and defeats the purpose of having a test regression suite. 
If a test fails legitimately, it should be reported as such. But these particular 64-bit tests work correctly if the simple proposed patch to the test harness is checked in. There is nothing wrong with the tests themselves - just the test harness on certain platforms whose Tcl does not support 64-bit integers for whatever reason. Is the purpose of the test suite to test SQLite or Tcl implementations? I know that Cygwin is considered a tier "C" platform for SQLite, but appreciate that from a Cygwin environment I and many others have reported at least a couple of dozen non-platform-specific SQLite bugs over the past year. You probably have as many or more Cygwin users on the mailing list than Mac OSX users. Why put up artificial ideological roadblocks? ---- _2006-Sep-02 14:10:35 by anonymous:_ {linebreak} Please do not check in the "Your build of TCL only supports 32-bit integers" test. It is counter-productive to exit when the great majority of tests will pass. Such a check would basically exclude stock Cygwin installs from testing SQLite. Given the choice between having a broken test harness and this TCL 32-bit check, it is more useful to have the broken test harness. ---- _2006-Sep-04 01:25:00 by anonymous:_ {linebreak} {link: http://sourceforge.net/tracker/index.php?func=detail&aid=1551762&group_id=10894&atid=110894 Cygwin Tcl 8.5 64-bit integer math bug report} {link: http://sourceforge.net/tracker/download.php?group_id=10894&atid=110894&file_id=191898&aid=1551762 Cygwin Tcl 8.5a5 64-bit integer math fix}
#f2dcdc 1954 active 2006 Sep anonymous Unknown Pending 1 1 Dual Core Processor Lockup I seem to be seeing a problem on dual core processors in which the Open call locks up and does not return or throw an exception. It does not occur every time, but occurs around 50% of the time. I have not seen the problem on non dual core processors. _2006-Sep-02 21:06:38 by anonymous:_ {linebreak} This ticket is way too vague to be actionable. What operating system? AMD or Intel? What specific version of SQLite? Was the library precompiled or did you compile it yourself? Personally, I can report no errors or problems with dual-core CPU's on Windows XP using an AMD X2 4400+ dual-core CPU. Tested with both a 32-bit build and a 64-bit build of SQLite on x64 Windows.
#e8e8bd 1960 active 2006 Sep anonymous Pending 4 2 Issues with .import in sqlite.exe I ran into two possible problems when using the .import operation in sqlite3: - .import seems to be confused by NULLs; in the file NullTest.dat the null is at the end of the line - .import chokes on an empty field when importing into a field of type integer PRIMARY KEY AUTOINCREMENT. For example, a line like: ~2~3~4~5~6 Example: Schema: --Table with autoincrement CREATE TABLE test1( id integer PRIMARY KEY AUTOINCREMENT, c1 integer NULL , c2 integer NULL , c3 text NULL, c4 text NULL, c5 text NULL ); -- Table with no autoincrement field CREATE TABLE test2( id integer NULL, c1 integer NULL , c2 integer NULL , c3 text NULL, c4 text NULL, c5 text NULL ); .separator ~ .import NullTest.dat test1 .import NullTest.dat test2 .import NoNullTest.dat test2 I have short test files that I can email to the person who is looking at this.
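As a client-side workaround sketch (this is not the shell's .import implementation; Python and the in-memory database are used purely for illustration, with the ticket's test2 schema), empty fields can be mapped to NULL before insertion, which sidesteps both reported problems:

```python
import csv
import io
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE test2 (id integer NULL, c1 integer NULL, c2 integer NULL,"
           " c3 text NULL, c4 text NULL, c5 text NULL)")

# Two sample lines: one with an empty leading field (the AUTOINCREMENT
# case) and one with an empty trailing field ("null at end of line").
data = io.StringIO("~2~3~4~5~6\n1~2~3~4~5~\n")
for row in csv.reader(data, delimiter="~"):
    values = [None if field == "" else field for field in row]
    db.execute("INSERT INTO test2 VALUES (?,?,?,?,?,?)", values)

# Integer-affinity columns coerce '1' to 1; empty fields arrive as NULL.
print(db.execute("SELECT id, c5 FROM test2").fetchall())
```

Binding None instead of an empty string is exactly what lets an INTEGER PRIMARY KEY AUTOINCREMENT column auto-assign a value.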
#f2dcdc 1974 active 2006 Sep anonymous Unknown Pending 1 1 column type not consistent in views package require sqlite3 sqlite3 db test.db db eval { create table one ( size FLOAT ); create view two as select size from one; } db eval {insert into one values(50.0)} puts [db eval {select size from one}] puts [db eval {select size from two}] outputs: 50.0 50
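For comparison, a quick check with Python's sqlite3 module (an illustration against current SQLite builds, not the Tcl interface the reporter used) suggests the stored REAL value is preserved through the view in recent versions:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE one ( size FLOAT );
CREATE VIEW two AS SELECT size FROM one;
INSERT INTO one VALUES(50.0);
""")
# Both queries should return the stored REAL value 50.0.
print(db.execute("SELECT size FROM one").fetchone()[0])
print(db.execute("SELECT size FROM two").fetchone()[0])
```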
#f2dcdc 1980 active 2006 Sep drh Pending 1 1 Initializing FTS1 twice causes it to fail. If you try to load the shared module twice, it causes the module to no longer work.
#e8e8bd 1983 active 2006 Sep anonymous Pending 2 2 I/O Error at a size of 4GB and auto_vacuum=1 When I'm building a database with auto_vacuum=1 and page_size=8192, I get an I/O error at a size of about 4GB. All tables are still readable, but then it isn't possible to insert any more data. The table is filled with a column of BLOBs and some columns with numbers. I use the 3.3.7 binary with Windows 2000 Server.
#f2dcdc 1990 active 2006 Sep anonymous Pending 1 1 sqlite3_close doesn't always release the file handle I *think* that sqlite3_close behaves strangely. I use version 3.3.7 on Linux (Fedora Core 5). What I do is open a database and start a transaction in it. Then, without ending the transaction, I open the database again and simply close it. I found out that the inner sqlite3_close returns 0 (SQLITE_OK), but the file handle is not released. So if I do it too many times, I run out of file handles. You are free to ask why I open and close the same database that many times while it is already in a transaction. This is my mistake. Actually, it is already fixed. But I still wonder - shouldn't sqlite3_close return something other than just SQLITE_OK? Especially if the file handle is not released? If it did, I would have found my mistake much earlier. Here is my script that demonstrates it (you can use /usr/sbin/lsof in linux to see how many times the file is opened): #include <stdio.h> #include <stdlib.h> #include <sqlite3.h> int main(int argc, char **argv) { sqlite3* db; sqlite3* db_inner; int rc; int i; system("rm -f open_many_test.db"); rc = sqlite3_open("open_many_test.db", &db); sqlite3_exec(db, "begin", 0, 0, 0); sqlite3_stmt *pStmt; rc = sqlite3_prepare(db, "create table a (id varchar)", -1, &pStmt, 0); rc = sqlite3_step(pStmt); sqlite3_finalize(pStmt); rc = sqlite3_prepare(db, "insert into a values('bla')", -1, &pStmt, 0); rc = sqlite3_step(pStmt); sqlite3_finalize(pStmt); for (i = 0; i < 10000; i++) { rc = sqlite3_open("open_many_test.db", &db_inner); printf("sqlite3_open gives %d\n", rc); rc = sqlite3_close(db_inner); printf("sqlite3_close gives %d\n", rc); } sqlite3_exec(db, "commit", 0, 0, 0); rc = sqlite3_close(db); } _2006-Sep-23 15:29:46 by drh:_ {linebreak} This behavior is intentional. It is there to work around bugs in the design of posix advisory locks. See ticket #561 and check-in [1171].
Under POSIX, if you have the same file open multiple times and you close one of the file descriptors, all locks on that file for all file descriptors are cleared. To prevent this from occurring, SQLite defers closing file descriptors until all locks on the file have been released. One possible work-around would be to reuse file descriptors that are waiting to be closed for the next open, rather than creating a new file descriptor. ---- _2006-Sep-23 15:35:21 by anonymous:_ {linebreak} The inner call to sqlite3_open() should simply fail in that case, rather than set up a condition whereby a file descriptor is leaked (which no one wants). This is unfortunate because sqlite3_open()'s behavior would not be uniform across platforms. ---- _2006-Sep-23 16:43:32 by anonymous:_ {linebreak} SQLite should do a lookup via stat()'s st_dev/st_ino fields prior to open() and, if found to be the same as an already opened database file, it should use the same (refcounted) file descriptor, eliminating the need for open() in this case. ...upon reflection, having two sqlite connections using the same file descriptor would be a bad thing. stat() could be used to decide if a fd pending close() is recyclable, though. ---- _2006-Sep-23 18:17:34 by drh:_ {linebreak} Two points: 1: SQLite does not and has never leaked file descriptors. All file descriptors are eventually closed. The close is merely deferred until the pending transaction COMMITs. 2: I will be taking a very cautious and careful approach toward resolving this issue. The issue itself is minor (it has only just now been reported, but the behavior has been there for 3 years) while the consequences of getting the fix wrong are severe (database corruption). And there are abundant opportunities for getting the fix wrong.
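The POSIX misfeature described above can be demonstrated directly. A minimal sketch (Python with fcntl on a Unix system — an illustration of the locking rule only, not SQLite code): a process takes a lock through one descriptor, closes a second descriptor on the same file, and thereby silently loses the lock.

```python
import fcntl
import os
import tempfile

# Create a scratch file and open it through two independent descriptors.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
fd1 = os.open(tmp.name, os.O_RDWR)
fd2 = os.open(tmp.name, os.O_RDWR)

fcntl.lockf(fd1, fcntl.LOCK_EX)  # take an exclusive lock via fd1
os.close(fd2)                    # closing ANY fd on the file drops ALL of
                                 # this process's locks on it (POSIX rule)

pid = os.fork()
if pid == 0:
    # Child process: if the parent still held its lock, this non-blocking
    # attempt would fail with EAGAIN; success proves the lock is gone.
    fd3 = os.open(tmp.name, os.O_RDWR)
    try:
        fcntl.lockf(fd3, fcntl.LOCK_EX | fcntl.LOCK_NB)
        os._exit(0)
    except OSError:
        os._exit(1)

_, status = os.waitpid(pid, 0)
print("lock silently lost:", os.WEXITSTATUS(status) == 0)
os.unlink(tmp.name)
```

This is exactly why SQLite must defer the close() rather than release the descriptor immediately.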
#f2dcdc 1992 active 2006 Sep anonymous Fixed shess 1 1 FTS1: Problems after dropping utility tables There are problems if FTS1 utility tables are dropped from a database. See the following SQL for details. drop table if exists x; -- Create a FTS1 table. create virtual table x using fts1 ('content'); -- Drop table x_content: Works fine, but should this be allowed? -- The same errors below also show if table x_term is dropped. drop table x_content; -- All attempts to access table x now result in errors, -- including dropping table x. There seems to be no way out -- except recreating the database. All three commands below -- cause the same error, regardless of whether executed in sequence -- or individually: insert into x (content) values ('one two three'); -- Error! delete from x; -- Error! drop table x; -- Error! Added "not exists" to allow dropping an fts table with corrupted backing. Allowing updates to such tables is unlikely to happen (it's not even clear what it would mean, in most cases!).
#cfe8bd 1994 active 2006 Sep anonymous Parser Pending 1 3 Columns from nested joins aren't properly propagated When using this query: _:SELECT * FROM ROLE_ATTRIBUTE INNER JOIN (ROLE INNER JOIN PERSON ON ROLE.PERSON_ID=PERSON.ID) ON ROLE_ATTRIBUTE.PERSON_ID=ROLE.PERSON_ID AND ROLE_ATTRIBUTE.PROJECT_ID=ROLE.PROJECT_ID WHERE ((PERSON.FIRSTNAME = "bob")); the parser fails with an error "no such column: ROLE.PROJECT_ID". It seems that doing an inner join with more than one subexpression doesn't work. _2006-Sep-25 22:41:52 by anonymous:_ {linebreak} Your query will run without the brackets. SELECT * FROM PERSON P INNER JOIN ROLE_ATTRIBUTE RA ON P.ID = RA.PERSON_ID INNER JOIN ROLE R ON RA.PROJECT_ID = R.PROJECT_ID AND P.ID = R.PERSON_ID WHERE P.FIRSTNAME = 'bob'; ---- _2006-Sep-25 23:03:28 by navaraf:_ {linebreak} Hm, you're right. So actually the thing SQLite chokes on is the parenthesis syntax as JOIN parameter. I can try to modify the generator to produce the expanded form, but since the same code is used for MSSQL, MySQL and Oracle I still think it would be handy to allow it in SQLite too. Also it's not my code that generates these horrible expressions and I'd rather try to avoid modifying it. ---- _2006-Sep-26 09:59:13 by anonymous:_ {linebreak} I changed the title to correctly describe the problem. Also I found another thread on the mailing list that describes exactly the same problem: http://marc.10east.com/?t=115378699000001 ---- _2006-Sep-26 11:42:38 by navaraf:_ {linebreak} I believe the "lookupName" function in src/expr.c should do recursion for ephemeral tables found in the pSrcList (at least those that were created as subqueries in the FROM clause of the SELECT statement).
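For the record, recent SQLite versions accept the parenthesized join form and resolve table names from inside the parentheses; a minimal check using Python's sqlite3 with hypothetical stand-in tables (not the reporter's schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE t1(a, b);
CREATE TABLE t2(c, d);
CREATE TABLE t3(e, f);
INSERT INTO t1 VALUES(1, 10);
INSERT INTO t2 VALUES(10, 20);
INSERT INTO t3 VALUES(20, 99);
""")
# Same shape as the failing query: a join wrapped in parentheses, with
# tables inside the parentheses referenced from outside them.
rows = db.execute(
    "SELECT t1.a, t3.f FROM (t1 INNER JOIN t2 ON t1.b = t2.c) "
    "LEFT JOIN t3 ON t2.d = t3.e").fetchall()
print(rows)
```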
#cfe8bd 2010 active 2006 Oct anonymous Fixed_in_3.0 3 3 Timeout ignored in Shared-Cache locking model With shared cache enabled, the busy timeout seems to be ignored. SQLITE_BUSY comes immediately. This occurs at least for locking situations within one shared cache. My server (if I may call the cache sharing thread that way) has its own timeout handling. But I thought that a small timeout in sqlite3 might help to distinguish locks from deadlocks. This was reproduced with both Python wrappers. These just call sqlite3_enable_shared_cache and sqlite3_busy_timeout and then execute BEGIN IMMEDIATE from two connections. _2006-Oct-06 13:56:21 by anonymous:_ {linebreak} Weird, I thought it was my fault, but I see exactly the same behaviour with the C# ADO.NET 2.0 wrapper w/ the shared cache patch.
#e8e8bd 2011 active 2006 Oct anonymous New 3 2 Escaping problem with .mode insert (double apostrophe) select * from messages where message_id="74B23AAF-5FFD6BF2"; 74B23AAF-5FFD6BF2|75|0|0|0|0|Europe talks, acts tough on Iran||http://www.ncr-iran.org/index.php?option=com_content&task=view&id=1052&Itemid=71|1140529235.0|By Gareth HardingThe United Press International, BRUSSELS -- Europeans are supposed to prefer soft to hard power, jaw-jaw to war-war and appeasement to confrontation. In short, in the words of neo-conservative scholar Robert Kagan: \'Americans are from Mars; Europeans are from Venus.\' The ".mode insert / .output" file looks like this. INSERT INTO messages VALUES('74B23AAF-5FFD6BF2',75,0,0,0,0,'Europe talks, acts tough on Iran','','http://www.ncr-iran.org/index.php?option=com_content&task=view&id=1052&Itemid=71',1140529235.0,'By Gareth HardingThe United Press International, BRUSSELS -- Europeans are supposed to prefer soft to hard power, jaw-jaw to war-war and appeasement to confrontation. In short, in the words of neo-conservative scholar Robert Kagan: \''Americans are from Mars; Europeans are from Venus.\'''); Now there are two apostrophes and the escaping is broken.
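For reference, the SQL quoting rule at issue: a literal apostrophe inside a string constant is written by doubling it, and backslash sequences such as \' carry no special meaning to SQLite — any backslashes in the stored data are ordinary characters. A tiny sketch of that convention (a hypothetical helper, not shell code):

```python
def sql_quote(text):
    # SQL escapes an embedded apostrophe by doubling it; backslashes
    # already present in the data pass through unchanged.
    return "'" + text.replace("'", "''") + "'"

print(sql_quote("Kagan: 'Americans are from Mars'"))
```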
#cfe8bd 2012 active 2006 Oct anonymous New 4 3 trigger4.test aborts "make test" on Windows The failure to remove these files causes "make test" to abort without completing remaining tests: trigger4-99.9... Ok ./testfixture: error deleting "trigtest.db": permission denied while executing "file delete -force trigtest.db trigtest.db-journal" (file "test/trigger4.test" line 199) fix: Index: test/trigger4.test =================================================================== RCS file: /sqlite/sqlite/test/trigger4.test,v retrieving revision 1.9 diff -u -3 -p -r1.9 trigger4.test --- test/trigger4.test 4 Oct 2006 11:55:50 -0000 1.9 +++ test/trigger4.test 9 Oct 2006 14:09:07 -0000 @@ -195,6 +195,6 @@ do_test trigger4-7.2 { integrity_check trigger4-99.9 -file delete -force trigtest.db trigtest.db-journal +catch {file delete -force trigtest.db trigtest.db-journal} finish_test Not sure why this ticket was set to Fixed_in_3.0, but I can reproduce the "make test" abort on Windows. ---- _2006-Oct-11 00:27:16 by drh:_ {linebreak} I do not know why the resolution was set to "Fixed_In_3.0" either. It seems to have been set that way by the original submitter. I will fix this eventually, but since it does not represent a real malfunction, it has a lower priority.
#cfe8bd 2013 active 2006 Oct anonymous Pending drh 4 3 Autoincrement increments on failing INSERT OR IGNORE % package require sqlite3 3.3.8 % sqlite3 db "" % db eval "CREATE TABLE test (counter INTEGER PRIMARY KEY AUTOINCREMENT, value text NOT NULL UNIQUE)" % db eval "INSERT INTO test VALUES(4, 'hallo')" % db eval "SELECT * FROM sqlite_sequence" test 4 % db eval "INSERT OR IGNORE INTO test(value) VALUES('hallo')" % db eval "SELECT * FROM sqlite_sequence" test 5 ---> no row has been inserted, but the AUTOINCREMENT counter was incremented % db eval "INSERT OR IGNORE INTO test VALUES(4, 'hallo')" % db eval "SELECT * FROM sqlite_sequence" test 5 ---> correct behavior: no row inserted and no increment This could be a problem if the "INSERT OR IGNORE" fails very often.
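The transcript above is easy to reproduce programmatically; a sketch with Python's sqlite3 (whether sqlite_sequence is bumped by the ignored insert may depend on the SQLite version in use, so only the table contents are asserted here):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE test (counter INTEGER PRIMARY KEY AUTOINCREMENT,"
           " value text NOT NULL UNIQUE)")
db.execute("INSERT INTO test VALUES(4, 'hallo')")
db.execute("INSERT OR IGNORE INTO test(value) VALUES('hallo')")  # ignored
rows = db.execute("SELECT * FROM test").fetchall()
seq = db.execute(
    "SELECT seq FROM sqlite_sequence WHERE name='test'").fetchone()[0]
print(rows, seq)  # only the first insert survives; seq shows the counter
```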
#cfe8bd 2014 active 2006 Oct anonymous Pending anonymous 4 3 Enhancement Req: CREATE [TEMP | TEMPORARY] VIRTUAL TABLE Regarding the experimental VIRTUAL TABLE implementation, I believe it would be of benefit to provide a "temp", or volatile, construct when working with them. -- From a SQL syntax perspective, adding an optional keyword "TEMP" to the declaration: CREATE [TEMP | TEMPORARY] VIRTUAL TABLE. -- From a code perspective, I would envision this to invoke xCreate as it does now, but when the database is closed, the table is automatically dropped like any temp table, and xDestroy invoked rather than xDisconnect. One sticky point I can picture is behavior when multiple opens exist to a single database from the same process space. Since virtual tables are already reference counted (in SQLite 3.3.8), perhaps the reference count could be made to span database handles and be bubbled up to the process level instead. That would allow the table to be CREATEd on one handle, CONNECTed on a second handle, then DISCONNECTed/DESTROYed based on the process-wide reference count. I feel that there are numerous implementation possibilities for this. Having no option to auto-drop a virtual table can lead to stray module references, creating SQLite database files that cannot be properly utilized if the vtable module is not available. Of course this can be implemented by the application calling DROP TABLE on its own, but an embedded solution that takes care of it seems more 'proper' given the thought that goes into SQLite as a whole.
#f2dcdc 2017 active 2006 Oct anonymous Pending 1 1 DROP TABLE fails on FTS1 utility tables with certain OMIT_s defined The following SQL fails when SQLite is compiled with the SQLITE_OMIT_ defines stated below: create virtual table foo using fts1 (content); drop table foo; create virtual table foo using fts1 (content); Cause: The foo_content and foo_term tables are not deleted. To verify, please define these SQLITE_OMIT_s: OPTS += -DSQLITE_OMIT_ALTERTABLE OPTS += -DSQLITE_OMIT_ANALYZE OPTS += -DSQLITE_OMIT_AUTHORIZATION OPTS += -DSQLITE_OMIT_AUTOINCREMENT OPTS += -DSQLITE_OMIT_AUTOVACUUM OPTS += -DSQLITE_OMIT_BETWEEN_OPTIMIZATION OPTS += -DSQLITE_OMIT_BLOB_LITERAL OPTS += -DSQLITE_OMIT_CAST OPTS += -DSQLITE_OMIT_CHECK OPTS += -DSQLITE_OMIT_COMPLETE OPTS += -DSQLITE_OMIT_COMPOUND_SELECT OPTS += -DSQLITE_OMIT_EXPLAIN OPTS += -DSQLITE_OMIT_FLAG_PRAGMAS OPTS += -DSQLITE_OMIT_FOREIGN_KEY OPTS += -DSQLITE_OMIT_GET_TABLE OPTS += -DSQLITE_OMIT_GLOBALRECOVER OPTS += -DSQLITE_OMIT_INTEGRITY_CHECK OPTS += -DSQLITE_OMIT_LIKE_OPTIMIZATION OPTS += -DSQLITE_OMIT_MEMORYDB OPTS += -DSQLITE_OMIT_OR_OPTIMIZATION OPTS += -DSQLITE_OMIT_ORIGIN_NAMES OPTS += -DSQLITE_OMIT_PAGER_PRAGMAS OPTS += -DSQLITE_OMIT_PROGRESS_CALLBACK OPTS += -DSQLITE_OMIT_QUICKBALANCE OPTS += -DSQLITE_OMIT_REINDEX OPTS += -DSQLITE_OMIT_SCHEMA_VERSION_PRAGMAS OPTS += -DSQLITE_OMIT_SHARED_CACHE OPTS += -DSQLITE_OMIT_SUBQUERY OPTS += -DSQLITE_OMIT_TCL_VARIABLE OPTS += -DSQLITE_OMIT_TEMPDB OPTS += -DSQLITE_OMIT_TRACE OPTS += -DSQLITE_OMIT_TRIGGER OPTS += -DSQLITE_OMIT_UTF16 OPTS += -DSQLITE_OMIT_VACUUM OPTS += -DSQLITE_OMIT_VIEW Without the SQLITE_OMIT_s, everything works just fine.
#f2dcdc 2019 active 2006 Oct anonymous Pending 1 1 FTS1: Create table in transaction raises Out of Sequence error (21) This error: SQL error: library routine called out of sequence is caused if the following script is executed by the Windows version of the SQLite3 console application with the fts1.dll extension loaded via .load. If it does not show immediately, it will eventually surface if the script is run multiple times. The cause of the problem seems to be related to the transaction, the create virtual table, as well as the amount of data inserted. Finally, the script is attached.
#f2dcdc 2022 active 2006 Oct anonymous Pending 1 1 .import command is not working I have a windows system running version 3.3.6 and a linux system running 3.3.3. When I run .import catalog.csv TEMPDATA on the windows system, it works fine. On the linux system, no data gets imported. There are no error messages. Is this a known issue in 3.3.3? _2006-Oct-14 01:15:07 by anonymous:_ {linebreak} A sample SQL schema and a 3 line import file demonstrating the problem would be helpful. ---- _2006-Nov-08 15:48:28 by anonymous:_ {linebreak} Schema: CREATE TABLE Catalog ( UPC text , SKU text primary key , DESC text , PACK text , PRICE text , SIZE text ); test.csv contents 00000000103,103,EFFEM CHOCOLATE FUNSIZE 75PPK 1 X1EA,1,$155.94,1 EA 00000000152,414317,CLEARLIGHT SLUSH CUP 16OZ CDL16 1X50EA,1,$5.04,50 EA 00000000152,56880,CLEARLIGHT SLUSH CUP 16OZ CDL16 20X50EA,20,$96.31,50 EA Command that does nothing: .import test.csv Catalog
#f2dcdc 2027 active 2006 Oct anonymous Pending 1 1 FTS: Phrase searches return Offsets for individual phrase words With FTS (one as well as two), phrase searches return offsets for all individual words instead of the phrase as a whole, like in select name, ingredients from recipe where ingredients match '"broccoli cheese"'; Offsets() returns at least two matches for both individual words: *: broccoli *: cheese
#e8e8bd 2028 active 2006 Oct anonymous Pending 4 2 FTS1: UNIQUE() expression and UPDATE command not working I'm working with tables containing around 1.4 million entries (1GB file size). To allow faster fulltext search I tried FTS1. Creating the virtual FTS1 table with the declaration "UNIQUE(code), reference, text, ...", I had the idea of getting faster access to "code", because this entry exists only once in the table. In my current SQLite table "UNIQUE" was a good idea, because "UPDATE"ing entries was much faster than without the "UNIQUE" expression. Unfortunately, the moment I use the "UNIQUE" expression in the fulltext table, the FTS1 table doesn't accept insertion of entries like "INSERT into receipe (code, reference, text) values ('4711', 'RefnotAvailable', 'Test');" So I removed the "UNIQUE" keyword, knowing that later "UPDATE" commands to modify entries will be slower, and built a new table with an additional FTS1 fulltext table. Then I tried to "UPDATE" one entry. At that moment the program stopped working immediately (WIN XP system), meaning the application exited without comment and returned to the desktop. I tried the same in SQLITE3.exe (command line program), but that program also terminated immediately after the UPDATE command (like "UPDATE Volltext SET code = '4710', reference = 'RefChanged', text = 'notext';") That seems to me to be a bug. By the way, creating a fulltext table to search inside my whole database increased the file size a lot (4 times). Maybe that is solved in FTS2? Last wish: Fulltext search like "foo*" to find "fool" and "foot" would be a really great improvement. Best regards Ingo _2006-Oct-23 13:56:59 by anonymous:_ {linebreak} Oops, as I saw today, "DELETE" statements also cause SQLite to stop working (crash). The program returns to the desktop on WIN XP after a DELETE command.
#f2dcdc 2032 active 2006 Oct anonymous Pending 1 1 AV in btree.c running FTS2 compiled with SQLITE_OMIT_SHARED_CACHE If compiled with FTS2 support as well as SQLITE_OMIT_SHARED_CACHE=1, the sqlite console application causes an Access Violation: btree.c, line 3538: Read of address x00000014 if( pCur->idx>=pPage->nCell ){ if the SQL (attached) is executed. I believe that this is a bug in btree.c, for the following reasons: *: The AV does not show if the #ifndef SQLITE_OMIT_SHARED_CACHE (lines 3514 and 3525) are commented out. *: From my reading, all virtual tables use the extension API only and do not access the btree directly. _2006-Oct-25 06:30:43 by shess:_ {linebreak} Note that the attached SQL has exactly 273 INSERT statements. 273==256+16+1, so this is kicking in at a merge point. Don't know how that's relevant, but it seems suspicious. ---- _2006-Oct-25 16:31:34 by anonymous:_ {linebreak} Many thanks for looking into this - it was driving me mad until I came up with the rather simple SQL to reproduce it. I am not sure if that is exactly the number of INSERTs needed to cause the problem, but the crash always happens after the exact same number of inserts. I did not count them but added roughly enough of them to cause the error. Sidenote: I can also make FTS2 crash at another point, which I thought was related to the sizeof() bug I also reported. But apparently it is not. Unfortunately I cannot provide a test case for this since I can reproduce it only after adding some 3000 or so copyrighted documents to an empty database. At the time of the crash the DB is about 250 MB in size. However, I will run a test after the next commits to FTS2. ---- _2006-Oct-26 08:57:41 by anonymous:_ {linebreak} My previous comments from yesterday seem to be invalidated by the latest checkins [3486], [3488] and [3489]. Many thanks for those! However, the problem with =SQLITE_OMIT_SHARED_CACHE= still persists.
#f2dcdc 2037 active 2006 Oct anonymous Pending 1 1 Sqlite3 can't use a datafile in a Chinese path with Win2000 and WindowsXP. Sqlite3 can't use a datafile in a Chinese path with Win2000 and WindowsXP. This is a bug in os_win.c. My friend modified the code as follows, and it works correctly. /* ** Convert a UTF-8 string to UTF-32. Space to hold the returned string ** is obtained from sqliteMalloc. */ static WCHAR *utf8ToUnicode(const char *zFilename){ int nChar; WCHAR *zWideFilename; if( !isNT() ){ return 0; } nChar = MultiByteToWideChar(CP_THREAD_ACP, MB_COMPOSITE, zFilename, -1, NULL, 0); zWideFilename = sqliteMalloc( nChar*sizeof(zWideFilename[0]) ); if( zWideFilename==0 ){ return 0; } nChar = MultiByteToWideChar(CP_THREAD_ACP, MB_COMPOSITE, zFilename, -1, zWideFilename, nChar); if( nChar==0 ){ sqliteFree(zWideFilename); zWideFilename = 0; } return zWideFilename; } /* ** Convert UTF-32 to UTF-8. Space to hold the returned string is ** obtained from sqliteMalloc(). */ static char *unicodeToUtf8(const WCHAR *zWideFilename){ int nByte; char *zFilename; nByte = WideCharToMultiByte(CP_THREAD_ACP, WC_COMPOSITECHECK, zWideFilename, -1, 0, 0, 0, 0); zFilename = sqliteMalloc( nByte ); if( zFilename==0 ){ return 0; } nByte = WideCharToMultiByte(CP_THREAD_ACP, WC_COMPOSITECHECK, zWideFilename, -1, zFilename, nByte, 0, 0); if( nByte == 0 ){ sqliteFree(zFilename); zFilename = 0; } return zFilename; } _2006-Oct-20 10:26:46 by anonymous:_ {linebreak} The proposed fix is completely wrong, but the bug exists nonetheless. The problem is that SQLite expects file names in UTF-8 encoding (and there is probably a bug in your application too, guessing from the proposed fix). While this works fine on NT systems, where the UTF-8 encoding is converted to UTF-16 and passed to system wide-character APIs, the code path for non-NT systems (Win 9x) with ANSI-only APIs doesn't convert the UTF-8 file names into the ANSI code page which is expected by the system APIs.
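The follow-up comment's point — that routing UTF-8 bytes through an ANSI code page conversion is the wrong direction — can be illustrated at the byte level (Python is used purely for the demonstration, and GBK stands in for the Chinese ANSI code page):

```python
name = "数据库.db"                # a file name containing Chinese characters
utf8 = name.encode("utf-8")      # the encoding SQLite expects for file names

# Interpreting those UTF-8 bytes as if they were ANSI/GBK text (the net
# effect of the CP_THREAD_ACP change proposed above) garbles the name:
mangled = utf8.decode("gbk", errors="replace")
print(mangled != name)
```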
#f2dcdc 2043 active 2006 Oct anonymous Pending 1 1 Spaces in view statement If you have a table defined with fields that contain spaces. create table table1 ("field one", "field two", "field three"); Then you do a select select "field one" from table1; That works fine. However if you save it as a view create view view_one as select "field one" from table1; Then if you run a select on the view it fails. select * from view_one;
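A quick check with Python's sqlite3 (illustrative only) suggests that on current SQLite builds the view over a quoted column name works, so the failure described above appears to be version-specific:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript('''
CREATE TABLE table1 ("field one", "field two", "field three");
INSERT INTO table1 VALUES (1, 2, 3);
CREATE VIEW view_one AS SELECT "field one" FROM table1;
''')
# Selecting through the view should return the quoted column's value.
print(db.execute("SELECT * FROM view_one").fetchall())
```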
#f2dcdc 2046 active 2006 Oct anonymous Fixed shess 1 1 FTS1 - Error closing database due to unfinished statements The following script causes an error in SQLite3.exe with FTS1. The error will surface only AFTER the script has finished AND you have typed .exit at the sqlite> prompt to quit SQLite3. The problem seems to be that the SELECT statement is not properly finalized due to an internal error. -- The next line is for Windows only, please adapt it -- if running Linux or use a FTS1-enabled SQLite3 binary. select load_extension ('fts1.dll'); CREATE TABLE Snippets( SnippetID INTEGER PRIMARY KEY, SnippetTitle TEXT, FtsID INTEGER); CREATE VIRTUAL TABLE SnippetsFts USING FTS1 (SnippetTitle, SnippetText); INSERT INTO Snippets (SnippetTitle) VALUES ('one'); INSERT INTO Snippets (SnippetTitle) VALUES ('two'); SELECT SnippetID FROM Snippets JOIN SnippetsFts ON FtsID = +SnippetsFts.RowID WHERE SnippetsFts MATCH 'one'; -- After the script is done, type .exit at the prompt to close the database. -- -- SQLite3 will close, but report the following error before doing so: -- -- "error closing database: Unable to close due to unfinalised statements" -- -- Does this qualify as a bug? The script is also attached to this ticket. _2006-Nov-27 22:58:49 by shess:_ {linebreak} Attached a tighter version of the replication script, generated while isolating what mattered to the bug.
#f2dcdc 2048 active 2006 Oct anonymous Pending drh 1 1 table_info on columns with no default value are returned as string On line 486, noDflt is declared as{linebreak} static const Token noDflt = { (unsigned char*)"", 0, 0 };{linebreak} {linebreak} And on line 493:{linebreak} if( pDflt->z ){{linebreak} sqlite3VdbeOp3(v, OP_String8, 0, 0, (char*)pDflt->z, pDflt->n);{linebreak} }else{{linebreak} sqlite3VdbeAddOp(v, OP_Null, 0, 0);{linebreak} {linebreak} So columns with no default value aren't being set to NULL: pDflt->z points at the empty string "", so the (pDflt->z) condition is non-null and OP_String8 is emitted instead of OP_Null.
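The visible symptom can be checked through PRAGMA table_info; in a correct build, a column without a default reports dflt_value as NULL rather than an empty string (sketched with Python's sqlite3 on a hypothetical table):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (a INTEGER, b INTEGER DEFAULT 7)")
# table_info rows are (cid, name, type, notnull, dflt_value, pk);
# dflt_value should be NULL for a and the literal text '7' for b.
info = {row[1]: row[4] for row in db.execute("PRAGMA table_info(t)")}
print(info)
```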
#f2dcdc 2057 active 2006 Nov anonymous Pending 3 1 full_column_names when 2 or more tables are joined is not working Version 2.8 has the behavior described in the documentation with respect to full_column_names when 2 or more tables are present with (table/alias).*, but 3.3.8 doesn't. Mixing the pragmas "full_column_names" and "short_column_names" can only force full column names always or never. Some programs expect the behavior described in the documentation to keep working. _2006-Nov-08 20:10:13 by anonymous:_ {linebreak} Version 3.3.3 has the same problem as well. ---- _2006-Nov-09 09:34:52 by anonymous:_ {linebreak} Changing line 977 of select.c (3.3.8) from: if( pEList->a[i].zName){ to: if( pEList->a[i].zName && pTabList->nSrc==1){ with pragma short_column_names = 0 makes it behave like the 2.8 series.
#f2dcdc 2059 active 2006 Nov anonymous Pending 1 1 Still missing .DEF file from Windows 3.3.8 source code distribution The file sqlite3.def is missing from the zip archive of sources used to build sqlite3 on Windows. Ticket number 2031 was closed with a remark that this file is generated during the build process. That is true if one is building on Linux with MinGW32 configured as a cross-compiler. If one were building using that method then I assume one would not be downloading the src.zip archive anyway. My impression is that the src.zip archive is prepared once the build has been performed on Linux so Windows developers can directly build sqlite (and the generated files) without need of the other tools that the build process depends on. If this is accurate, then it would be very helpful if the src.zip archive could also include the sqlite3.def file. Without this file it is not possible for Windows developers to create a DLL from the src.zip archive. Thanks _2006-Nov-09 20:05:23 by anonymous:_ {linebreak} Works fine as is with MinGW ./configure && make sqlite3.exe
#f2dcdc 2060 active 2006 Nov anonymous Pending 1 1 Table references enclosed in parentheses become "invisible" Hi, I'm developing an RDF-based system, which translates queries from SPARQL into SQL. While trying to add support for SQLite (MySQL is already supported) I came across the following problem: when table references in a FROM clause are enclosed in parentheses, they cannot be referenced from outside the parenthesized expression. For example, given the table definitions CREATE TABLE IF NOT EXISTS t1 (a, b); CREATE TABLE IF NOT EXISTS t2 (c, d); CREATE TABLE IF NOT EXISTS t3 (e, f); The following queries all fail with "no such column" errors: SELECT t1.a, t3.f FROM (t1 CROSS JOIN t2 ON t1.b = t2.c) LEFT JOIN t3 ON t2.d = t3.e; SELECT t1.a, t3.f FROM t1 CROSS JOIN (t2 LEFT JOIN t3 ON t2.d = t3.e) ON t1.b = t2.c; SELECT t1.a, t2.d FROM (t1), (t2) WHERE t1.b = t2.c; I'm not sure if it is always possible to reformulate the queries in such a way that the extra parentheses aren't necessary, but I suspect that complex expressions involving joins may require them to achieve the intended semantics. In any case, my system would require large changes to be able to get rid of the parenthesized subjoins, so it would be nice if this problem could be fixed. :-) _2006-Nov-10 03:56:46 by anonymous:_ {linebreak} For what it's worth, here are the parse trees of two similar queries ("SELECT t1.a, t2.d FROM t1, t2 WHERE t1.b = t2.c" and "SELECT t1.a, t2.d FROM (t1), (t2) WHERE t1.b = t2.c"), as well as one of the other more complicated join queries previously listed.
SELECT t1.a, t2.d FROM t1, t2 WHERE t1.b = t2.c; Select { op: TK_SELECT isResolved: 1 pSrc: { a[0]: { zName: t1 iCursor: 0 colUsed: 0x00000003 pTab: t1 jointype: JT_INNER } a[1]: { zName: t2 iCursor: 1 colUsed: 0x00000003 pTab: t2 } } pEList: { a[0]: { pExpr: { op: TK_COLUMN span: {t1.a} affinity: SQLITE_AFF_NONE iTable: 0 iColumn: 0 pTab: t1 } } a[1]: { pExpr: { op: TK_COLUMN span: {t2.d} affinity: SQLITE_AFF_NONE iTable: 1 iColumn: 1 pTab: t2 } } } pWhere: { op: TK_EQ span: {t1.b = t2.c} pLeft: { op: TK_COLUMN span: {t1.b} affinity: SQLITE_AFF_NONE iTable: 0 iColumn: 1 pTab: t1 } pRight: { op: TK_COLUMN span: {t2.c} affinity: SQLITE_AFF_NONE iTable: 1 iColumn: 0 pTab: t2 } } } SELECT t1.a, t2.d FROM (t1), (t2) WHERE t1.b = t2.c; Select { op: TK_SELECT isResolved: 1 pSrc: { a[0]: { zAlias: sqlite_subquery_5C0A10_ iCursor: 0 pTab: sqlite_subquery_5C0A10_ pSelect: { op: TK_SELECT isResolved: 1 pSrc: { a[0]: { zName: t1 iCursor: 1 colUsed: 0x00000003 pTab: t1 } } pEList: { a[0]: { zName: a pExpr: { op: TK_COLUMN token: {a} span: {a} affinity: SQLITE_AFF_NONE iTable: 1 iColumn: 0 pTab: t1 } } a[1]: { zName: b pExpr: { op: TK_COLUMN token: {b} span: {b} affinity: SQLITE_AFF_NONE iTable: 1 iColumn: 1 pTab: t1 } } } } jointype: JT_INNER } a[1]: { zAlias: sqlite_subquery_5BE4F0_ iCursor: 2 pTab: sqlite_subquery_5BE4F0_ pSelect: { op: TK_SELECT isResolved: 1 pSrc: { a[0]: { zName: t2 iCursor: 3 colUsed: 0x00000003 pTab: t2 } } pEList: { a[0]: { zName: c pExpr: { op: TK_COLUMN token: {c} span: {c} affinity: SQLITE_AFF_NONE iTable: 3 iColumn: 0 pTab: t2 } } a[1]: { zName: d pExpr: { op: TK_COLUMN token: {d} span: {d} affinity: SQLITE_AFF_NONE iTable: 3 iColumn: 1 pTab: t2 } } } } } } pEList: { a[0]: { pExpr: { op: TK_COLUMN span: {t1.a} flags: EP_Resolved EP_Error iTable: -1 iColumn: 0 } } a[1]: { pExpr: { op: TK_DOT span: {t2.d} pLeft: { op: TK_ID token: {t2} span: {t2} } pRight: { op: TK_ID token: {d} span: {d} } } } } pWhere: { op: TK_EQ span: {t1.b = t2.c} pLeft: { op: 
TK_DOT span: {t1.b} pLeft: { op: TK_ID token: {t1} span: {t1} } pRight: { op: TK_ID token: {b} span: {b} } } pRight: { op: TK_DOT span: {t2.c} pLeft: { op: TK_ID token: {t2} span: {t2} } pRight: { op: TK_ID token: {c} span: {c} } } } } SQL error: no such column: t1.a SELECT t1.a, t3.f FROM (t1 CROSS JOIN t2 ON t1.b = t2.c) LEFT JOIN t3 ON t2.d = t3.e; Select { op: TK_SELECT isResolved: 1 pSrc: { a[0]: { zAlias: sqlite_subquery_5BFA30_ iCursor: 0 pTab: sqlite_subquery_5BFA30_ pSelect: { op: TK_SELECT isResolved: 1 pSrc: { a[0]: { zName: t1 iCursor: 1 colUsed: 0x00000003 pTab: t1 jointype: JT_INNER JT_CROSS } a[1]: { zName: t2 iCursor: 2 colUsed: 0x00000003 pTab: t2 } } pEList: { a[0]: { zName: a pExpr: { op: TK_COLUMN span: {t1.a} affinity: SQLITE_AFF_NONE iTable: 1 iColumn: 0 pTab: t1 } } a[1]: { zName: b pExpr: { op: TK_COLUMN span: {t1.b} affinity: SQLITE_AFF_NONE iTable: 1 iColumn: 1 pTab: t1 } } a[2]: { zName: c pExpr: { op: TK_COLUMN span: {t2.c} affinity: SQLITE_AFF_NONE iTable: 2 iColumn: 0 pTab: t2 } } a[3]: { zName: d pExpr: { op: TK_COLUMN span: {t2.d} affinity: SQLITE_AFF_NONE iTable: 2 iColumn: 1 pTab: t2 } } } pWhere: { op: TK_EQ span: {t1.b = t2.c} flags: EP_FromJoin EP_Resolved iRightJoinTable: 2 pLeft: { op: TK_COLUMN span: {t1.b} affinity: SQLITE_AFF_NONE flags: EP_FromJoin EP_Resolved iTable: 1 iColumn: 1 iRightJoinTable: 2 pTab: t1 } pRight: { op: TK_COLUMN span: {t2.c} affinity: SQLITE_AFF_NONE flags: EP_FromJoin EP_Resolved iTable: 2 iColumn: 0 iRightJoinTable: 2 pTab: t2 } } } jointype: JT_LEFT JT_OUTER } a[1]: { zName: t3 iCursor: 3 pTab: t3 } } pEList: { a[0]: { pExpr: { op: TK_COLUMN span: {t1.a} flags: EP_Resolved EP_Error iTable: -1 iColumn: 0 } } a[1]: { pExpr: { op: TK_DOT span: {t3.f} pLeft: { op: TK_ID token: {t3} span: {t3} } pRight: { op: TK_ID token: {f} span: {f} } } } } pWhere: { op: TK_EQ span: {t2.d = t3.e} flags: EP_FromJoin iRightJoinTable: 3 pLeft: { op: TK_DOT span: {t2.d} flags: EP_FromJoin iRightJoinTable: 3 pLeft: { op: 
TK_ID token: {t2} span: {t2} flags: EP_FromJoin iRightJoinTable: 3 } pRight: { op: TK_ID token: {d} span: {d} flags: EP_FromJoin iRightJoinTable: 3 } } pRight: { op: TK_DOT span: {t3.e} flags: EP_FromJoin iRightJoinTable: 3 pLeft: { op: TK_ID token: {t3} span: {t3} flags: EP_FromJoin iRightJoinTable: 3 } pRight: { op: TK_ID token: {e} span: {e} flags: EP_FromJoin iRightJoinTable: 3 } } } } SQL error: no such column: t1.a ---- _2006-Nov-11 18:29:33 by anonymous:_ {linebreak} The resolving bug appears to be that unique column names or column aliases are searched across all subqueries, but table names and table aliases are searched only at their current SELECT level. With this in mind, here are mechanical workarounds without using column aliases (assumes the column names in all joined tables are unique): SELECT a, f FROM (t1 CROSS JOIN t2 ON t1.b = t2.c) LEFT JOIN t3 ON d = e; SELECT t1.a, f FROM t1 CROSS JOIN (t2 LEFT JOIN t3 ON t2.d = t3.e) ON t1.b = c; SELECT a, d FROM (t1), (t2) WHERE b = c; And here are mechanical workarounds using column aliases (assumes the column names are not unique between tables): SELECT t1.a, t3f FROM t1 CROSS JOIN (select t3.f t3f, t2.c t2c from t2 LEFT JOIN t3 ON t2.d = t3.e) ON t1.b = t2c; SELECT t1a, t3.f FROM (select t1.a t1a, t2.d t2d from t1 CROSS JOIN t2 ON t1.b = t2.c) LEFT JOIN t3 ON t2d = t3.e; SELECT t1a, t2d FROM (select t1.a t1a, t1.b t1b from t1), (select t2.c t2c, t2.d t2d from t2) WHERE t1b = t2c; Notice that t3.f in the second query did not require an alias because the table "t3" was part of its immediate SELECT. You could make an alias for every column just in case; I just wanted to highlight the difference. ---- _2007-Feb-13 15:40:31 by anonymous:_ {linebreak} Fixing this issue would slow down SELECT parsing and column resolution for all queries (more specifically all prepared statements) due to the recursion required for column resolution. It would be easier to change your SQL code generator to accommodate SQLite.
Just make aliases for every table at every subselect level and have the SELECT at any given level only work with the table aliases at that level.
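The column-alias workaround described above can be checked mechanically. Here is a minimal sketch using Python's standard sqlite3 module (the module choice and the sample data are illustrative, not part of the ticket):

```python
import sqlite3

# Because each inner SELECT exports every needed column under an alias,
# the outer query never has to name t1 or t2 directly, which sidesteps
# the table-name visibility problem described in this ticket.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t1 (a, b);
    CREATE TABLE t2 (c, d);
    INSERT INTO t1 VALUES (1, 10);
    INSERT INTO t2 VALUES (10, 99);
""")
rows = con.execute(
    "SELECT t1a, t2d FROM "
    "(SELECT t1.a t1a, t1.b t1b FROM t1), "
    "(SELECT t2.c t2c, t2.d t2d FROM t2) "
    "WHERE t1b = t2c"
).fetchall()
print(rows)  # [(1, 99)]
```

This is exactly the third aliased workaround from the comment above, so it runs regardless of whether the SQLite build at hand resolves table names across parenthesized subjoins.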
#e8e8bd 2066 active 2006 Nov anonymous Pending 2 2 Incorrect error message in the case of ENOLCK If you're trying to open an SQLite database stored on a filesystem that doesn't support locking, you'll get the following error when you try to execute any commands on it: Error: file is encrypted or is not a database If you run sqlite under strace, you see: read(0, ".schema\n.quit\n", 4096) = 14 fcntl64(3, F_SETLK64, {type=F_RDLCK, whence=SEEK_SET, start=1073741824, len=1}, 0xafa5cd70) = 0 fcntl64(3, F_SETLK64, {type=F_RDLCK, whence=SEEK_SET, start=1073741826, len=510}, 0xafa5cd70) = 0 fcntl64(3, F_SETLK64, {type=F_UNLCK, whence=SEEK_SET, start=1073741824, len=1}, 0xafa5cd70) = 0 access("/mnt/www/zzz_old_sites/trac.db-journal", F_OK) = -1 ENOENT (No such file or directory) fstat64(3, {st_mode=S_IFREG|0644, st_size=584704, ...}) = 0 _llseek(3, 0, [0], SEEK_SET) = 0 read(3, "** This file contains an SQLite "..., 1024) = 1024 fcntl64(3, F_SETLK64, {type=F_UNLCK, whence=SEEK_SET, start=0, len=0}, 0xafa5cdd0) = -1 ENOLCK (No locks available) write(2, "Error: file is encrypted or is n"..., 46Error: file is encrypted or is not a database SQLite should really check the exact error code and give a more helpful message (e.g. "Locking not available on this filesystem. Databases may only be stored on filesystems that support locking")
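One way an application could distinguish the ENOLCK case up front is to probe the target filesystem directly before opening a database there. A hedged sketch in Python (the helper name and the probing approach are my own, not part of SQLite's API):

```python
import errno
import fcntl
import os
import tempfile

def fs_supports_locking(directory):
    """Return False if POSIX advisory locks fail with ENOLCK in `directory`.

    ENOLCK (e.g. an NFS mount with no lock daemon) is the condition behind
    this ticket's misleading "file is encrypted or is not a database" error.
    """
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        try:
            fcntl.lockf(fd, fcntl.LOCK_SH)  # same fcntl locks SQLite uses
            fcntl.lockf(fd, fcntl.LOCK_UN)
        except OSError as e:
            if e.errno == errno.ENOLCK:
                return False
            raise
        return True
    finally:
        os.close(fd)
        os.unlink(path)

ok = fs_supports_locking(tempfile.gettempdir())
print(ok)  # True on a normal local filesystem
```

On a filesystem where this probe returns False, the application can report a locking problem directly instead of letting the open fail with the generic message above.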
#cfe8bd 2075 active 2006 Nov anonymous Pending 3 3 Improve VACUUM speed and INDEX page locality In testing several 100 Meg - 1 Gig databases (including the Monotone OpenEmbedded database) I found that changing the order of the SQL commands executed by VACUUM to create indexes after table inserts results in 15% faster VACUUM times, and up to 25% faster cold-file-cache queries when indexes are used. This patch effectively makes the pages of each index contiguous in the database file after a VACUUM, as opposed to being scattered throughout the pages of the table related to the index. Your results may vary, but I think this is a very safe change that can potentially boost average database performance. Index: src/vacuum.c =================================================================== RCS file: /sqlite/sqlite/src/vacuum.c,v retrieving revision 1.65 diff -u -3 -p -r1.65 vacuum.c --- src/vacuum.c 18 Nov 2006 20:20:22 -0000 1.65 +++ src/vacuum.c 20 Nov 2006 21:09:27 -0000 @@ -143,14 +143,6 @@ int sqlite3RunVacuum(char **pzErrMsg, sq " AND rootpage>0" ); if( rc!=SQLITE_OK ) goto end_of_vacuum; - rc = execExecSql(db, - "SELECT 'CREATE INDEX vacuum_db.' || substr(sql,14,100000000)" - " FROM sqlite_master WHERE sql LIKE 'CREATE INDEX %' "); - if( rc!=SQLITE_OK ) goto end_of_vacuum; - rc = execExecSql(db, - "SELECT 'CREATE UNIQUE INDEX vacuum_db.' || substr(sql,21,100000000) " - " FROM sqlite_master WHERE sql LIKE 'CREATE UNIQUE INDEX %'"); - if( rc!=SQLITE_OK ) goto end_of_vacuum; /* Loop through the tables in the main database. For each, do ** an "INSERT INTO vacuum_db.xxx SELECT * FROM xxx;" to copy @@ -162,10 +154,22 @@ int sqlite3RunVacuum(char **pzErrMsg, sq "FROM sqlite_master " "WHERE type = 'table' AND name!='sqlite_sequence' " " AND rootpage>0" - ); if( rc!=SQLITE_OK ) goto end_of_vacuum; + /* Create indexes after the table inserts so that their pages + ** will be contiguous resulting in (hopefully) fewer disk seeks. 
+ */ + rc = execExecSql(db, + "SELECT 'CREATE UNIQUE INDEX vacuum_db.' || substr(sql,21,100000000) " + " FROM sqlite_master WHERE sql LIKE 'CREATE UNIQUE INDEX %'"); + if( rc!=SQLITE_OK ) goto end_of_vacuum; + + rc = execExecSql(db, + "SELECT 'CREATE INDEX vacuum_db.' || substr(sql,14,100000000)" + " FROM sqlite_master WHERE sql LIKE 'CREATE INDEX %' "); + if( rc!=SQLITE_OK ) goto end_of_vacuum; + /* Copy over the sequence table */ rc = execExecSql(db, _2007-Feb-11 00:49:50 by drh:_ {linebreak} My alternative plan is to modify insert.c so that it recognizes the special case of INSERT INTO table1 SELECT * FROM table2; when table1 and table2 have identical schemas, including all the same indices. When this special case is recognized, the generated bytecode will first transfer all table entries from table2 to table1, using a row-by-row transfer without decoding each row into its constituent columns. Then it will do the same for each index. There will be two benefits here. First, when the above construct occurs during the course of a VACUUM, the table and each index, including intrinsic indices associated with UNIQUE and PRIMARY KEY constraints, will be transferred separately so that all of their pages will be adjacent in the database file. The second benefit will occur when trying to load large quantities of data into an indexed table. Loading indexed data into a very large table is currently slow because the index entries are scattered haphazardly around in the file. But if data is first loaded into a smaller temporary table with the same schema, it can then be transferred to the main table using an INSERT statement such as the above in what amounts to a merge operation. ---- _2007-Feb-11 06:58:36 by anonymous:_ {linebreak} There's no question that your proposal will greatly improve VACUUM speed, which relies on the "INSERT INTO table1 SELECT * from table2" construct.
But would it be possible for you to relax the restriction on having identical indexes for table1 and table2? For that matter it would be nice if table2 could be any subselect or view. Then "REPLACE INTO table1 SELECT ...anything..." could also be optimized. Since you can detect that SQLite is doing a bulk insert anyway, it could generate code to make a temporary staging table with automatically generated identical indexes to table1 which could be periodically merged into table1 and truncated every X rows. X could be either set via pragma or be a function of the size of the page cache. The temporary staging table would be dropped after the bulk INSERT INTO ... SELECT. Every user inserting large volumes of data would have to perform this procedure anyway. Manually recreating all the indexes for a given temporary table to match the original table and performing the looping logic is cumbersome and error-prone. It would be very convenient if SQLite were to do it on the user's behalf. This scheme could only work if there are no triggers on table1, of course. ---- _2007-Feb-11 09:16:25 by drh:_ {linebreak} My initial enhancement does nothing to preclude the more aggressive enhancement described by anonymous. In order to avoid subtle bugs, and in view of my limited time available to work on this, I think it best to take the more conservative approach first and defer the more elaborate optimization suggested by anonymous until later. ---- _2007-Feb-11 13:54:34 by anonymous:_ {linebreak} It should be possible to identify contiguous blocks of individual "INSERT INTO table1 VALUES(...)" statements to the same table within a large transaction and perform the same proposed optimization as with "INSERT INTO table1 SELECT ...". This would require higher-level coordination by the parser.
Anytime a read operation (SELECT, UPDATE) occurs on such a table marked for bulk INSERT within the large transaction, its temp staging table could be merged into the INSERT destination table and the staging table truncated. The process could be repeated for the remainder of the transaction. Such an optimization would be a huge benefit to SQLite users since they would not need to know the idiosyncrasies of the implementation of "INSERT INTO table1 SELECT ..." in order to have efficient table and index population. Alternatively, if you wish to avoid the complexity of re-assembling and staging individual INSERT statements, it might be a good opportunity for SQLite to support the multi-row variant of the INSERT command: INSERT INTO table1 (a,b,c) VALUES(1,2,3), (4,5,6), (7,8,9); which is essentially a transform of: CREATE TEMP TABLE table1_staging (a,b,c); INSERT INTO table1_staging VALUES(1,2,3); INSERT INTO table1_staging VALUES(4,5,6); INSERT INTO table1_staging VALUES(7,8,9); INSERT INTO table1 SELECT * FROM table1_staging; -- TRUNCATE OR DROP table1_staging as necessary which could use the same bulk staging optimization. ---- _2007-Feb-13 02:42:41 by anonymous:_ {linebreak} Any harm in checking in the simple patch above for the 3.3.13 release? ---- _2007-Feb-13 12:51:47 by drh:_ {linebreak} I have a much better fix standing by that I intend to check in as soon as I get 3.3.13 out the door. I don't want this in 3.3.13 for stability reasons. ---- _2007-Feb-18 23:07:08 by anonymous:_ {linebreak} Some related analysis and an .import patch using a :memory: staging table with the "INSERT INTO table1 SELECT FROM table2" construct can be found here: http://www.mail-archive.com/sqlite-users%40sqlite.org/msg22143.html
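The staging-table procedure discussed in this thread can be sketched with Python's sqlite3 module (table names and sample data are illustrative; note that the transfer optimization drh describes only applies when the two schemas are identical, which this sketch does not attempt to guarantee):

```python
import sqlite3

# The staging pattern discussed above: bulk rows go into an unindexed
# temp table first, then move in one INSERT ... SELECT, so the target's
# index is populated in a single pass rather than row by row.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (a, b, c)")
con.execute("CREATE INDEX table1_a ON table1(a)")

con.execute("CREATE TEMP TABLE table1_staging (a, b, c)")
con.executemany(
    "INSERT INTO table1_staging VALUES (?, ?, ?)",
    [(1, 2, 3), (4, 5, 6), (7, 8, 9)],
)
con.execute("INSERT INTO table1 SELECT * FROM table1_staging")
con.execute("DROP TABLE table1_staging")  # staging table is discarded

count = con.execute("SELECT count(*) FROM table1").fetchone()[0]
print(count)  # 3
```

For very large loads the INSERT/DROP pair would be repeated every X rows, as the comment above proposes, to bound the size of the staging table.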
#f2dcdc 2076 active 2006 Nov anonymous Pending a.rottmann 1 1 % exists as value in varchar Abnormal abend of a client application (C++) when SQLite returns a stream of data containing a "%" value. Is % a special character? _2006-Nov-21 14:14:25 by anonymous:_ {linebreak} % is not a special character. Can you post a small C program demonstrating the problem?
#f2dcdc 2077 active 2006 Nov anonymous Pending 2 1 Problems with using ASCII symbols 0x80 - 0xFF in database path Platform: Windows.
The SQLite library and executable don't see database files placed in folders whose names use ASCII symbols with codes 0x80-0xFF. Those symbols represent language-specific characters (for example, Russian). As a result, a database cannot be placed in a folder with a Russian name. This bug is "unstable": it doesn't appear in all cases. Below are logs from my experiments with this problem. In all cases the path I requested exists, and the database file is placed there. I have noticed that the problem depends on the lengths of the path and the file name. =========================================================
// creating test database
E:\!DISTRIB\sqlite-3_3_7>sqlite3.exe test.sqb
SQLite version 3.3.7
Enter ".help" for instructions
sqlite> create table a(id int);
sqlite> insert into a values (1);
sqlite> ^C
E:\!DISTRIB\sqlite-3_3_7>copy test.sqb e:\test.sqb
Скопировано файлов: 1. //This means that 1 file was copied
E:\!DISTRIB\sqlite-3_3_7>sqlite3 e:\test.sqb
SQLite version 3.3.7
Enter ".help" for instructions
sqlite> select * from a;
1
sqlite> ^C
// Works!
E:\!DISTRIB\sqlite-3_3_7>mkdir e:\'/
//Using ASCII symbol "'/" (0x8D) to represent a Cyrillic letter, which can be entered on the command line using the Alt+141 key combination
E:\!DISTRIB\sqlite-3_3_7>copy test.sqb E:\'/\test.sqb
Скопировано файлов: 1.
E:\!DISTRIB\sqlite-3_3_7>sqlite3 e:\'/\test.sqb
SQLite version 3.3.7
Enter ".help" for instructions
sqlite> select * from a;
1
sqlite> ^C
// That is works too!
E:\!DISTRIB\sqlite-3_3_7>mkdir E:\'/\1
E:\!DISTRIB\sqlite-3_3_7>copy test.sqb E:\'/\1\test.sqb
Скопировано файлов: 1.
E:\!DISTRIB\sqlite-3_3_7>sqlite3 E:\'/\1\test.sqb
Unable to open database "E:\(T\1\test.sqb": unable to open database file
// Doesn't work, and writes the wrong symbol "(T" in place of "'/"! I've noticed that if we convert the symbol "'/" from DOS encoding to Windows encoding and then write it back in DOS encoding, we get "(T".
E:\!DISTRIB\sqlite-3_3_7>copy test.sqb E:\'/\tst.sqb
Скопировано файлов: 1.
E:\!DISTRIB\sqlite-3_3_7>sqlite3 E:\'/\tst.sqb
SQLite version 3.3.7
Enter ".help" for instructions
sqlite> select * from a;
SQL error: no such table: a
sqlite> ^C
// It seems to work (I don't get an error), but it doesn't see the tables! =(
=================================
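For comparison, here is a sketch showing that the same layout works when the filename reaches SQLite as UTF-8 rather than ANSI-codepage bytes. Python's sqlite3 module passes UTF-8 paths through; the Cyrillic directory name is an arbitrary example chosen by the editor, not from the ticket:

```python
import os
import sqlite3
import tempfile

# sqlite3_open() expects UTF-8 filenames; the transcript's failures come
# from ANSI-codepage bytes on Windows. Through a UTF-8-clean path the
# nested non-ASCII directory works. "папка" is Russian for "folder";
# the layout mirrors the E:\...\1\test.sqb case above.
with tempfile.TemporaryDirectory() as base:
    nested = os.path.join(base, "папка", "1")
    os.makedirs(nested)
    db_path = os.path.join(nested, "test.sqb")

    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE a(id INT)")
    con.execute("INSERT INTO a VALUES (1)")
    con.commit()
    con.close()

    rows = sqlite3.connect(db_path).execute("SELECT * FROM a").fetchall()
print(rows)  # [(1,)]
```

On Windows the equivalent fix is to convert the path to UTF-8 (or use the UTF-16 open entry point) instead of passing codepage-encoded bytes.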
#f2dcdc 2081 active 2006 Nov anonymous Pending doughenry 1 1 sqlite3_column_decltype throws an exception if the selection is grouped If I "group by" a selection over several columns, I can't find out the original declared type of these columns using sqlite3_column_decltype(..). An exception is thrown. _2006-Nov-23 18:37:47 by anonymous:_ {linebreak} You also get no decl type from a subselect. This goes to the typeless nature of SQLite - I don't think a type can even be derived in this case.
#cfe8bd 2084 active 2006 Nov anonymous Pending 4 3 Add API function mapping column decl string to SQLite type This is an API feature request. It would be nice to be able to obtain the SQLite type (e.g. SQLITE_INTEGER) from the declared column type string as returned by sqlite3_column_decltype. This was discussed briefly on the mailing list here: http://marc.10east.com/?l=sqlite-users&m=116422872301957&w=2 The function I have in mind is:

    int sqlite3_decltype_to_type(const char *decl){
      Token decl_token;
      char aff_type;
      int col_type;
      decl_token.z = decl;
      if( decl_token.z ){
        decl_token.n = strlen(decl_token.z);
        aff_type = sqlite3AffinityType(&decl_token);
        switch( aff_type ){
          case SQLITE_AFF_INTEGER: col_type = SQLITE_INTEGER; break;
          case SQLITE_AFF_NUMERIC: /* falls through */
          case SQLITE_AFF_REAL:    col_type = SQLITE_FLOAT; break;
          case SQLITE_AFF_TEXT:    col_type = SQLITE_TEXT; break;
          case SQLITE_AFF_NONE:    col_type = SQLITE_BLOB; break;
          default:                 col_type = 0; /* unknown */ break;
        }
      }
      return col_type;
    }

If this seems agreeable, I would be willing to put together a real patch. However, I would need some guidance on where it should go. I'm not sure what should happen when no type can be determined. _2006-Nov-26 22:32:45 by anonymous:_ {linebreak} According to the comment above the function sqlite3AffinityType: "If none of the substrings in the above table are found, SQLITE_AFF_NUMERIC is returned". The default condition in sqlite3_decltype_to_type will not be reached. ---- _2006-Nov-26 23:04:23 by anonymous:_ {linebreak} Thanks for pointing to that comment. Looks like SQLITE_AFF_NUMERIC is, for these purposes, unknown.
So the case statement could be:

    switch( aff_type ){
      case SQLITE_AFF_INTEGER: col_type = SQLITE_INTEGER; break;
      case SQLITE_AFF_REAL:    col_type = SQLITE_FLOAT; break;
      case SQLITE_AFF_TEXT:    col_type = SQLITE_TEXT; break;
      case SQLITE_AFF_NONE:    col_type = SQLITE_BLOB; break;
      case SQLITE_AFF_NUMERIC: /* falls through */
      default:                 col_type = 0; /* unknown */ break;
    }

---- _2006-Nov-27 02:43:06 by anonymous:_ {linebreak} Your first function was correct; it just had some unreachable code. There's no unknown affinity: in the absence of a match the affinity is assumed to be numeric:

    int sqlite3_decltype_to_type(const char *decl){
      int type = SQLITE_FLOAT;
      if( decl ){
        Token token;
        token.z = decl;
        token.n = strlen(token.z);
        switch( sqlite3AffinityType(&token) ){
          case SQLITE_AFF_INTEGER: type = SQLITE_INTEGER; break;
          case SQLITE_AFF_TEXT:    type = SQLITE_TEXT; break;
          case SQLITE_AFF_NONE:    type = SQLITE_BLOB; break;
          default: break;
        }
      }
      return type;
    }
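The mapping the thread converges on can be expressed outside C as well. A hedged Python sketch of the same rules (the substring tests follow SQLite's documented type-affinity algorithm, and the constant values match sqlite3.h; the function name is illustrative):

```python
# Sketch of the proposed decltype -> fundamental-type mapping. The
# substring tests mirror SQLite's documented affinity keyword rules.
SQLITE_INTEGER, SQLITE_FLOAT, SQLITE_TEXT, SQLITE_BLOB = 1, 2, 3, 4

def decltype_to_type(decl):
    if decl is None:
        return SQLITE_FLOAT               # no declaration: NUMERIC affinity
    d = decl.upper()
    if "INT" in d:
        return SQLITE_INTEGER             # INTEGER affinity
    if "CHAR" in d or "CLOB" in d or "TEXT" in d:
        return SQLITE_TEXT                # TEXT affinity
    if "BLOB" in d:
        return SQLITE_BLOB                # NONE affinity
    return SQLITE_FLOAT                   # REAL and NUMERIC both map to FLOAT

print(decltype_to_type("VARCHAR(20)"), decltype_to_type("BIGINT"))  # 3 1
```

As the last comment notes, NUMERIC is the fall-through affinity, so there is no distinct "unknown" result; anything unmatched lands on SQLITE_FLOAT.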
#cfe8bd 2089 active 2006 Nov anonymous Pending 3 3 Decouple sqlite_int64 from other 64bit datatypes Currently sqlite3 makes the (valid) assumption that sqlite_int64 (or i64, u64) is 64 bits wide, matches Tcl_WideInt, and has the same data size (and byte order) as double. The following patch removes these assumptions and allows sqlite_int64 to be any integral type, e.g. a 32-bit int (with the limitations of the reduced datatype size). The use case is systems that do not support 64-bit integers (e.g. lack of compiler support, embedded systems), databases of small data size, and systems without large-file support. The patch allows compiling with -DSQLITE_INT64_TYPE=int -DSQLITE_32BIT_ROWID for such a system. _2006-Nov-29 01:13:07 by anonymous:_ {linebreak} Hm, now I wanted to add the patch file, but I can't get the formatting right without editing the file and removing empty lines. How am I supposed to add a patch file (created with diff -ru)?
#cfe8bd 2093 active 2006 Dec anonymous Pending 2 3 sqlite3_vtab_cursor doesn't have errMsg The sqlite3_vtab_cursor structure doesn't have a zErrMsg pointer. Only the containing vtable does. This means that operations on cursor objects that have an error have to set the error on the vtable not the cursor. Unfortunately this means that there are race conditions since two different cursors on the same vtable could have errors at the same time. If the cursors are in different threads then a crash or worse can happen.
#cfe8bd 2096 active 2006 Dec anonymous Pending 3 3 ATTACH DATABASE returns SQLITE_ERROR when database is locked From an email sent to DRH: I am working on a problem surrounding the inability to ATTACH to a database file. The error text being returned is "database is locked", which suggests SQLITE_BUSY; however, the error code being returned by sqlite3_exec is SQLITE_ERROR. Is sqlite3_exec wrong in returning SQLITE_ERROR rather than SQLITE_BUSY? I have some nagging feeling that I determined or read that the attachFunc function does not return a truly relevant status code, but I can't see why offhand, nor can I find any evidence to support that theory. If sqlite3_exec is doing the right thing, however, then the question becomes one of identifying when to retry the ATTACH statement; we're currently keying off SQLITE_BUSY or SQLITE_LOCKED, as appropriate, and I'd rather not be trying to trap errors based on error text.
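Until the result code is corrected, a caller has little choice but to match the error text, which is exactly what the reporter wants to avoid. A hedged sketch of such a retry loop in Python (the helper and the retry policy are illustrative, not an SQLite API):

```python
import sqlite3
import time

def attach_with_retry(con, path, name, attempts=5, delay=0.1):
    """Retry ATTACH while the target database is locked."""
    for i in range(attempts):
        try:
            # The schema name cannot be bound, so it is interpolated here;
            # the database path can be a normal bound parameter.
            con.execute(f"ATTACH DATABASE ? AS {name}", (path,))
            return True
        except sqlite3.OperationalError as e:
            # Per this ticket, the failure arrives as a generic error whose
            # text is "database is locked", so the message text is the only
            # portable signal to key the retry on.
            if "locked" not in str(e) or i == attempts - 1:
                raise
            time.sleep(delay)

ok = attach_with_retry(sqlite3.connect(":memory:"), ":memory:", "aux")
print(ok)  # True
```

If ATTACH returned SQLITE_BUSY as the reporter expects, the string match could be replaced by a check on the result code.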
#f2dcdc 2100 active 2006 Dec anonymous Pending 1 1 Fixes for SQL lower() and upper() As acknowledged in the documentation, the SQL lower() and upper() functions might not work correctly on UTF-8 characters. This bug might show up if a country-specific locale is used instead of the standard C locale. Under certain circumstances, SQL lower() or upper() can even corrupt the UTF-8 string into invalid UTF-8 if the tolower() and toupper() C functions convert character values starting from 0x80. Below I propose implementations of lowerFunc() and upperFunc() which work correctly with UTF-8 characters, regardless of the implementation of the C library tolower() and toupper() functions. If these C functions are implemented to support high ASCII or even Unicode case conversion, the new SQL lower() and upper() will support them as well. The proposed C implementation applies a technique also found in sqlite3VdbeMemTranslate() in utf.c and makes use of some macros contained in that unit. To avoid duplicating existing code, it could make sense to move lowerFunc() and upperFunc() to utf.c, just as it has been done with sqlite3utf16Substr(). Finally, here is the code: /* ** Implementation of the upper() and lower() SQL functions. */ static void upperFunc(sqlite3_context *context, int argc, sqlite3_value **argv){ const unsigned char *zIn, *zInTerm; unsigned char *z, *zOut; int c, l; if( argc<1 || SQLITE_NULL==sqlite3_value_type(argv[0]) ) return; zIn = sqlite3_value_text(argv[0]); if( zIn==0 ) return; l = sqlite3_value_bytes(argv[0]); zInTerm = &zIn[l]; /* When converting case, the maximum growth results from ** translating a 1-byte UTF-8 character to a 4-byte UTF-8 character.
*/ zOut = sqliteMalloc( l * 4 ); z = zOut; while( zIn #ifdef SQLITE_UNICODE_UPPERLOWERFUNCS #define WCHAR_T_SIZE sizeof(wchar_t) #if (WCHAR_T_SIZE == 2) #define MAXUPPERLOWERCHAR_AVAIL 0x0000ffff #else // (WCHAR_T_SIZE == 4) #define MAXUPPERLOWERCHAR_AVAIL 0x7fffffff #endif // (WCHAR_T_SIZE == 2) #define TOLOWERSQLFUNC(c) unicode_tolower #define TOUPPERSQLFUNC(c) unicode_toupper int unicode_tolower(const int c) { wchar_t buff [2]; if (c > MAXUPPERLOWERCHAR_AVAIL) return c; buff[0] = (wchar_t) c; buff[1] = 0; _wcslwr(buff); return (int) buff[0]; } int unicode_toupper(const int c) { wchar_t buff [2]; if (c > MAXUPPERLOWERCHAR_AVAIL) return c; buff[0] = (wchar_t) c; buff[1] = 0; _wcsupr(buff); return (int) buff[0]; } #else // SQLITE_UNICODE_UPPERLOWERFUNCS #define TOLOWERSQLFUNC(c) (c > 255 ? c : tolower(c)) #define TOUPPERSQLFUNC(c) (c > 255 ? c : toupper(c)) #endif // SQLITE_UNICODE_UPPERLOWERFUNCS /* ** Implementation of the upper() and lower() SQL functions. */ static void upperFunc(sqlite3_context *context, int argc, sqlite3_value **argv){ const unsigned char *zIn, *zInTerm; unsigned char *z, *zOut; int c, l; if( argc<1 || SQLITE_NULL==sqlite3_value_type(argv[0]) ) return; zIn = sqlite3_value_text(argv[0]); if( zIn==0 ) return; l = sqlite3_value_bytes(argv[0]); zInTerm = &zIn[l]; /* When converting case, the maximum growth results from ** translating a 1-byte UTF-8 character to a 4-byte UTF-8 character. */ zOut = sqliteMalloc( l * 4 ); z = zOut; while( zIn
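The corruption mode this ticket describes is easy to reproduce outside SQLite. A Python demonstration (the naive_byte_upper helper is a stand-in for a locale-dependent C toupper() applied byte by byte; it is not SQLite code):

```python
# A byte-wise toupper() that also maps bytes 0xE0-0xFE (roughly what a
# Latin-1 locale's toupper() does) corrupts UTF-8 multibyte sequences,
# because it rewrites lead and continuation bytes of encoded characters.
def naive_byte_upper(data: bytes) -> bytes:
    return bytes(
        (b - 0x20) if (0x61 <= b <= 0x7A or 0xE0 <= b <= 0xFE) else b
        for b in data
    )

raw = "price: 10€".encode("utf-8")   # '€' is the 3-byte sequence E2 82 AC

mangled = naive_byte_upper(raw)      # lead byte E2 becomes C2: invalid UTF-8
try:
    mangled.decode("utf-8")
    still_valid = True
except UnicodeDecodeError:
    still_valid = False

# Decoding first and using Unicode-aware casing never breaks the encoding,
# which is the behavior the proposed lowerFunc()/upperFunc() aim for.
safe = raw.decode("utf-8").upper().encode("utf-8")
print(still_valid, safe.decode("utf-8"))  # False PRICE: 10€
```

This is why the proposed implementation decodes each UTF-8 character before case conversion instead of passing raw bytes to the C library functions.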