Small. Fast. Reliable.
Choose any three.
*** 109,116 ****
  insert into the db? This would slow down the process of inserting records extremely.
  ----
  
- ----
- 
  **Database size and performance on a CD**
  
  _Vis Naicker on 2007-05-104:_
--- 109,114 ----
***************
*** 118,120 ****
--- 116,127 ----
  I struggled to get good performance out of a **CD** with 200K files. It reads the blobs pretty well, but for the same recordset where I kept the data separate from the blob it was extremely slow, taking 1 min+ until the CD was cached by Windows during the query. Caching to the drive before querying, OTOH, was pretty good.
  
  It is quite embarrassing at the client's end, where sometimes one database seems to hang the machine while another takes 3 seconds or so. I think my solution will have to be to merge the blob with the field data; there it is admirable - it performs as well as the HD/network-stored db. One hint I have used is to change the cache to 16K from the default 2K.
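+ 
+ A minimal sketch of that cache hint, assuming "the cache" refers to the
+ cache_size pragma (whose default is 2000 pages) issued from the sqlite3
+ shell:
+ 
+    PRAGMA cache_size=16000;  -- about 16K pages instead of the default 2000
+ 
+ Note that cache_size only applies to the current connection; the
+ default_cache_size pragma can be used to make the setting persistent in
+ the database file.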
+ 
+ *: Try running VACUUM on the database using SQLite 3.3.17 prior
+    to burning the database onto CD.
+ 
+ *: Try rebuilding the database with the
+    {link: /pragma.html#pragma_page_size page_size pragma}
+    set to something larger than 1024.  4096 or 8192 might
+    work better.  Be sure to VACUUM again after rebuilding
+    the database before burning it onto the CD.