I just used SpatiaLite to create a tool for managing large stacks of images at the pixel level; i.e., each image gets its own database file, and each pixel is stored as an XYZ coordinate plus an RGB value.
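To give a rough idea of what I mean (this is a toy sketch, not my actual code), the per-image schema looks something like the following, assuming a Python/sqlite3 front end with the mod_spatialite extension available; the table and column names are made up for illustration:

    import sqlite3

    def create_image_db(path):
        """One database per image: a pixel table with RGB columns plus a
        3D point geometry holding the pixel's XYZ position."""
        conn = sqlite3.connect(path)
        conn.enable_load_extension(True)
        conn.load_extension("mod_spatialite")          # load SpatiaLite
        conn.execute("SELECT InitSpatialMetaData(1)")  # spatial metadata tables
        conn.execute(
            "CREATE TABLE pixels (id INTEGER PRIMARY KEY, r INTEGER, g INTEGER, b INTEGER)"
        )
        # SRID -1 = undefined/Cartesian here; adjust to whatever your SpatiaLite version expects
        conn.execute("SELECT AddGeometryColumn('pixels', 'pos', -1, 'POINT', 'XYZ')")
        conn.commit()
        return conn

    def insert_pixel(conn, x, y, z, r, g, b):
        conn.execute(
            "INSERT INTO pixels (r, g, b, pos) VALUES (?, ?, ?, MakePointZ(?, ?, ?, -1))",
            (r, g, b, x, y, z),
        )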
I can use SpatiaLite's built-in functions to rotate, translate, scale, and otherwise transform images, and plain SQL to pull out sub-volumes of the stack, edit them, and composite them.
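Again as a toy sketch rather than my real code, the transforms and sub-volume pulls look roughly like this; it leans on the SpatiaLite functions ST_Translate, RotateCoords, and ST_X/ST_Y/ST_Z, so double-check the names and signatures against the docs for your version:

    def translate_image(conn, dx, dy, dz):
        # Shift every pixel of one image by (dx, dy, dz)
        conn.execute("UPDATE pixels SET pos = ST_Translate(pos, ?, ?, ?)", (dx, dy, dz))

    def rotate_image(conn, degrees):
        # 2D rotation of each slice about the origin; Z is left alone
        conn.execute("UPDATE pixels SET pos = RotateCoords(pos, ?)", (degrees,))

    def read_subvolume(conn, x0, x1, y0, y1, z0, z1):
        # Pull an axis-aligned box of pixels out of the stack
        return conn.execute(
            """SELECT ST_X(pos), ST_Y(pos), ST_Z(pos), r, g, b
                 FROM pixels
                WHERE ST_X(pos) BETWEEN ? AND ?
                  AND ST_Y(pos) BETWEEN ? AND ?
                  AND ST_Z(pos) BETWEEN ? AND ?""",
            (x0, x1, y0, y1, z0, z1),
        ).fetchall()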
My biggest stack is almost 2,000 images, over 90 GB of uncompressed data. Working with sub-volumes is pretty snappy up to a few hundred megabytes, which is good enough for my purposes. For bigger jobs it should be possible to parallelize some tasks for better performance.
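Since every image lives in its own file, the obvious parallelization is one worker per image database. Something along these lines (an untested sketch, with a hypothetical stack/ directory of per-image .sqlite files):

    import glob
    import sqlite3
    from multiprocessing import Pool

    def shift_one(db_path):
        # Each worker owns one image database, so there's no write-lock contention
        conn = sqlite3.connect(db_path)
        conn.enable_load_extension(True)
        conn.load_extension("mod_spatialite")
        # Example transform: shift the whole image 10 units along X
        conn.execute("UPDATE pixels SET pos = ST_Translate(pos, 10, 0, 0)")
        conn.commit()
        conn.close()
        return db_path

    if __name__ == "__main__":
        with Pool() as pool:
            pool.map(shift_one, sorted(glob.glob("stack/*.sqlite")))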
Not entirely on-topic, but the takeaway is I'm thumbs-up for using SQLite to process image data.
Do you do anything in particular to improve access speed to that image data? I've been working with big matrices that get spit out of the traffic assignment software we use for travel modeling. Every vendor seems to have their own proprietary format. We ended up using HDF5 as a container due to its somewhat awesome speed characteristics. I'd initially tried SQLite for that matrix data, but couldn't squeeze the same kind of performance out of it. That could just have been my own brain fail, though.
Nothing beyond the standard advice for SQLite performance. I found that keeping the data in individual per-image database files worked a lot better than trying to create one big database for the whole stack. With that done, I expect the big performance win to come from parallelizing the image transformation tasks.
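By "standard advice" I mean the usual SQLite knobs, something like the following; the values are purely illustrative, so pick your own based on the workload and how much durability you need:

    import sqlite3

    def tune(conn):
        conn.execute("PRAGMA journal_mode = WAL")      # readers don't block the writer
        conn.execute("PRAGMA synchronous = NORMAL")    # fewer fsyncs; OFF if the data is re-generable
        conn.execute("PRAGMA cache_size = -262144")    # ~256 MB page cache (negative value = KiB)
        conn.execute("PRAGMA temp_store = MEMORY")     # temp tables/indices in RAM
        conn.execute("PRAGMA mmap_size = 1073741824")  # up to 1 GB memory-mapped I/O

    # ...plus the usual: wrap bulk inserts in one transaction, use executemany(),
    # and create indexes after the bulk load rather than before.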