I actually don't use SQL for querying anything (as I said, I'm more interested in seeing how much values deviate from the mean, etc.), but this looks cool.
One question about your project: why not simply use awk to filter the data, pipe it into a SQLite db, and then query it with "real" SQL? I'm curious why you chose to write a parser for SQL rather than a lint-style tool for CSV.
Because a lot of what I do when profiling systems involves parsing, joining and filtering massive (constantly changing) log files. Rather than pumping them into SQL, why not just leverage the unix command line tools? Basically all aql does is build the awk command for you, but with the veneer of SQL instead of the AWKward (haha) syntax where I have to refer to fields by number, etc.
Also, writing the SQL "parser" was entirely trivial: I just match keywords, split the query into a hash, and build the awk command accordingly. I dug that approach because it means the SQL clauses can be in any order, commas are just whitespace, and I can easily extend the syntax (e.g. "fields", "separator", "columns" for setting widths) without much work.
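To give a feel for the "match keywords, split into a hash, build the command" approach, here's a minimal Python sketch. To be clear, this isn't aql's actual code or query syntax; the query form, clause names and helpers below are hypothetical, just to show how little parsing the approach needs:

    #!/usr/bin/env python3
    # Minimal sketch of a keyword-matching "parser" that emits an awk command.
    # Hypothetical query form: select 1,3 from access.log where $3 > 500
    import shlex

    KEYWORDS = {"select", "from", "where"}

    def parse(query: str) -> dict:
        """Split the query into a keyword -> token-list hash; clause order doesn't matter."""
        clauses, current = {}, None
        # Commas are treated as whitespace, as described above.
        for token in query.replace(",", " ").split():
            if token.lower() in KEYWORDS:
                current = token.lower()
                clauses[current] = []
            elif current:
                clauses[current].append(token)
        return clauses

    def build_awk(clauses: dict) -> str:
        """Assemble an awk one-liner from the parsed clauses."""
        fields = ", ".join(f"${f}" for f in clauses.get("select", ["0"]))
        condition = " ".join(clauses.get("where", [])) or "1"
        filename = " ".join(clauses.get("from", []))
        return f"awk '{condition} {{ print {fields} }}' {shlex.quote(filename)}"

    if __name__ == "__main__":
        print(build_awk(parse("select 1,3 from access.log where $3 > 500")))
        # -> awk '$3 > 500 { print $1, $3 }' access.log

Extending it to new clauses is then just another entry in the keyword set plus a line in the command builder, which is why adding things like "separator" or column widths is cheap.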