I'd love to see how it compares to MonetDB. I've used MonetDB successfully to explore smallish data sets (a few hundred million rows) that neither Postgres nor SQLite could query at a useful speed on my laptop (at the time an x220 with a spinning disk).
The primary difference for analytic workloads is the processing model. MonetDB uses a column-at-a-time processing model, in which each operator materializes its full intermediate result in memory. This significantly increases memory usage and causes poor performance when the data exceeds memory. DuckDB uses a vectorized execution engine, which materializes only small chunks of each column at a time. This improves cache locality, decreases memory usage, and also improves parallel execution capabilities (although parallel execution is still a work in progress).
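To make the difference concrete, here's a toy sketch (plain Python, not DuckDB's or MonetDB's actual internals) of the query `SELECT sum(a + b) WHERE a > 0` under both models — note how the column-at-a-time version builds several full-length intermediate columns, while the vectorized version only ever holds one small chunk:

```python
def column_at_a_time(a, b):
    # MonetDB-style: every operator produces a full-length intermediate column.
    mask = [x > 0 for x in a]                     # full boolean column
    sel_a = [x for x, m in zip(a, mask) if m]     # full filtered column for a
    sel_b = [y for y, m in zip(b, mask) if m]     # full filtered column for b
    sums = [x + y for x, y in zip(sel_a, sel_b)]  # full sum column
    return sum(sums)

def vectorized(a, b, chunk=1024):
    # DuckDB-style: operators pass small vectors through the whole pipeline,
    # so intermediates stay cache-sized regardless of table size.
    total = 0
    for i in range(0, len(a), chunk):
        ca, cb = a[i:i + chunk], b[i:i + chunk]
        total += sum(x + y for x, y in zip(ca, cb) if x > 0)
    return total

a = list(range(-5, 5)) * 100
b = list(range(10)) * 100
assert column_at_a_time(a, b) == vectorized(a, b)
```

With a billion rows, the first version's intermediates are themselves billion-element columns; the second never allocates more than one chunk's worth of working data.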
DuckDB is developed by the same database research group at CWI that developed MonetDB, so they have applied a lot of the lessons they learned from MonetDB.
https://www.monetdb.org/Home