Lately I’ve been working quite a bit with DuckDB, and one of the things that interests me most is understanding how to optimize performance depending on the file format you’re working with.
Parquet, compressed CSV, and uncompressed CSV each behave very differently, and the performance gaps can be dramatic.
Let’s review the key optimizations to keep in mind when working with different file formats in DuckDB.
Parquet: Direct Query or Load First?
DuckDB has advanced Parquet support, including the ability to query Parquet files directly without loading them into the database. But when should you do one or the other?
In Favor of Direct Parquet Query
Basic statistics available: Parquet uses columnar storage and stores basic statistics such as zonemaps (min/max values per row group). These allow DuckDB to apply optimizations like projection pushdown and filter pushdown, so workloads that combine projection, filtering, and aggregation work very well on Parquet.
Storage considerations: Loading data from Parquet into DuckDB requires roughly as much additional disk space as the Parquet files themselves, since the data then exists in two places. If disk space is tight, querying Parquet directly avoids that duplication.
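As a sketch of what direct querying looks like (the file and column names here are hypothetical), DuckDB lets you treat a Parquet file as if it were a table. Only the referenced columns are read, and the zonemaps let DuckDB skip row groups that cannot match the filter:

```sql
-- 'taxi.parquet' and its columns are made up for illustration.
-- Projection pushdown: only passenger_count, fare_amount, and
-- pickup_date are read from disk.
-- Filter pushdown: row groups whose zonemap shows no dates in
-- range are skipped entirely.
SELECT passenger_count, AVG(fare_amount)
FROM 'taxi.parquet'
WHERE pickup_date >= DATE '2024-01-01'
GROUP BY passenger_count;
```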
Against Direct Parquet Query
Lack of advanced statistics: DuckDB’s native database format maintains HyperLogLog statistics for estimating the number of distinct values, which Parquet does not have. These make cardinality estimates more accurate, which matters most in queries with many joins.
Tip: If DuckDB produces a suboptimal join order on Parquet files, try loading the Parquet into DuckDB tables. The improved statistics will help obtain a better join order.
Repeated queries: If you plan to run multiple queries over the same dataset, it’s worth loading the data into DuckDB first. Each query runs somewhat faster, and the one-time load cost is amortized across all of them.
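Loading is a one-liner. A minimal sketch, again with hypothetical file and table names — during the `CREATE TABLE`, DuckDB builds its richer statistics (including the HyperLogLog distinct-count estimates mentioned above):

```sql
-- One-time load: materialize the Parquet data as a DuckDB table
CREATE TABLE trips AS
    SELECT * FROM 'taxi.parquet';

-- Subsequent queries hit the table and benefit from full statistics
SELECT passenger_count, COUNT(*)
FROM trips
GROUP BY passenger_count;
```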
Row Group Size in Parquet
DuckDB performs best with Parquet files whose row groups contain 100K–1M rows each.
Why? Because DuckDB parallelizes over row groups: if a Parquet file has a single giant row group, only one thread can process it.
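When writing Parquet from DuckDB, you can control this directly with the `ROW_GROUP_SIZE` option of `COPY` (the table and file names below are hypothetical):

```sql
-- Write row groups of ~500K rows, within the recommended 100K-1M range,
-- so readers can parallelize across them
COPY trips TO 'trips.parquet'
    (FORMAT parquet, ROW_GROUP_SIZE 500000);
```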
Recommendations Summary
Parquet
- ✅ Direct query for projection/filter/aggregation workloads
- ✅ Load into DuckDB for many joins or repeated queries
- ✅ Use row groups of 100K-1M rows
- ✅ Keep files between 100 MB - 10 GB
- ✅ Prefer Snappy/LZ4/zstd over gzip
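The compression codec is also set through `COPY`. A sketch with a hypothetical table name — Snappy is DuckDB’s default for Parquet, and zstd is a good choice when you want better compression ratios without gzip’s decode cost:

```sql
-- zstd: better compression than Snappy, much faster decompression than gzip
COPY trips TO 'trips.parquet'
    (FORMAT parquet, COMPRESSION zstd);
```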
CSV
- ✅ Read .csv.gz directly (DO NOT uncompress first)
- ✅ Disable sniffer for many small files with same schema
- ✅ Consider converting to Parquet for analytical workloads
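The first two CSV recommendations can be sketched as follows (file names, column names, and types are hypothetical). DuckDB decompresses `.csv.gz` on the fly, and for a directory of small, identically shaped files you can skip the sniffer by declaring the schema up front:

```sql
-- Read the compressed file directly; no need to gunzip first
SELECT * FROM read_csv('events.csv.gz');

-- Many small files, same schema: disable auto-detection and
-- declare the columns explicitly to avoid re-sniffing each file
SELECT *
FROM read_csv('events/*.csv',
              header = true,
              auto_detect = false,
              columns = {'event_id': 'BIGINT',
                         'ts': 'TIMESTAMP',
                         'payload': 'VARCHAR'});
```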
Conclusion
What has struck me most working with DuckDB is that traditional intuitions about data processing sometimes don’t apply.
The .csv.gz case is the perfect example. You’d expect that decompressing first would be faster, but in practice reading the compressed file directly wins: decompressing on the fly costs less than the extra disk I/O of writing out and re-reading the uncompressed file.
And that’s key when working with large data. Load and process times matter, and understanding these optimizations can make a substantial difference in your data pipeline performance.