dlt is a recently released Python library for data extraction and loading, the EL in ETL. At dltHub we are big fans of optimising things and integrating those optimisations into our toolkit so that others can re-use them.
Speed boosts and schema from Arrow; dlt for loading with schema evolution
In this example, we combine ConnectorX + Arrow + dlt to extract data and load it into a strongly typed environment 30x faster than a classic data transfer via SQLAlchemy.
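As a minimal sketch of the approach (the connection string, query, destination, and table names below are illustrative assumptions, not from the original post): ConnectorX materialises the query result directly as an Arrow table, and dlt loads it with schema inference and evolution.

```python
import connectorx as cx
import dlt

# Placeholder connection string and query -- adjust for your source.
CONN = "postgresql://user:password@localhost:5432/mydb"
QUERY = "SELECT * FROM orders"

# ConnectorX reads the result set straight into an Arrow table,
# skipping row-by-row Python object creation entirely.
arrow_table = cx.read_sql(CONN, QUERY, return_type="arrow")

# dlt infers the schema from the Arrow table and evolves the destination
# schema if columns are added or types change on later runs.
pipeline = dlt.pipeline(
    pipeline_name="fast_extract",
    destination="duckdb",
    dataset_name="orders_data",
)
info = pipeline.run(arrow_table, table_name="orders")
print(info)
```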
Result: Much faster, but mind the memory usage
In this example we see a 30x overall speedup on extraction and normalisation with Arrow: the process took 16 seconds with Arrow versus 8 minutes with SQLAlchemy + dlt's JSON normaliser for 10M rows.
The output of both methods is the same (parquet files or loaded data), with schema evolution. However, with Arrow we are not iterating row by row, so we cannot apply the optimisations that are possible when streaming from SQLAlchemy, such as microbatching to keep memory use low.
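One hedged way to bound memory without row-by-row streaming is to chunk the extraction yourself, for example by primary-key ranges, and run each Arrow chunk through the pipeline separately. The `id` column, chunk size, and table names here are assumptions for illustration:

```python
import connectorx as cx
import dlt

CONN = "postgresql://user:password@localhost:5432/mydb"
CHUNK = 1_000_000  # rows per extraction batch (tuning assumption)

pipeline = dlt.pipeline(
    pipeline_name="chunked_extract",
    destination="duckdb",
    dataset_name="orders_data",
)

# Assumes an integer primary key `id`; each query pulls one bounded slice,
# so only one Arrow chunk is resident in memory at a time.
max_id = cx.read_sql(
    CONN, "SELECT MAX(id) AS m FROM orders", return_type="arrow"
).column("m")[0].as_py()

for lo in range(0, max_id + 1, CHUNK):
    query = f"SELECT * FROM orders WHERE id >= {lo} AND id < {lo + CHUNK}"
    chunk = cx.read_sql(CONN, query, return_type="arrow")
    if chunk.num_rows:
        pipeline.run(chunk, table_name="orders")
```

The trade-off is extra queries against the source; each slice still benefits from Arrow's columnar transfer, while peak memory is capped by the chunk size rather than the full result set.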
Read more about it, plus implementation docs, on our blog.