I need to process CSV files that are large in both size and count (a couple of MB each, with more than 100,000 records per file). Each record in the CSV has to be processed sequentially.
Many relations come into play, so I use memoization heavily to reduce DB calls.
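A minimal sketch of that memoization pattern, assuming a per-record lookup function (the table, names, and dictionary stand-in for the database are illustrative, not from the post):

```python
import functools

# Illustrative stand-in for a database table; a real app would query the DB.
DB = {"alice": 1, "bob": 2}
CALLS = {"count": 0}  # counts simulated DB hits


@functools.lru_cache(maxsize=None)
def user_id(name):
    """Memoized lookup: each distinct key hits the 'database' only once."""
    CALLS["count"] += 1
    return DB.get(name)


# Processing 100k+ CSV rows that repeat the same keys now costs only
# one DB call per distinct key, not one per row.
for row_name in ["alice", "bob", "alice", "alice", "bob"]:
    user_id(row_name)

print(CALLS["count"])  # 2 distinct keys -> 2 simulated DB calls
```

One caveat with an unbounded cache (`maxsize=None`): on very large or high-cardinality key sets it can grow without limit, so a bounded LRU size may be safer.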
So in such cases, I cannot use the technique mentioned in point no 2.
Load raw-ish data into a work table and then process in the database?
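A minimal sketch of that work-table approach, using SQLite for illustration (table and column names are hypothetical, and set-based SQL only applies where the per-record ordering constraint allows it):

```python
import csv
import io
import sqlite3

# Stand-in for a raw CSV file; in practice this would be opened from disk.
raw_csv = io.StringIO("sku,qty\nA1,3\nB2,5\nA1,2\n")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (sku TEXT, qty INTEGER)")

# Bulk-load the raw-ish rows into the work table in one round trip.
rows = [(r["sku"], int(r["qty"])) for r in csv.DictReader(raw_csv)]
conn.executemany("INSERT INTO staging VALUES (?, ?)", rows)

# Process inside the database with set-based SQL instead of
# row-by-row application code.
totals = dict(conn.execute(
    "SELECT sku, SUM(qty) FROM staging GROUP BY sku ORDER BY sku"))
print(totals)  # {'A1': 5, 'B2': 5}
```

The bulk insert plus a set-based query replaces many per-row round trips, which is usually where the time goes with files this size.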