Didik Tri Susanto

Originally published at blog.didiktrisusanto.dev

Large CSV Processing Using Go

The idea is:

Given a large dummy CSV (1 million rows) containing sample customer data, do some processing with the goals below:

  • Extract the data from the CSV
  • Count how many rows there are
  • Group the total customers for each city
  • Sort cities by customer count from highest to lowest
  • Calculate the processing time

A sample customers CSV can be downloaded here: https://github.com/datablist/sample-csv-files

Load And Extract Data

It turns out Go has a standard library for CSV processing. We don't need a 3rd party dependency to solve our problem, which is nice. So the solution is pretty straightforward:

  // open the file to a reader interface
  c, err := os.Open("../data/customers-1000000.csv")
  if err != nil {
    log.Fatal(err)
  }
  defer c.Close()

  // load the file reader into a csv reader
  // FieldsPerRecord is set to -1 to skip field-count checking
  r := csv.NewReader(c)
  r.FieldsPerRecord = -1
  // note: ReuseRecord only affects Read(); ReadAll() always allocates
  // fresh records, so this setting has no effect in this code path
  r.ReuseRecord = true
  records, err := r.ReadAll()
  if err != nil {
    log.Fatal(err)
  }
  1. Open the file from the given path
  2. Load the opened file into a csv reader
  3. Hold all the extracted CSV records / rows in the records slice for later processing

FieldsPerRecord is set to -1 because I want to skip field-count checking on each row, since the number of columns can differ between formats.
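
To see why (a small standalone snippet, separate from our program, which also imports strings): with the default FieldsPerRecord of 0, the reader locks onto the column count of the first row and fails on any row that differs.

r := csv.NewReader(strings.NewReader("a,b\nc,d,e\n"))
_, err := r.ReadAll()
// err is non-nil: record on line 2 has 3 fields, but the first record had 2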

At this stage, we are already able to load and extract all the data from the CSV, ready for the next processing stage. We can also find out how many rows the CSV has by using the function len(records).
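
For example, printing the row count is then just:

fmt.Println("total rows:", len(records)) // this count includes the header row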

Grouping Total Customers by City

Now we can iterate over the records and create a map of city name to total customers, which looks like this:

["Jakarta": 10, "Bandung": 200, ...]

City data in a CSV row is located at index 6 (the 7th column), and the code will look like this:

  // create a hashmap to count total customers per city from the csv data rows
  // the hashmap will look like ["city name": 100, ...]
  m := map[string]int{}
  for i, record := range records {
    // skip header row
    if i == 0 {
      continue
    }
    if _, found := m[record[6]]; found {
      m[record[6]]++
    } else {
      m[record[6]] = 1
    }
  }

If the city does not exist in the map yet, create a new entry and set its customer total to 1. Otherwise, just increment the total number for the given city.

Now we have the map m containing each city and how many customers it has. At this point, we have already solved the problem of grouping customers by city.
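
As a side note: a Go map returns the zero value for a missing key, so the whole if/else above could be collapsed into a single increment:

m[record[6]]++ // a missing city reads as 0, so new cities start at 1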

Sorting by Highest Total Customers

I tried to find a function in the standard library to sort the map, but unfortunately I couldn't find one. Sorting is only possible for a slice, because we can rearrange the data order based on index positions. So yeah, let's make a slice from our current map.

type CityDistribution struct {
  City          string
  CustomerCount int
}

// convert to slice first for sorting purposes
dc := []CityDistribution{}
for k, v := range m {
  dc = append(dc, CityDistribution{City: k, CustomerCount: v})
}

Now, how do we sort it by CustomerCount from highest to lowest? A common textbook algorithm for this is bubble sort. Although it's not the fastest, it can do the job.

Bubble Sort is the simplest sorting algorithm that works by repeatedly swapping the adjacent elements if they are in the wrong order. This algorithm is not suitable for large data sets as its average and worst-case time complexity is quite high.

Reference: https://www.geeksforgeeks.org/bubble-sort-algorithm/

Applied to our slice, it loops over the data, checks the value at the next index, and swaps the elements if the current value is smaller than the next one, since we want descending order. You can check the algorithm details on the reference website.

Now our sorting process looks like this:

// use bubble sort
dcCount := len(dc)
for i := 0; i < dcCount; i++ {
  swapped := false
  for j := 0; j < dcCount-i-1; j++ {
    if dc[j].CustomerCount < dc[j+1].CustomerCount {
      // idiomatic Go swap, no temp variable needed
      dc[j], dc[j+1] = dc[j+1], dc[j]
      swapped = true
    }
  }

  if !swapped {
    break
  }
}

By the end of the loop, the slice holds the sorted data.
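
For what it's worth, while the standard library can't sort a map, it can sort a slice: once we have the dc slice, sort.Slice from the sort package produces the same order. A minimal alternative sketch:

// alternative to the manual bubble sort, using the standard library
sort.Slice(dc, func(i, j int) bool {
  return dc[i].CustomerCount > dc[j].CustomerCount // highest first
})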

Calculate Processing Time

Calculating the processing time is quite simple: we take a timestamp before and after executing the main process of the program and calculate the difference. In Go, the approach is simple enough:

func main() {
    start := time.Now() // start timing for processing time
    // the main process
    // ...
    duration := time.Since(start)
    fmt.Println("processing time (ms): ", duration.Milliseconds())
}

The Result

Run the program with the command:

go run main.go

The output will be the row count, the sorted data, and the processing time. Something like this:

[Image: CSV processing result]

As expected of Go's performance, it handled the 1 million row CSV in under 1 second!

All the completed code is already published in my GitHub repository:

https://github.com/didikz/csv-processing/tree/main/golang

Lessons Learned

  • CSV processing is already available in Go's standard library; there's no need for a 3rd party lib
  • Processing the data is quite easy. The challenge was figuring out how to sort the data, because it has to be done manually

What Comes to Mind?

I was thinking that my current solution might be optimized further, because I loop over all the extracted CSV records to build the map, and if we check the ReadAll() source, it also has a loop that creates the slice from the given file reader. With this, 1 million rows could produce 2 loops over 1 million records, which is not nice.

I thought that if I could read the data directly from the file reader, only 1 loop would be needed, because I could build the map straight from it. The records slice would only matter if it were used elsewhere, which is not the case here.

I haven't had time to figure it out yet, but I also thought of some downsides of doing it manually (a rough sketch of the idea follows the list below):

  • It would probably need to handle more errors from the parsing process
  • I'm not sure how much it would reduce the processing time, so it's hard to tell whether the workaround would be worth it
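
Here is that rough single-loop sketch (untested, assuming the same column layout with city at index 6; it needs the io package imported as well):

// sketch: build the map while reading, without materializing all records first
c, err := os.Open("../data/customers-1000000.csv")
if err != nil {
  log.Fatal(err)
}
defer c.Close()

r := csv.NewReader(c)
r.FieldsPerRecord = -1
r.ReuseRecord = true // actually effective here, since Read() is called directly

m := map[string]int{}
rows := 0
for {
  record, err := r.Read()
  if err == io.EOF {
    break
  }
  if err != nil {
    log.Fatal(err)
  }
  rows++
  if rows == 1 {
    continue // skip the header row
  }
  m[record[6]]++
}
// rows-1 data rows were processed; m holds the per-city totals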

Happy Coding!
