DEV Community

Chidozie C. Okafor

Posted on • Originally published at doziestar.Medium on

Mastering Memory: The Art of Memory Management and Garbage Collection in Go

Memory management in the hectic world of programming frequently resembles running a busy, always-open restaurant. Every diner (a variable) needs a table of their own (memory space), and the house must keep service fast while wasting no seats. In this dynamic environment, the Go language has distinguished itself as a top chef, renowned for its simplicity, efficiency, and strong support for concurrent programming.

But how does Go run this busy restaurant of memory? How does it make sure that every diner is seated promptly and well taken care of, and that no table stays occupied by a patron who left long ago?

In this article, we examine the memory management and garbage collection techniques used by the Go programming language. We lift the curtain on the strategies Go employs to better serve its users, resulting in code that runs quicker and uses less memory.

Grab your apron and let’s explore Go’s memory management and garbage collection, using examples from the restaurant industry as our guide. This journey will not only give you a better understanding of Go, but also useful optimization techniques so that your own Go code runs as efficiently as possible.

Memory Management in Go

  1. Memory Allocation: Imagine a restaurant filled with tables (memory) and customers (variables). When a guest (variable) arrives, the host (compiler) assigns them a table (memory address). Go allocates memory in two places:

a. Stack Allocation: Fast allocation and deallocation, ideal for short-lived variables. Similar to customers who only stay briefly at the restaurant.

func stackAlloc() int {
    x := 42 // x is allocated on the stack
    return x
}

Here, x is a local variable allocated on the stack; it is automatically deallocated when the function returns.

b. Heap Allocation: Longer-lasting, but slower allocation and deallocation. Suitable for long-lived variables or large objects. Comparable to customers staying for extended periods at the restaurant.

type myStruct struct {
    data []int
}

func heapAlloc() *myStruct {
    obj := &myStruct{data: make([]int, 100)} // obj is allocated on the heap
    return obj
}

obj is allocated on the heap because it "escapes" its scope, as it is still accessible after the function returns.

  2. Escape Analysis: The Go compiler performs escape analysis to decide whether each variable should be allocated on the stack or the heap. A variable is allocated on the heap if it “escapes” its scope, that is, if it can still be accessed after its function returns. In our hypothetical restaurant, this is analogous to patrons who decide to stay longer, necessitating a more permanent seating arrangement.
package main

import "fmt"

// This function returns an integer pointer.
// The integer i is created within the function scope,
// but because we're returning the address of i, it "escapes" from the function.
// The Go compiler will decide to put this on the heap.
func escapeAnalysis() *int {
    i := 10 // i is initially created here, within the function's scope
    return &i // The address of i is returned here, which means it "escapes" from the function
}

// This function also returns an integer, but the integer does not escape
// This integer will be stored on the stack as it doesn't need to be accessed outside the function.
func noEscapeAnalysis() int {
    j := 20 // j is created here, within the function's scope
    return j // The value of j is returned here, but it doesn't escape from the function
}

func main() {
    // Call both functions and print the results
    fmt.Println(*escapeAnalysis()) // Output: 10
    fmt.Println(noEscapeAnalysis()) // Output: 20
}

In the escapeAnalysis() function, the variable i "escapes" because its address is returned by the function. This means that the variable i needs to be available even after the function has finished executing. Therefore, it will be stored on the heap.

In contrast, in the noEscapeAnalysis() function, the variable j does not escape because only its value is returned. Therefore, it can be safely disposed of after the function finishes, and it will be stored on the stack.

The Go compiler performs escape analysis automatically, so you don’t need to manage stack and heap placement yourself. (You can inspect its decisions by compiling with go build -gcflags="-m", which reports which variables escape to the heap.) This simplifies memory management and helps prevent leaks and dangling-pointer bugs.

  3. Memory Management Techniques: Go employs several memory management techniques, such as:

a. Value Semantics: By default, Go passes variables by value: a function receives a copy rather than a reference to the original. This keeps memory easier to reason about and reduces the chance of accidental side effects. It is comparable to giving each customer their own table in a restaurant, which lessens the possibility of mix-ups.

package main

import "fmt"

// incrementByValue takes an integer as a parameter and increments it.
// Since Go uses value semantics by default, the function receives a copy of the original value.
// Changing the value of i inside this function does not affect the original value.
func incrementByValue(i int) {
    i++ // increment i
    fmt.Println("Inside incrementByValue, i =", i) 
}

// incrementByReference takes a pointer to an integer as a parameter and increments the integer.
// In this case, the function is dealing with a reference to the original value,
// so changing the value of *p will affect the original value.
func incrementByReference(p *int) {
    (*p)++ // increment the value that p points to
    fmt.Println("Inside incrementByReference, *p =", *p) 
}

func main() {
    var x int = 10
    fmt.Println("Before incrementByValue, x =", x) // Output: Before incrementByValue, x = 10
    incrementByValue(x)
    fmt.Println("After incrementByValue, x =", x) // Output: After incrementByValue, x = 10

    var y int = 10
    fmt.Println("\nBefore incrementByReference, y =", y) // Output: Before incrementByReference, y = 10
    incrementByReference(&y)
    fmt.Println("After incrementByReference, y =", y) // Output: After incrementByReference, y = 11
}

In the incrementByValue function, the variable i is a copy of the argument passed in, so incrementing i does not affect the original value. This is passing by value, the default in Go.

In the incrementByReference function, by contrast, the parameter p is a pointer to the original variable, so incrementing the value p points to does change the original. Passing a pointer like this is often loosely called passing by reference (strictly speaking, Go always passes the pointer itself by value).

In general, Go favors value semantics because they simplify memory management and minimize the risk of unexpected side effects, but it supports pointer passing when you genuinely need to share or mutate data.

b. Slices and Maps: Go encourages slices and maps over fixed-size arrays and manual pointer manipulation because they make memory easier to manage: they grow and shrink on demand and come with built-in operations for common tasks. This allows a more effective use of resources, similar to a restaurant offering a buffet (slices/maps) rather than à la carte (arrays/pointers).

package main

import (
    "fmt"
)

func main() {
    // SLICES
    // Creating a slice with initial values
    slice := []string{"Table1", "Table2", "Table3"}
    fmt.Println("Initial slice:", slice) // Output: Initial slice: [Table1 Table2 Table3]

    // Adding an element to the slice (like adding a table in the restaurant)
    slice = append(slice, "Table4")
    fmt.Println("Slice after append:", slice) // Output: Slice after append: [Table1 Table2 Table3 Table4]

    // Removing the first element from the slice (like freeing up the first table in the restaurant)
    slice = slice[1:]
    fmt.Println("Slice after removing first element:", slice) // Output: Slice after removing first element: [Table2 Table3 Table4]

    // MAPS
    // Creating a map to represent tables in the restaurant and their status
    tables := map[string]string{
        "Table1": "occupied",
        "Table2": "free",
        "Table3": "free",
    }
    fmt.Println("\nInitial map:", tables) // Output: Initial map: map[Table1:occupied Table2:free Table3:free]

    // Adding an entry to the map (like adding a table in the restaurant)
    tables["Table4"] = "free"
    fmt.Println("Map after adding a table:", tables) // Output: Map after adding a table: map[Table1:occupied Table2:free Table3:free Table4:free]

    // Changing an entry in the map (like changing the status of a table in the restaurant)
    tables["Table2"] = "occupied"
    fmt.Println("Map after changing status of Table2:", tables) // Output: Map after changing status of Table2: map[Table1:occupied Table2:occupied Table3:free Table4:free]

    // Removing an entry from the map (like removing a table from the restaurant)
    delete(tables, "Table1")
    fmt.Println("Map after removing Table1:", tables) // Output: Map after removing Table1: map[Table2:occupied Table3:free Table4:free]
}

Here we use a slice to manage the restaurant’s list of tables: adding and removing tables takes nothing more than append and slicing. A map tracks each table’s status, and map operations make it simple to add, update, and delete entries.

This demonstrates the advantage of slices and maps over fixed arrays and raw pointers: they are flexible, dynamic data structures with built-in operations for common tasks, growing and shrinking as needed, which makes both the code and its memory use easier to manage.

Garbage Collection in Go

  1. Non-Generational Design: Unlike the collectors of Java and many other runtimes, Go’s garbage collector does not divide objects into generations by lifespan. It relies instead on cheap allocation, escape analysis, and a concurrent mark-and-sweep algorithm to keep pause times short. In restaurant terms, rather than sorting patrons into regulars and newcomers, the staff continuously walks the entire floor, clearing any table whose guests have left.

  2. Concurrent Mark and Sweep (CMS): Go’s garbage collector uses a concurrent mark-and-sweep algorithm. The “mark” phase traces and marks every object that is still reachable, and the “sweep” phase frees the memory of everything left unmarked. This is comparable to the wait staff continuously walking the floor, noting which tables are still occupied and clearing the rest for new customers.

In Go, the concurrent mark and sweep (CMS) process works in three main phases:

a. Marking phase: This phase identifies all the reachable objects. Starting from the roots, which are global variables and local variables on the stack, the garbage collector traces all reachable objects and marks them as live.

b. Sweeping phase: This phase comes after the marking phase. Here, the garbage collector scans the heap and frees up the memory for objects that were not marked as live in the marking phase.

c. Pause phases: The collector does briefly stop the world, pausing all goroutines twice per cycle: once when marking begins (to set up the write barrier) and once at mark termination. These pauses are typically a small fraction of a millisecond; the bulk of marking and sweeping runs concurrently with the program.

  3. Tri-color Marking: To avoid stopping the program for the whole collection, Go’s collector uses a tri-color marking algorithm. Objects are white (not yet visited, candidates for collection), grey (reachable, but their references have not yet been scanned), or black (reachable and fully scanned). In the restaurant, white tables have not been checked yet, grey tables are currently being waited on, and black tables have been fully served and need no further attention.

Improving Go Code for Memory Efficiency and Performance

  1. Avoid Global Variables: Keep global variables to a minimum: because they live for the entire life of the program, anything they reference can never be garbage collected. This is equivalent to holding a table aside indefinitely for a diner who only occasionally visits our hypothetical restaurant.
// A placeholder type for illustration
type Type struct{ buf [1024]byte }

// A global variable
var global *Type

func badFunc() {
    var local Type
    // local escapes to the heap and stays reachable through global
    global = &local
}

func main() {
    badFunc()
    // `global` still points at the memory allocated for `local`,
    // so the garbage collector can never reclaim it for the
    // remaining life of the program.
}

badFunc creates a local variable local and assigns its address to the global variable global. Escape analysis makes this safe in Go: local is moved to the heap rather than becoming a dangling pointer. But because global keeps referencing it, that memory can never be freed; if the value is no longer needed, this is effectively a memory leak.

The solution is to avoid such unnecessary use of global variables. If you need to share data between different parts of your program, consider using function parameters, return values, or struct fields instead. Here is how you might fix the above code:

type MyStruct struct {
    field Type
}

func goodFunc(s *MyStruct) {
    var local Type
    s.field = local
}

func main() {
    var s MyStruct
    goodFunc(&s)
    // Now `s.field` holds the value of `local`, which was copied.
    // There is no memory leak because `local`'s memory can be safely released after `goodFunc` returns.
}

goodFunc takes a pointer to a MyStruct and assigns the value of local to its field. This way, local's memory can be safely released after goodFunc returns, avoiding the memory leak.

  2. Use Pointers Wisely: When working with large data structures, passing pointers instead of copies can save memory. Be careful, though, not to hold on to references longer than necessary, which keeps memory alive and works against the garbage collector. This is comparable to seating patrons efficiently in a restaurant while avoiding congestion or confusion.
package main

import "fmt"

type BigStruct struct {
    data [1 << 20]int // roughly 8 MB of int data
}

func newBigStruct() *BigStruct {
    var bs BigStruct
    return &bs
}

func main() {
    bs := newBigStruct()
    fmt.Println(bs.data[0])
}

Unlike in C, this code is perfectly safe in Go: escape analysis notices that bs is returned from newBigStruct, so the struct is allocated on the heap and the returned pointer stays valid for as long as anything references it. Using the new built-in is simply an equivalent way to express the same heap allocation:

func newBigStruct() *BigStruct {
    bs := new(BigStruct)
    return bs
}

func main() {
    bs := newBigStruct()
    fmt.Println(bs.data[0])
    // Dropping the reference lets the garbage collector reclaim the memory.
    // (This matters mainly in long-lived functions; main returns right after anyway.)
    bs = nil
}

Both forms allocate the BigStruct on the heap, and its memory is reclaimed once nothing references it anymore. The wise use of pointers here is twofold: returning a single pointer avoids copying a multi-megabyte struct, and dropping references once the data is no longer needed lets the garbage collector do its job.

  3. Pool Resources: For allocation-heavy workloads, consider the sync.Pool type to reuse objects instead of allocating new ones. This reduces garbage collection overhead and conserves memory. In a restaurant, it is like reusing table settings for new customers instead of always laying out new ones.
package main

import (
    "fmt"
    "sync"
    "time"
)

// We'll be pooling these ExpensiveResource types.
type ExpensiveResource struct {
    id int
}

func main() {
    // Create a pool of ExpensiveResource objects.
    pool := &sync.Pool{
        New: func() interface{} {
            fmt.Println("Creating new resource")
            return &ExpensiveResource{id: time.Now().Nanosecond()}
        },
    }

    // Allocate a new ExpensiveResource and put it in the pool.
    resource := pool.Get().(*ExpensiveResource)
    pool.Put(resource)

    // When we need to use the resource, get it from the pool.
    resource2 := pool.Get().(*ExpensiveResource)
    fmt.Println("Resource ID:", resource2.id)
    pool.Put(resource2)
}

Here we create a sync.Pool of ExpensiveResource objects and define a New function that the pool calls to create a fresh resource whenever it is empty.

We then use pool.Get() to fetch an ExpensiveResource from the pool; if the pool is empty, it calls our New function to create one. When we're done, we return the resource with pool.Put(resource).

This way, we can reuse ExpensiveResource objects instead of allocating new ones every time we need one, saving memory and reducing garbage collection overhead. In the restaurant analogy, this is like reusing table settings for new customers instead of always setting new ones.

  4. Limit the Scope of Variables: Keep each variable’s scope as small as possible and release resources when they are no longer required. This makes memory management more effective and lets the garbage collector reclaim memory sooner. It is the equivalent of promptly wiping down tables after customers have left in our restaurant example.
package main

import (
    "fmt"
)

func main() {
    // This variable is in scope for the whole function
    wholeFunctionScope := "I'm available in the whole function"

    fmt.Println(wholeFunctionScope)

    {
        // This variable is in scope only inside this block
        limitedScope := "I'm available only in this block"

        fmt.Println(limitedScope)
    } // limitedScope goes out of scope here, so the GC may reclaim its memory

    // Uncommenting this line would cause a compilation error,
    // because limitedScope is not in scope here:
    // fmt.Println(limitedScope)
}

wholeFunctionScope has the scope of the entire function, while limitedScope only exists within the block of code where it's defined. By limiting the scope of limitedScope, we ensure that the memory it uses can be released as soon as we're done with it, which in this case is at the end of the block.

This practice is akin to promptly clearing tables after customers have left in a restaurant, freeing up resources (table space in the restaurant, memory in our program) for new customers (new variables).

  5. Optimize Data Structures: Choose data structures with their memory requirements in mind, for example preferring slices and maps over fixed arrays and raw pointers. The right structure reduces allocation churn and makes the garbage collector’s job easier. This is the equivalent of choosing the most practical seating arrangement in a restaurant.
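As a concrete illustration (the element count here is arbitrary), preallocating a slice’s capacity with make avoids the repeated reallocation and copying that growing via append incurs:

```go
package main

import "fmt"

func main() {
	const n = 10000

	// Growing from zero capacity: append must repeatedly allocate a
	// larger backing array and copy the old elements over.
	var grown []int
	for i := 0; i < n; i++ {
		grown = append(grown, i)
	}

	// Preallocating: one backing array, no copying during the loop.
	prealloc := make([]int, 0, n)
	for i := 0; i < n; i++ {
		prealloc = append(prealloc, i)
	}

	fmt.Println("grown:    len", len(grown), "cap", cap(grown))
	fmt.Println("prealloc: len", len(prealloc), "cap", cap(prealloc)) // cap stays 10000
}
```

Fewer intermediate backing arrays means less garbage for the collector to sweep up later.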

  6. Profile and Benchmark: Regularly profile and benchmark your Go code to identify memory bottlenecks and optimize performance. The pprof tool and the -benchmem flag of go test can help analyze memory usage and find areas for improvement. This is comparable to a restaurant manager observing and analyzing customer flow to optimize operations.

In conclusion, writing efficient and effective Go programs requires a solid understanding of memory management and garbage collection. Our Go code should allocate memory deliberately, keep variable scopes small, use structures like slices and maps that work well with the garbage collector, and avoid pitfalls like unnecessary global variables or lingering pointer references. This is similar to how a well-managed restaurant optimizes seating arrangements and diligently clears tables for new customers.

Go’s garbage collector is a strong ally, but it’s not a magic wand. It needs our help to do its best work, which is where good programming practices come in. Tools like pprof and the -benchmem flag let us continuously monitor and improve our code’s memory usage.

Memory management in Go ultimately resembles a dance between the programmer and garbage collector. A stunning, high-performance application that makes the most of system resources is the outcome when both partners are aware of their responsibilities and work together harmoniously.

So let’s don our dancing shoes and begin coding more intelligently and effectively. Happy Go programming!
