Souvik Kar Mahapatra
Arrays vs Slices in Go: Understanding the "under the hood" functioning visually


Have you ever packed for a trip without knowing how long you'll be away? That's precisely what happens when we store data in Go. Sometimes we know exactly how many things we need to store, like packing for a weekend trip; other times we don't, like packing for a trip where we say, "I'll return when I'm ready."

Let's take a deep dive into the world of Go arrays and slice internals through simple illustrations. We will look into:

  1. Memory layouts
  2. Growth mechanisms
  3. Reference semantics
  4. Performance implications

By the end of this read, you'll understand when to use arrays and when to use slices, with the help of real-world examples and memory diagrams.

Arrays: The Fixed-Size Container 📦

Think of an array as a single block of memory where each element sits next to each other, like a row of perfectly arranged boxes.

When you declare var numbers [5]int, Go reserves exactly enough contiguous memory to hold 5 integers, no more, no less.

[Image: array memory layout in Go]

Since an array occupies a fixed, contiguous block of memory, it cannot be resized at runtime.

func main() {
    // Zero-value initialization
    var nums [3]int    // Creates [0,0,0]

    // Fixed size
    nums[4] = 1       // Runtime panic: index out of range

    // Sized during compilation
    size := 5
    var dynamic [size]int  // Won't compile: non-constant array bound
}

[Image: the size is part of the array's type in Go]

The size is part of the array's type. This means [5]int and [6]int are completely different types, just like int and string are different.

func main() {
    // Different types!
    var a [5]int
    var b [6]int

    // This won't compile
    a = b // compile error: cannot use b (type [6]int) as type [5]int

    // But this works
    var c [5]int
    a = c // Same types, allowed
}

Why Are Arrays Copied by Default?

When you assign or pass an array in Go, it is copied by default. This ensures data isolation and prevents unexpected mutations.

[Image: array pass by value vs pass by reference]

func modifyArrayCopy(arr [5]int) {
    arr[0] = 999    // Modifies the copy, not original
}

func modifyArray(arr *[5]int){
    arr[0] = 999  // Modifies the original, since reference is passed
}

func main() {
    numbers := [5]int{1, 2, 3, 4, 5}

    modifyArrayCopy(numbers)
    fmt.Println(numbers[0])  // prints 1, not 999

    modifyArray(&numbers)
    fmt.Println(numbers[0])  // prints 999
}

Slices

Alright, so you can't write var dynamic [size]int to get a dynamic size. This is where slices come into play.

Slices under the hood

The magic lies in how a slice maintains this flexibility while keeping operations fast.

Every slice in Go consists of three critical components:

[Image: slice memory layout and internal structure]

type slice struct {
    array unsafe.Pointer // Points to the actual data
    len   int           // Current number of elements
    cap   int           // Total available space
}

What's unsafe.Pointer??

The unsafe.Pointer is Go's way of handling raw memory addresses without type safety constraints. It's "unsafe" because it bypasses Go's type system, allowing direct memory manipulation.

Think of it as Go's equivalent of C's void* pointer.

What's that array?

When you create a slice, Go allocates a contiguous block of memory, called the backing array, on the heap (unlike arrays, which can live on the stack). The array field in the slice struct points to the start of that memory block.

The array field uses unsafe.Pointer because:

  1. It needs to point to raw memory without type information
  2. It allows Go to implement slices for any type T without generating separate code for each type.

The dynamic mechanism of slice

Let's try to develop some intuition for the actual algorithm under the hood.

[Image: intuition behind the dynamic mechanism of slices]

If we go by intuition, we could do one of two things:

  1. Set aside a very large amount of space up front and use it as needed.
    Pros: handles growing needs up to a point.
    Cons: wastes memory, and we can still hit the limit in practice.

  2. Pick some initial size and reallocate the memory on every append.
    Pros: handles the previous case and can grow as needed.
    Cons: reallocation is expensive, and paying for it on every append gets worse as the slice grows.

We cannot avoid reallocation entirely: when the capacity is hit, the slice has to grow. What we can do is minimize how often it happens, so that the cost of subsequent inserts/appends is constant on average (O(1)). This is called amortized cost.

How can we go about it?

Until Go 1.17, the following formula was used:

// Old growth pattern
capacity = oldCapacity * 2  // Simple doubling

From Go 1.18 onward (simplified):

// New growth pattern
if capacity < 256 {
    capacity = capacity * 2
} else {
    capacity = capacity + capacity/4  // 25% growth
}

Since doubling a large slice wastes memory, the growth factor decreases as the slice gets bigger.

Let's get a better understanding from a usage perspective.

[Image: slice append in Go, visualized]

numbers := make([]int, 3, 5) // length=3, capacity=5

// Memory Layout after creation:
Slice Header:
{
    array: 0xc0000b2000    // Example memory address
    len:   3
    cap:   5
}

Backing Array at 0xc0000b2000:
[0|0|0|unused|unused]

Let's add some elements to our slice:

numbers = append(numbers, 10)

Since capacity (5) > length (3), Go:

  1. Uses the existing backing array
  2. Places 10 at index 3
  3. Increases the length by 1

// Memory Layout after first append:
Slice Header:
{
    array: 0xc0000b2000    // Same memory address!
    len:   4               // Increased
    cap:   5               // Same
}

Backing Array at 0xc0000b2000:
[0|0|0|10|unused]

Let's hit the limit

numbers = append(numbers, 20)  // Uses last slot
numbers = append(numbers, 30)  // Needs to grow!

Oops! We've hit our capacity and need to grow. Here is what happens:

  1. Calculates the new capacity (oldCap < 256, so it doubles to 10)
  2. Allocates a new backing array at a new memory address
  3. Copies the existing elements to the new backing array
  4. Adds the new element
  5. Updates the slice header

[Image: memory reallocation for a slice in Go]

// Memory Layout after growth:
Old Backing Array at 0xc0000b2000:
[0|0|0|10|20]          // Will be garbage collected

New Slice Header:
{
    array: 0xc0000c8000    // New memory address!
    len:   6
    cap:   10              // Doubled
}

New Backing Array at 0xc0000c8000:
[0|0|0|10|20|30|unused|unused|unused|unused]

What happens if it's a large slice?

// Create slice with 256 elements
big := make([]int, 256)

// Append one more element
big = append(big, 1)

Since the capacity is at least 256, Go takes the post-1.18 growth path. By the simplified formula:

New capacity = oldCap + oldCap/4
256 + 256/4 = 256 + 64 = 320

// Memory Layout after growth (simplified formula)
New Slice Header:
{
    array: 0xc0000c8000    // New memory address
    len:   257
    cap:   320             // ~25% growth
}

Note that the runtime's actual formula transitions smoothly from doubling (newcap += (newcap + 3*256) / 4) and rounds the result up to an allocation size class, so the capacity you observe in practice can be larger; 25% is the asymptotic rate for very large slices.

Why reference semantics?

  1. Performance: Copying large data structures is expensive
  2. Memory efficiency: Avoiding unnecessary data duplication
  3. Enabling shared views of data: Multiple slices can reference the same backing array
original := []int{10, 20, 30, 40, 50}
slice1 := original[1:3]    // [20, 30]
slice2 := original[2:4]    // [30, 40]

slice1[1] = 999           // Changes 30 to 999

This is how the slice headers will look:

Original Slice Header:
{
    array: 0xc0000b6000
    len:   5
    cap:   5
}

Slice1 Header:
{
    array: 0xc0000b6008    // Points to second element
    len:   2
    cap:   4               // Until end of original
}

Slice2 Header:
{
    array: 0xc0000b6010    // Points to third element
    len:   2
    cap:   3               // Until end of original
}

Backing Array at 0xc0000b6000:
[10|20|999|40|50]         // 999 is visible to all slices!

Usage patterns and cautions for slice

Accidental updates

Since slices use reference semantics, re-slicing doesn't copy the data, which can lead to accidental mutation of the original slice if you're not careful.

original := []int{1, 2, 3, 4, 5}
slice1 := original[1:3]
slice2 := original[2:4]
slice1[1] = 999
// Now slice2[0] is also 999!

Expensive append operations

s := make([]int, 0)
for i := 0; i < 10000; i++ {
    // Each append might cause reallocation
    s = append(s, i)
}

// Better approach:
s := make([]int, 0, 10000)  // Pre-allocate capacity

Copy vs Append

// May reallocate a new backing array and copy, each time s runs out of capacity
s = append(s, source...)

// More efficient for known sizes:
dest := make([]int, len(source))
copy(dest, source)


Let's wrap this up with a clear choice guide:

🎯 Choose Arrays When:

  1. You know the exact size upfront
  2. Working with small, fixed data (like coordinates, RGB values)
  3. Performance is critical and the data fits on the stack
  4. You want type safety with size

🔄 Choose Slices When:

  1. Size might change
  2. Working with dynamic data
  3. Need multiple views of the same data
  4. Processing streams/collections

📚 Check out notion-to-md project! It's a tool that converts Notion pages to Markdown, perfect for content creators and developers. Join our discord community.
