Tony Rowan for AppSignal

Originally published at blog.appsignal.com

A Deep Dive into Memory Leaks in Ruby

In the first part of this two-part series on memory leaks, we looked at how Ruby manages memory and how Garbage Collection (GC) works.

You might be able to afford powerful machines with more memory, and your app might restart often enough that your users don't notice, but memory usage matters.

Allocation and Garbage Collection aren't free. If you have a leak, you spend more and more time on Garbage Collection instead of doing what you built your app to do.

In this post, we'll look deeper into the tools you can use to discover and diagnose a memory leak.

Let's continue!

Finding Leaks in Ruby

Detecting a leak is simple enough. You can use GC, ObjectSpace, and the RSS graphs in your APM tool to watch your memory usage increase. But just knowing you have a leak is not enough to fix it. You need to know where it is coming from. Raw numbers can't tell you that.
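Before reaching for dedicated profilers, you can take rough measurements yourself. Here's a minimal sketch (my own illustration, not part of any gem) that samples live-object counts via GC.stat and ObjectSpace, plus the process RSS via ps on Unix-like systems:

# A rough leak check: run this around suspected code and watch whether
# the numbers keep climbing across iterations.
5.times do
  GC.start # force a collection so we only count objects that survive GC

  live_slots = GC.stat(:heap_live_slots)
  strings    = ObjectSpace.count_objects[:T_STRING]
  rss_kb     = `ps -o rss= -p #{Process.pid}`.to_i # resident set size in KB

  puts "live slots: #{live_slots}, strings: #{strings}, RSS: #{rss_kb} KB"
  sleep 1
end

If all three numbers keep growing while your app does the same work repeatedly, a leak is likely — but you still won't know where it is.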

Fortunately, the Ruby ecosystem has some great tools to attach context to those numbers. Two of them are memory_profiler and derailed_benchmarks.

memory_profiler in Ruby

The memory_profiler gem offers a very simple API and a detailed (albeit a little overwhelming) report of allocated and retained memory — including the classes of the allocated objects, their sizes, and where they were allocated. It's straightforward to add to our leaky program.

# leaky.rb
require "memory_profiler"

an_array = []

report = MemoryProfiler.report do
  11.times do

    1000.times { an_array << "A" + "B" + "C" }
    puts "Array is #{an_array.size} items long"
  end

  GC.start
end

report.pretty_print

Running this outputs a report similar to the following:

Total allocated: 440072 bytes (11001 objects)
Total retained:  440072 bytes (11001 objects)

allocated memory by gem
-----------------------------------
    440072  other

allocated memory by file
-----------------------------------
    440072  ./leaky.rb

allocated memory by location
-----------------------------------
    440000  ./leaky.rb:9
        72  ./leaky.rb:10

allocated memory by class
-----------------------------------
    440000  String
        72  Thread::Mutex

allocated objects by gem
-----------------------------------
     11001  other

allocated objects by file
-----------------------------------
     11001  ./leaky.rb

allocated objects by location
-----------------------------------
     11000  ./leaky.rb:9
         1  ./leaky.rb:10

allocated objects by class
-----------------------------------
     11000  String
         1  Thread::Mutex

retained memory by gem
-----------------------------------
    440072  other

retained memory by file
-----------------------------------
    440072  ./leaky.rb

retained memory by location
-----------------------------------
    440000  ./leaky.rb:9
        72  ./leaky.rb:10

retained memory by class
-----------------------------------
    440000  String
        72  Thread::Mutex

retained objects by gem
-----------------------------------
     11001  other

retained objects by file
-----------------------------------
     11001  ./leaky.rb

retained objects by location
-----------------------------------
     11000  ./leaky.rb:9
         1  ./leaky.rb:10

retained objects by class
-----------------------------------
     11000  String
         1  Thread::Mutex


Allocated String Report
-----------------------------------
     11000  "ABC"
     11000  ./leaky.rb:9


Retained String Report
-----------------------------------
     11000  "ABC"
     11000  ./leaky.rb:9


There is a lot of information here, but generally, the allocated objects by location and retained objects by location sections can be the most useful when looking for leaks. These are the file locations that allocate objects, ordered by the number of allocated objects.

  • allocated objects are all objects allocated (created) within the report block.
  • retained objects are objects that have not been garbage collected by the end of the report block. We forced a GC run before the end of the block so we could see the leaked objects more clearly.

Be careful about trusting the retained object counts. They depend heavily on what portion of the leaking code is within the report block.

For example, if we move the declaration of an_array into the report block, we might be fooled into thinking the code isn't leaky.

# leaky.rb
require "memory_profiler"

report = MemoryProfiler.report do
  an_array = []

  11.times do

    1000.times { an_array << "A" + "B" + "C" }
    puts "Array is #{an_array.size} items long"
  end

  GC.start
end

report.pretty_print

The top of the resulting report won't show many retained objects (just the report object itself).

Total allocated: 529784 bytes (11002 objects)
Total retained:  72 bytes (1 objects)

derailed_benchmarks in Ruby

The derailed_benchmarks gem is a suite of very useful tools for all kinds of performance work, primarily aimed at Rails apps. For finding leaks, we want to look at perf:mem_over_time, perf:objects, and perf:heap_diff.

These tasks work by sending curl requests to a running app, so we can't add them to our little leaky script. Instead, we'll set up a small Rails app with an endpoint that leaks memory, then add derailed_benchmarks to that app.

# Create a rails app with no database
rails new leaky --skip-active-record --minimal

## Add derailed benchmarks
cd leaky
bundle add derailed_benchmarks
Then wire up a route, a controller action that leaks into a global array, and an initializer that defines the array:
# config/routes.rb
Rails.application.routes.draw do
  root "leaks#index"
end

# app/controllers/leaks_controller.rb
class LeaksController < ApplicationController
  def index
    1000.times { $an_array << "A" + "B" + "C" }

    render plain: "Array is #{$an_array.size} items long"
  end
end

# config/initializers/array.rb
$an_array = []

You should now be able to boot the app with bin/rails s and curl an endpoint that leaks on each request:

$ curl http://localhost:3000

Array is 1000 items long

$ curl http://localhost:3000

Array is 2000 items long

We can now use derailed_benchmarks to see our leak in action.

perf:mem_over_time

This will show us memory use over time (similarly to how we watched the memory growth of our leaky script with watch and ps).

Derailed will boot the app in production mode, repeatedly hit an endpoint (/ by default), and report the memory usage. If it never stops growing, we have a leak!

$ TEST_COUNT=10000 DERAILED_SKIP_ACTIVE_RECORD=true \
  bundle exec derailed exec perf:mem_over_time

Booting: production
Endpoint: "/"
PID: 4417
104.33984375
300.609375
455.578125
642.69140625
751.6953125

Note: Derailed will boot the Rails app in production mode to perform the tests. By default, it will also require rails/all first. Since we don't have a database in this app, we need to override this behavior with DERAILED_SKIP_ACTIVE_RECORD=true.

We can run this benchmark against different endpoints to see which ones (if any) leak.
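Derailed hits / by default; the PATH_TO_HIT environment variable points it at a different endpoint. For example (assuming a hypothetical /scores endpoint):

$ TEST_COUNT=10000 DERAILED_SKIP_ACTIVE_RECORD=true PATH_TO_HIT=/scores \
  bundle exec derailed exec perf:mem_over_time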

perf:objects

The perf:objects task uses memory_profiler under the hood, so the report it produces will look familiar.

$ TEST_COUNT=10 DERAILED_SKIP_ACTIVE_RECORD=true \
  bundle exec derailed exec perf:objects

Booting: production
Endpoint: "/"
Running 10 times
Total allocated: 2413560 bytes (55476 objects)
Total retained:  400000 bytes (10000 objects)

# The rest of the report...

This report can help narrow down where your leaked memory is being allocated. In our example, the report's last section — the Retained String Report — tells us exactly what our problem is.

Retained String Report
-----------------------------------
     10000  "ABC"
     10000  /Users/tonyrowan/playground/leaky/app/controllers/leaks_controller.rb:3

We've leaked 10,000 strings containing "ABC" from the LeaksController on line 3. In a non-trivial app, this report would be significantly larger and contain retained strings that you want to retain — query caches, etc. — but this and the other 'by location' sections should help you narrow down your leak.

perf:heap_diff

The perf:heap_diff benchmark can help if the report from perf:objects is too complex to see where your leak is coming from.

As the name suggests, perf:heap_diff produces three heap dumps and calculates the difference between them. It creates a report that includes the types of objects retained between dumps and the location that allocated them.

$ DERAILED_SKIP_ACTIVE_RECORD=true bundle exec derailed exec perf:heap_diff

Booting: production
Endpoint: "/"
Running 1000 times
Heap file generated: "tmp/2022-06-15T11:08:28+01:00-heap-0.ndjson"
Running 1000 times
Heap file generated: "tmp/2022-06-15T11:08:28+01:00-heap-1.ndjson"
Running 1000 times
Heap file generated: "tmp/2022-06-15T11:08:28+01:00-heap-2.ndjson"

Diff
====
Retained STRING 999991 objects of size 39999640/40008500 (in bytes) at: /Users/tonyrowan/playground/leaky/app/controllers/leaks_controller.rb:3
Retained STRING 2 objects of size 148/40008500 (in bytes) at: /Users/tonyrowan/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/derailed_benchmarks-2.1.1/lib/derailed_benchmarks/tasks.rb:265
Retained STRING 1 objects of size 88/40008500 (in bytes) at: /Users/tonyrowan/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/derailed_benchmarks-2.1.1/lib/derailed_benchmarks/tasks.rb:266
Retained DATA 1 objects of size 72/40008500 (in bytes) at: /Users/tonyrowan/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/objspace.rb:87
Retained IMEMO 1 objects of size 40/40008500 (in bytes) at: /Users/tonyrowan/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/objspace.rb:88
Retained IMEMO 1 objects of size 40/40008500 (in bytes) at: /Users/tonyrowan/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/derailed_benchmarks-2.1.1/lib/derailed_benchmarks/tasks.rb:259
Retained IMEMO 1 objects of size 40/40008500 (in bytes) at: /Users/tonyrowan/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/derailed_benchmarks-2.1.1/lib/derailed_benchmarks/tasks.rb:260
Retained FILE 1 objects of size 8432/40008500 (in bytes) at: /Users/tonyrowan/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/derailed_benchmarks-2.1.1/lib/derailed_benchmarks/tasks.rb:266

Run `$ heapy --help` for more options

You can also read Tracking a Ruby memory leak in 2021 to better understand what's going on.

The report points us exactly where we need to go in our little leaky app. At the top of the diff, we see 999991 retained string objects allocated from the LeaksController on line 3.
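If you want to take heap dumps by hand — around a specific code path rather than whole requests — Ruby's standard objspace library can produce the same kind of .ndjson dump, which you can inspect with the heapy gem. A minimal sketch:

# Dump the heap after exercising the suspected code, then inspect the
# file with heapy (`gem install heapy`, then `heapy read heap.ndjson`).
require "objspace"

ObjectSpace.trace_object_allocations_start # record allocation file/line info

# ... exercise the suspected leaky code here ...

GC.start # sweep garbage first, so the dump contains mostly live objects
File.open("heap.ndjson", "w") do |file|
  ObjectSpace.dump_all(output: file)
end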

Leaks in Real Ruby and Rails Apps

Hopefully, the examples we've used so far have never been put into real-life apps — I hope no one intends to leak memory!

In non-trivial apps, memory leaks can be much harder to track down. Retained objects are not always bad — a cache whose items got garbage collected would not be of much use.

There is something common to all leaks, though: somewhere, a root-level object (a class, a global, etc.) holds a reference to an object that is no longer needed.

One common example is a cache without a limit or an eviction policy. By definition, this will leak memory since every object put into the cache will remain forever. Over time, this cache will occupy more and more of the memory of an app, with a smaller and smaller percentage of it actually in use.

Consider the following code that fetches a user's high score for a game — it's similar to something I've seen in the past. The query is expensive, and we can easily bust the cache when the high score changes, so we want to cache it.

class Score < ApplicationRecord
  def self.user_high_score(game, user)
    @scores ||= {}

    if (score = @scores["#{game.id}:#{user.id}"])
      score
    else
      # Descending order: we want the highest score.
      Score.where(game: game, user: user).order(score: :desc).first.tap do |score|
        @scores["#{game.id}:#{user.id}"] = score
      end
    end
  end

  def self.save_score(game, user, raw_score)
    score = create!(game: game, user: user, score: raw_score)

    if raw_score > user_high_score(game, user).score
      @scores["#{game.id}:#{user.id}"] = score
    end
  end
end

The @scores hash is completely unchecked. It will grow to hold every single high score for every user — not ideal if you have a lot of either.

In a Rails app, we would probably want to use Rails.cache with a sensible expiry instead (a memory leak in Redis is still a memory leak!).
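Here's a sketch of what that could look like for the same Score model (the key names and one-hour expiry are illustrative):

class Score < ApplicationRecord
  def self.user_high_score(game, user)
    # Entries expire on their own, so the cache cannot grow without bound.
    Rails.cache.fetch("high_score:#{game.id}:#{user.id}", expires_in: 1.hour) do
      Score.where(game: game, user: user).order(score: :desc).first
    end
  end

  def self.save_score(game, user, raw_score)
    score = create!(game: game, user: user, score: raw_score)

    # Bust the cache so the next read recomputes the high score.
    Rails.cache.delete("high_score:#{game.id}:#{user.id}")
    score
  end
end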

In a non-Rails app, we want to limit the hash size, evicting the oldest or least recently used items. LruRedux is a nice implementation.
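For instance, a bounded cache using the lru_redux gem might look like this (a sketch; the cache size of 1,000 is arbitrary):

require "lru_redux"

# Holds at most 1,000 entries; the least recently used entry is evicted
# when a new one is added to a full cache.
SCORE_CACHE = LruRedux::Cache.new(1_000)

def user_high_score(game, user)
  # getset returns the cached value, or computes and stores it on a miss.
  SCORE_CACHE.getset("#{game.id}:#{user.id}") do
    Score.where(game: game, user: user).order(score: :desc).first
  end
end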

A more subtle version of this leak is a cache that has a limit, but whose keys are of arbitrary size. If the keys themselves grow, so will the cache. Usually, you won't hit this. But if you're serializing objects as JSON and using that as a key, double-check that you're not serializing things that grow with usage as well — such as a list of a user's read messages.
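One defensive option is to hash large keys down to a fixed size before using them. A sketch with a hypothetical payload:

require "digest"
require "json"

# The serialized payload may grow with usage, but the SHA-256 digest keeps
# the cache key at 64 hex characters no matter how big the payload gets.
payload = { user_id: 1, read_message_ids: (1..10_000).to_a }
cache_key = "inbox:#{Digest::SHA256.hexdigest(payload.to_json)}"

puts cache_key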

Circular References

Circular references can be garbage collected. Garbage Collection in Ruby uses the "Mark and Sweep" algorithm. During their presentation introducing variable width allocation, Peter Zhu and Matt Valentine-House gave an excellent explanation of how this algorithm works.

Essentially, there are two phases: marking and sweeping.

  • In the marking phase, the garbage collector starts at root objects (classes, globals, etc.), marks them, and then looks at the objects they reference. It marks all of those referenced objects; objects that are already marked are not looked at again. This continues until there are no more objects to look at — i.e., all reachable objects have been marked.

  • The garbage collector then moves on to the sweeping phase. Any object not marked is cleaned up.

This means an object can be cleaned up even if other objects still reference it. As long as no root object (directly or transitively) references an object, it will be swept. In this way, clusters of objects with circular references can still be garbage collected.
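We can see this in action with a small script (my own illustration). Two objects reference each other, but once nothing reachable from a root points at them, a GC run collects both:

# Node instances reference each other, forming a cycle.
class Node
  attr_accessor :other
end

a = Node.new
b = Node.new
a.other = b
b.other = a # circular: a -> b -> a

a = b = nil # drop the only root-reachable references
GC.start

# Expected output: 0. (MRI scans the stack conservatively, so stale
# references can very occasionally keep objects alive a little longer.)
puts ObjectSpace.each_object(Node).count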

Application Performance Monitoring: The Event Timeline and Allocated Objects Graph

As mentioned in the first part of this series, any production-level app should use some form of Application Performance Monitoring (APM).

Many options are available, including rolling your own (only recommended for larger teams). One key feature you should get from an APM is the ability to see the number of allocations an action (or background job) makes. Good APM tools will break this down, giving insight into where allocations come from — the controller, the view, etc.

This is often called something like an 'event timeline.' Bonus points if your APM allows you to write custom code that further breaks down the timeline.

Consider the following code for a Rails controller.

# $leak is assumed to be initialized elsewhere — e.g., $leak = [] in an initializer.
class LeaksController < ApplicationController
  before_action :leak

  def index
    @leaks = $leak.sample(100)
  end

  private

  def leak
    1000.times { $leak << Leak.new }
  end
end

When reported by an APM, the 'event timeline' might look something like the following screenshot from AppSignal.

[Screenshot: a bare event timeline in AppSignal]

This can be instrumented so we can see which part of the code makes the allocations in the timeline. In real apps, it is probably going to be less obvious from the code 😅

class LeaksController < ApplicationController
  before_action :leak

  def index
    Appsignal.instrument('leak.fetch_leaks') do
      @leaks = $leak.sample(100)
    end
  end


  private

  def leak
    return unless params[:leak]

    Appsignal.instrument('leak.create_leaks') do
      1000.times { $leak << Leak.new }
    end
  end
end


Here's an example of an instrumented event timeline, again from AppSignal:

[Screenshot: an instrumented event timeline in AppSignal]

Knowing where to instrument can often be difficult. There's no substitute for really understanding your application's code, but there are some signals that can serve as 'smells'.

If your APM surfaces GC runs or allocations over time, you can look for spikes to see if they match up with certain endpoints being hit or certain running background jobs. Here's another example from AppSignal's Ruby VM magic dashboard:

[Screenshot: the allocations graph in AppSignal's Ruby VM magic dashboard]

By looking at allocations in this way, we can narrow down our search when looking into memory problems. This makes it much easier to use tools like memory_profiler and derailed_benchmarks efficiently.

Read about the latest additions to AppSignal's Ruby gem, like allocation and GC stats tracking.

Wrapping Up

In this post, we dived into some tools that can help find and fix memory leaks: memory_profiler, derailed_benchmarks (with its perf:mem_over_time, perf:objects, and perf:heap_diff tasks), and the event timeline and allocated objects graph in AppSignal.

I hope you've found this post, alongside part one, useful in diagnosing and sorting out memory leaks in your Ruby app.


Happy coding!

P.S. If you'd like to read Ruby Magic posts as soon as they get off the press, subscribe to our Ruby Magic newsletter and never miss a single post!
