Sophie DeBenedetto for AppSignal

Originally published at blog.appsignal.com

Observe Your Phoenix App with Structured Logging

In this post, we'll configure a Phoenix LiveView application to use a structured logger. We'll then use AppSignal to correlate log events with other telemetry signals, like exception reports and traces.

Along the way, you'll learn about the benefits of structured logging, and you'll see how to configure a distinct framework and application logger in your Phoenix app.

Let's get started!

What is Structured Logging?

Structured logs conform to a specific structure in which an agreed-upon set of key/value pairs describe common log attributes. Without structured logging, you might get a log line that looks something like this:

20:34:05.879 request_id=F1ssWNn62XasBTYAAAEj [info] GET /
20:34:05.911 request_id=F1ssWNn62XasBTYAAAEj [info] hello from the page controller
20:34:05.919 request_id=F1ssWNn62XasBTYAAAEj [info] Sent 200 in 39ms

A few bits of information make up each log statement:

  • a timestamp
  • a level (in this case, info)
  • a log message
  • a bit of metadata (the request_id)

In this unstructured format, all the data is included in one string, so it's difficult to search and analyze log statements based on each data point.

Structured logging solves this problem for us. It breaks each piece of information into its own key/value pair. Here's an example of these same log statements structured using the logfmt format:

timestamp="20:38:15.973 2023-05-01" level=info message="GET /" request_id=F1sskxS1k-w0jUIAAAKB
timestamp="20:38:16.013 2023-05-01" level=info message="hello from the page controller" request_id=F1sskxS1k-w0jUIAAAKB
timestamp="20:38:16.031 2023-05-01" level=info message="Sent 200 in 58ms" request_id=F1sskxS1k-w0jUIAAAKB

Now you can search and analyze log statements based on the timestamp, level, message, and request_id fields. For example, you could search all log statements with a shared request_id, filter log lines by their level key, or search logs within a range of timestamps.
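
To make that concrete, here's a small throwaway sketch (not part of our app, and the regex is only illustrative) showing how a logfmt line breaks cleanly into key/value pairs that you can filter on:

# A throwaway illustration: parse one logfmt line into a map of fields.
line =
  ~s(timestamp="20:38:15.973 2023-05-01" level=info message="GET /" request_id=F1sskxS1k-w0jUIAAAKB)

fields =
  ~r/(\w+)=("[^"]*"|\S+)/
  |> Regex.scan(line)
  |> Map.new(fn [_match, key, value] -> {key, String.trim(value, "\"")} end)

fields["request_id"]
#=> "F1sskxS1k-w0jUIAAAKB"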

Now that we can see the benefits of using structured logs, we can configure our Phoenix LiveView application to use the logfmt format in development and production. But first, we'll take a closer look at how Phoenix apps configure their loggers out of the box.

Logger Configuration in Phoenix

By default, your Phoenix application will configure its logger to output unstructured logs to the console. Open up your config.exs file, and you'll see something like this:

config :logger, :console,
  format: "$time $metadata[$level] $message\n",
  metadata: [:request_id]

This means the logger will use the console backend to print log messages to the console using the specified (unstructured) format. The metadata option says that, of all the available metadata, only the request_id should be included in each log statement. Elixir's Logger adds a certain set of metadata to all log statements, and you can find the full list here. The request_id metadata, however, is not added by the Elixir Logger itself. Instead, it is added by the Plug.RequestId plug that you'll see added to your application's endpoint.ex file like this:

# endpoint.ex
plug Plug.RequestId

Let's take a quick peek at the rest of the default metadata by setting metadata: :all in config.exs:

config :logger, :console,
  format: "$time $metadata[$level] $message\n",
  metadata: :all

Now, when we fire up our app and visit the root path, we'll see something like this:

21:09:24.947 erl_level=info application=phoenix domain=elixir file=lib/phoenix/logger.ex function=phoenix_endpoint_stop/4 line=231 mfa=Phoenix.Logger.phoenix_endpoint_stop/4 module=Phoenix.Logger pid=<0.815.0> request_id=F1suRjwyEp4CjWAAAACi [info] Sent 200 in 2ms

That's a lot of information! Too much, in fact, for our development environment: these logs are so information-dense that they're difficult to parse with our human eyes. For that reason, you'll notice that the dev.exs file overrides this configuration to simplify it:

config :logger, :console,
  format: "[$level] $message\n"

Here, the timestamp and metadata are excluded entirely, and each log line contains just the level and the message.
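
With this simplified format, the requests from earlier would log as something like:

[info] GET /
[info] hello from the page controller
[info] Sent 200 in 39ms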

Now that we've seen where and how your Phoenix app configures the logger out-of-the-box, let's configure the logger to emit structured logs that conform to logfmt.

Using Logfmt in Phoenix Development

First up, we'll configure our development environment to use the console logger backend with a logfmt formatter. For this, we need the LogfmtEx dependency added to our application. Open up your mix.exs file and add it like this:

{:logfmt_ex, "~> 0.4"}

Go ahead and run mix deps.get to install the new dependency.

Next up, we'll configure our application's console logger to use this formatter. In dev.exs, add the following:

config :logger, :console, format: {LogfmtEx, :format}

Now, when you start up your application, you should see structured logs like this:

timestamp="21:18:36.828 2023-05-01" level=info message="Sent 200 in 45ms" request_id=F1suxrg8tRisBTYAAAmh

Notice that we haven't overwritten the :metadata key from the console backend config in config.exs, so we're still only logging the request_id data point from the metadata.
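
If you want more fields in those structured lines, you can extend the metadata list in the same config. Here's a sketch; the :user_id key is just an example of our own and will only appear if you set it yourself (for instance, with Logger.metadata/1):

config :logger, :console,
  format: {LogfmtEx, :format},
  metadata: [:request_id, :user_id]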

Now we've successfully emitted structured logs to the console, but this isn't good enough for production. To operate a Phoenix application in production, we need to emit our logs to a third-party service that allows us to search, aggregate, and analyze them at scale. This is where AppSignal comes in.

In the next section, we'll incorporate the AppSignal logger into our Phoenix application and configure it to use logfmt.

Using Logfmt with the AppSignal Logger in Production

If you don't already have AppSignal installed and configured in your application, you can follow the instructions here. Assuming you have an AppSignal account and organization configured, we're ready to set up the AppSignal logger backend by following the steps in AppSignal's Logging from Elixir docs. We'll use the Elixir integration rather than set up a custom source.

In our runtime.exs file, we'll specify the Appsignal.Logger.Backend, like this:

config :logger, :backends, [
  {Appsignal.Logger.Backend, [group: "phoenix", format: "logfmt"]}
]

Also, in runtime.exs, you'll want to set the application's log level to :info, so that all logs at the :info level or more severe are sent to AppSignal (while :debug logs are dropped):

config :logger, level: :info
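
If you'd prefer not to hard-code the level, here's a small sketch of making it configurable at release time (the LOG_LEVEL environment variable is just our own convention):

# Fall back to :info unless LOG_LEVEL is set, e.g. LOG_LEVEL=debug.
config :logger,
  level: String.to_existing_atom(System.get_env("LOG_LEVEL", "info"))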

This will forward our application's logs to our AppSignal account and group them under the "phoenix" group designation. It will treat the format of the log statements as logfmt and allow us to search and analyze our logs based on their attributes.

If you log into your AppSignal account, select your organization, and select "Logging" -> "Application", you'll see something like this:

AppSignal log example

Note that the log line includes the timestamp, hostname, level, and group (in this case, "phoenix"). Since we've specified "logfmt" for the Appsignal.Logger.Backend, each of these attributes is indexed and searchable. All we have to do is double-click any of these attributes on the log line itself, and the search filter on the top of the page will update and execute:

Log search

Let's take a closer look at one of these log lines now. The "hello from the page controller" log line is emitted explicitly in application code by the following line added to the PagesController:

# pages_controller.ex
require Logger

def home(conn, _params) do
  Logger.info("hello from the page controller")
  render(conn, :home, layout: false)
end

If we click the "+" icon next to the log line, we see all of the log line's metadata:

Expanded log line

You can see that the Appsignal.Logger.Backend emits all of the default metadata, along with the request_id. Since we're using logfmt, each data point is searchable. So, just like we did with the log attributes displayed in the log line itself, we can click on any metadata attributes to get them added to the search. If you click on the request_id_string attribute, you'll see the search query update to look like this:

Request ID Search

In fact, we can add additional metadata when we emit log lines (using calls to Logger), like this:

Logger.info("hello from the page controller", %{user_id: conn.assigns.current_user.id})

With that, the user_id attribute is displayed in the details of each log line and searchable just like the request_id metadata.
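
If you find yourself passing the same metadata on every call, you can also set it once for the current process with Logger.metadata/1. A small sketch, assuming current_user is already assigned on the conn as in the snippet above:

# Every subsequent Logger call in this process will carry user_id automatically.
Logger.metadata(user_id: conn.assigns.current_user.id)
Logger.info("hello from the page controller")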

So far, we've:

  • Configured our Phoenix application's logger to use the Appsignal.Logger.Backend.
  • Taught that backend to use logfmt.
  • Seen logs emitted and consumed into AppSignal, and got a feel for the basic logging search functionality there.

Before we continue exploring the AppSignal logging UI and some of its more useful features, we need to address a problem that's cropped up.

Framework Vs. Application Logs

Taking another look at the log statements our Phoenix app has emitted to AppSignal so far, we see something like this:

05-01-2023 10:21:04.383[application][INFO][SOPHIEs-MacBook-Pro-2.local][phoenix]Access StreamChatWeb.Endpoint at http://localhost:4000
+05-01-2023 10:21:04.363[application][INFO][SOPHIEs-MacBook-Pro-2.local][phoenix]Running StreamChatWeb.Endpoint with cowboy 2.9.0 at 127.0.0.1:4000 (http)
+05-01-2023 10:19:02.709[application][INFO][SOPHIEs-MacBook-Pro-2.local][phoenix]GET /asdfads
+05-01-2023 10:18:51.429[application][INFO][SOPHIEs-MacBook-Pro-2.local][phoenix]Sent 200 in 4ms
+05-01-2023 10:18:51.426[application][INFO][SOPHIEs-MacBook-Pro-2.local][phoenix]GET /
+05-01-2023 10:18:51.388[application][INFO][SOPHIEs-MacBook-Pro-2.local][phoenix]Sent 302 in 252ms
+05-01-2023 10:18:51.137[application][INFO][SOPHIEs-MacBook-Pro-2.local][phoenix]POST /users/log_in
+05-01-2023 10:18:50.201[application][INFO][SOPHIEs-MacBook-Pro-2.local][phoenix]CONNECTED TO Phoenix.LiveView.Socket in 16µs Transport: :websocket Serializer: Phoenix.Socket.V2.JSONSerializer Parameters: %{"_csrf_token" => "UQshLhEWIzUbAXgLQTE5bFNXRW8LZ00w3xVHuWYmHt1avWPZ4e6-97ut", "_live_referer" => "undefined", "_mounts" => "0", "_track_static" => %{"0" => "http://localhost:4000/assets/app.css", "1" => "http://localhost:4000/assets/app.js"}, "vsn" => "2.0.0"}
+05-01-2023 10:18:50.130[application][INFO][SOPHIEs-MacBook-Pro-2.local][phoenix]Sent 200 in 34ms
+05-01-2023 10:18:50.097[application][INFO][SOPHIEs-MacBook-Pro-2.local][phoenix]GET /users/log_in
+05-01-2023 10:18:48.255[application][INFO][SOPHIEs-MacBook-Pro-2.local][phoenix]Sent 200 in 689µs
+05-01-2023 10:18:48.254[application][INFO][SOPHIEs-MacBook-Pro-2.local][phoenix]GET /
05-01-2023 10:18:48.187[application][INFO][SOPHIEs-MacBook-Pro-2.local][phoenix]hello from the page controller
+05-01-2023 10:18:48.215[application][INFO][SOPHIEs-MacBook-Pro-2.local][phoenix]Sent 200 in 10ms
+05-01-2023 10:18:48.206[application][INFO][SOPHIEs-MacBook-Pro-2.local][phoenix]POST /users/log_out
05-01-2023 10:19:25.413[application][ERROR][SOPHIEs-MacBook-Pro-2.local][phoenix]** (RuntimeError) boom! iex:5: (file) (elixir 1.14.2) src/elixir.erl:309: anonymous fn/4 in :elixir.eval_external_handler/1 (stdlib 4.1.1) erl_eval.erl:748: :erl_eval.do_apply/7 (stdlib 4.1.1) erl_eval.erl:987: :erl_eval.try_clauses/10 (elixir 1.14.2) src/elixir.erl:294: :elixir.eval_forms/4 (elixir 1.14.2) lib/module/parallel_checker.ex:107: Module.ParallelChecker.verify/1 (iex 1.14.2) lib/iex/evaluator.ex:329: IEx.Evaluator.eval_and_inspect/3 (iex 1.14.2) lib/iex/evaluator.ex:303: IEx.Evaluator.eval_and_inspect_parsed/3 (iex 1.14.2) lib/iex/evaluator.ex:292: IEx.Evaluator.parse_eval_inspect/3 (iex 1.14.2) lib/iex/evaluator.ex:187: IEx.Evaluator.loop/1 (iex 1.14.2) lib/iex/evaluator.ex:32: IEx.Evaluator.init/4
+05-01-2023 10:18:47.578[application][INFO][SOPHIEs-MacBook-Pro-2.local][phoenix]Sent 200 in 7ms
+05-01-2023 10:18:47.572[application][INFO][SOPHIEs-MacBook-Pro-2.local][phoenix]GET /
+05-01-2023 10:18:46.901[application][INFO][SOPHIEs-MacBook-Pro-2.local][phoenix]Sent 200 in 52ms

There is a lot of noise in here mixed with some useful bits of information. You can see one error log that will very likely be helpful in debugging, as well as one instance of our "hello from the page controller" log statement. Those two log lines are very hard to detect amidst the many log lines emitted that indicate the server is up, the WebSocket channel is connected, certain web requests are served, and so on.

These noisy logs are emitted not from our application code, but from the Phoenix framework itself. They don't tell us anything interesting, and they are not an essential part of the overall observability picture of our application. Instead, they create noise and contribute to high log volume, making it harder to find and identify the log statements that we intend to emit from application code.

For this reason, framework logs should only be emitted at the FATAL level. Application logs, those emitted when exceptions are raised or by explicit calls to Logger, should be logged at the INFO level. Phoenix doesn't expose an easy way for us to do that, but we can build our own custom logger backend to solve this problem. Let's do that now.

Build A Custom Logger Backend

A logger backend is really nothing more than an event handler attached to Logger's :gen_event process, and it must conform to the following API:

  • Implement an init/1 function
  • Implement a handle_event/2 function

Here's the skeleton for our backend:

# lib/stream_chat/logger/backend.ex
defmodule StreamChat.Logger.Backend do
  def init({__MODULE__, options}) do
    {:ok, Keyword.merge([group: __MODULE__, format: :plaintext], options)}
  end

  def handle_event(
        {level, _gl, {Logger, message, _timestamp, metadata}},
        options
      ) do
    # ...
  end
end

The init/1 function will be called with whatever config we pass when configuring the logger backend in runtime.exs. It returns an {:ok, state} tuple.

The handle_event/2 function will be called with all the data we need to emit a log statement with the Appsignal.Logger, including the:

  • Level
  • Log message
  • Metadata
  • Internal state of the logger backend process that we established in init/1

Whenever the Phoenix framework itself calls the application's logger, the metadata passed to handle_event/2 will include an :application key pointing to a value of :phoenix. When logs are emitted by an explicit call to Logger, or by an exception raised in our application, the metadata keyword list will not include an :application key. We want to be able to write some code that says: "Only emit log statements coming from Phoenix if they are at or below the configured log level." We can do exactly that by adding some logic that looks for metadata with application: :phoenix.

Before we write that code, let's take a look at how we will configure this custom logger backend in our runtime.exs file:

config :logger, :backends, [
  {StreamChat.Logger.Backend,
   [
     format: "logfmt",
     application_config: [
       phoenix: [
         level_lower_than: :fatal
       ],
       default: [
         level_lower_than: :info
       ]
     ]
   ]}
]

Here, we include an :application_config key, which points to another keyword list. That keyword list contains a config for the :phoenix app along with a :default fallback. This data will be passed to the init/1 function and stored in our backend's state. So, when handle_event/2 is called later on, we can do the following:

  • Check if the metadata contains application: :phoenix.
  • If so, check the provided level and only log if it is at or below the level stored in application_config: [phoenix: [level_lower_than: level]] in state.

Here's what our function will look like:

@levels %{
  fatal: 0,
  error: 1,
  warn: 2,
  info: 3,
  debug: 4,
  trace: 5
}

def handle_event(
      {level, _gl, {Logger, message, _timestamp, metadata}},
      options
    ) do
  application = metadata[:application] || :default

  log_event(
    @levels[level],
    @levels[options[:application_config][application][:level_lower_than]],
    message,
    metadata,
    options
  )

  {:ok, options}
end

def log_event(level, level_lower_than, message, metadata, options)
    when level <= level_lower_than do
  Appsignal.Logger.log(
    level,
    to_string(metadata[:group] || options[:group]),
    IO.chardata_to_string(message),
    Enum.into(metadata, %{}),
    options[:format]
  )
end

def log_event(_level, _level_lower_than, _message, _metadata, _options),
  do: :noop

We pull the application config out of state and pass it, along with the provided level, to the log_event helper function. That function includes a guard clause that causes it to no-op if the provided level is higher than the level configured in runtime.exs for the given application. If we should in fact log, we do so by calling the Appsignal.Logger.log function directly. This allows us to emit logs to AppSignal at a specified level, for a given group, with a message and any metadata.

With this, we suppress any Phoenix framework logs less severe than FATAL, while still emitting application logs at the INFO level and more severe.
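
To make the guard clause comparison concrete, here's how the numbers from the @levels map play out under the configuration above:

# Framework log: Phoenix emits an :info line, tagged application: :phoenix.
#   @levels[:info] = 3, phoenix is configured with level_lower_than: :fatal = 0
#   3 <= 0 is false, so log_event/5 falls through to the :noop clause.
#
# Application log: our own Logger.info call, no :application key, so :default.
#   @levels[:info] = 3, default is configured with level_lower_than: :info = 3
#   3 <= 3 is true, so the log is forwarded to Appsignal.Logger.log.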

If you're hesitant to suppress framework logs in this way, remember that logs are just one piece of the observability puzzle. Rather than recording every instance of a GET / request, we are more likely to care about trends in serving such requests over time. We want the ability to drill down into the details of some such requests, especially where they are anomalous (for example, if a request experiences a high degree of latency).

This is where traces come in, along with other telemetry data, like metrics and exceptions.

In the next section, we'll look at how you can use AppSignal to correlate logs with traces and other types of telemetry, giving you a full picture of how your application is performing.

Correlating Log Events and Other Telemetry with AppSignal

Application logs are just one part of your overall observability posture. They should be used to record specific events of interest in your application's lifecycle and provide additional details and context in the event of an error or exception.

You should leverage application logs alongside other telemetry data like metrics, traces, and exception reports to fully observe your application in production.

Here's a very high-level overview of how you can view each type of telemetry data that your application emits:

  • Logs record that an event occurred (for example, a failed user log-in attempt).
  • Metrics record trends in event occurrences over time. For example, you can record the distribution of request latency to a particular endpoint, or monitor the capacity utilization of your system.
  • Traces record details of the lifecycle for a particular piece of code flow. For example, drill down into the trace of a specific request to understand possible causes of latency.
  • Exception reports show when an unexpected or "exceptional" error occurred. If your code encounters an exception, you can see the details of that exception, including the backtrace and full error message.

These telemetry signals become especially powerful when they can be correlated. For example, when viewing a specific exception report, you may want to gain more insight into the performance of your overall system at the time the exception occurred. In this case, the ability to view logs and/or traces from that same timeframe can provide you with the extra information you need to remedy bugs and improve overall application performance.

Monitoring Your Logs Using AppSignal

Correlating telemetry data can be a particularly hard problem to solve in the observability domain, especially when using different third-party services for things like logging, tracing, and exception reporting. AppSignal provides functionality for managing all of these types of telemetry and empowers you to correlate different types of telemetry data to get the most out of various signals. Let's take a look at an example now.

We'll use the "Errors" -> "Issue list" as the entry point for this exercise. I've selected an exception report from this list, displayed here:

Exception report

We can correlate this exception report with some traces emitted by the live view that encountered this error. Note that I've configured our application to emit LiveView traces to AppSignal by following the instructions here. By clicking on "samples" and selecting an exception sample to examine, we can see the following exception details:

Trace sample

Now, in addition to the original exception report, we can see other useful information, like the event that the live view was trying to handle when the exception occurred, the payload of that event, and the full exception backtrace.

Finally, we can use the Time Detective feature to explore the full range of telemetry data, including traces, that our application emitted around the time of this exception. If we click on the "Time Detective" action here:

Time Detective

We'll see something like this:

Time Detective Performance

The "Performance" view brings up some trace samples from the time period that we can drill down into. Here, we can see a list of traces that occurred around the time of the incident, including traces for the various LiveView lifecycle callbacks. From the details in the exception sample, we know this error occurred when an event handler in our live view tried to handle the "update" message. So, let's select the handle_event trace. We'll see something like this:

Event trace

You can also select the "Logs" tab to view logs from this time period.

The Time Detective tool can be accessed via the logging UI as well. Just expand a given log line and select "Time Detective" from the bottom of the expanded view:

Time Detective from Log

Then you'll have access to all of the telemetry data that can be correlated to your log statement based on timeframe. This could be especially useful if you see a high volume of a particular request being logged, or note an interesting error log. For any interesting or informative log statements, a deep dive into correlated telemetry data is just a click away.

Logging in Phoenix: A Few Final Tips

You're now ready to leverage logging as one part of the overall observability story in your Phoenix application. Just a few guiding rules to keep in mind when you do:

  • Prefer structured over unstructured logs so that you can query and analyze your application's logs with tools like AppSignal.
  • Suppress framework logs to the FATAL level and emit application logs at INFO.
  • Treat logs as just one part of your observability posture. Logs are most useful when correlated with other telemetry data, like exceptions and traces.

Wrap Up

We've covered a lot of ground in this post, so let's wrap up before you go.

We discussed the benefit of structured over unstructured logging, and configured our Phoenix application to use logfmt with the console logger backend. We then saw the benefits of structured logging in action in AppSignal.

Finally, we took our log analysis skills to the next level when we suppressed framework logs and explored how our meaningful application logs correlated with other telemetry data in AppSignal.

Now take on the world of Phoenix logging. Happy coding!

P.S. If you'd like to read Elixir Alchemy posts as soon as they get off the press, subscribe to our Elixir Alchemy newsletter and never miss a single post!
