Lori Baumgartner

Rails 6 Dependency Management with Service Objects

Do you have a Ruby on Rails app that manages integrations with multiple 3rd party APIs or external data sources?

Do you struggle with keeping your application's domain logic separate from these external data sources and flows?

Do you feel like you have to customize main flows in your app for each data source or API?

Can I grow a few extra arms to raise my hands twice for all this? ✋✋✋

Here's what I'm going to cover in this post:

  • The Problem: The Snowball Effect
  • The Solution: Rails Engines + Structured Service Objects
  • Two Months Later: How It's Been Going
  • Recommended Reading

The Problem: The Snowball Effect

I work on an eCommerce app that takes external data from our "sources" and makes that data available to our customers via a catalog. The value of our product is that it takes the data from multiple sources and combines it into one catalog.

To simplify the examples to come, let's think of the product as an online grocery store. Each data source is the maker of an item you can purchase in a grocery store. So you have farmers that supply the produce and cereal makers who supply the cereal, and the people who make Eggo waffles, etc.

Where we got into trouble with our app is that (perhaps unsurprisingly) each data source formats its data differently. They name things differently, they have different ways of matching up a SKU (an individual thing you can buy, like a snack-size pack of Oreos vs. the family-size pack of Oreos), and they just think about The Thing They Are Selling differently from other suppliers.

We started out with what we call "integrations": a directory of namespaced code that handles fetching the external data and translating it into the common language of our Grocery Store offerings. It looks a little like this:

 app/
  |__ integrations/
    |__ farmers/
      |__ fetch_data.rb
      |__ format_data.rb
      ...
    |__ cereal_makers/
      |__ fetch_data.rb
      |__ format_data.rb
      ...
  |__ catalog/
    |__ update_skus.rb
    ...
  ...
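
To make that concrete, here's a rough, hypothetical sketch of what a pair of those files might contain (the class names, API URL, and field names here are made up for illustration, not our real code):

# app/integrations/farmers/fetch_data.rb (hypothetical sketch)
require 'net/http'
require 'json'

module Farmers
  class FetchData
    API_URL = 'https://farmers.example.com/api/v1/produce'.freeze

    # Pull the raw produce payload from this source's API
    def call
      response = Net::HTTP.get(URI(API_URL))
      JSON.parse(response)
    end
  end
end

# app/integrations/farmers/format_data.rb (hypothetical sketch)
module Farmers
  class FormatData
    # Translate this source's naming into our catalog's common language
    def call(raw_items)
      raw_items.map do |item|
        {
          sku: item['product_code'],
          name: item['display_name'],
          price_cents: item['unit_price_cents'] || 0 # default for missing data
        }
      end
    end
  end
end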

At first, this was great! We took in the new data, structured it how we wanted, filled in missing data with default values, sanitized data - no problemo. But then we needed to add another integration. And - guess what?! - they didn't have an API. So now we had to make a CSV upload process that handles fetching this new integration's data. But we still wanted the same output: a catalog of consistent data across all suppliers.
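
The nice thing is that the intake mechanism can change while the output contract stays the same. Here's a hypothetical sketch of a CSV-based fetch step (the WaffleMakers name and the CSV headers are made up for illustration):

# app/integrations/waffle_makers/fetch_data.rb (hypothetical sketch)
require 'csv'

module WaffleMakers
  class FetchData
    # No API here -- read an uploaded CSV instead, but still hand back
    # an array of raw item hashes like the API-based integrations do
    def call(csv_path)
      CSV.read(csv_path, headers: true).map(&:to_h)
    end
  end
end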

And then we wanted to add yet another integration...and our minds sort of exploded, to be honest 🤯 . You had to be intensely familiar with each integration's incoming data structure and know THEIR domain-specific context like whether they use an API or CSVs to send us their data. And you had to support multiple intake processes that could all break in different ways. And we needed to funnel all this data to one, common process to populate our catalog.

The Solution: Rails Engines + Structured Service Objects

I'm not going to spend much time talking about Rails Engines (that deserves a post of its own!) but there's pretty good documentation out there about it if you want to learn more. The examples to follow don't require you to also use engines.

Isolate Code Related to External Logic

So the first change we committed to was that each integration with a different data source would have its import code live inside a Rails Engine (we call them Components). A hand-wavey definition of an engine is a Ruby gem that is internal to your app. It has access to the main app's code, but has its own routing, test files, services, etc. For my team, this was an easy way to draw a line in the sand and say "if the behavior I'm working on belongs to the data source and not our core app, it goes into the engine".
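
If you want to try this yourself, a mountable engine is generated with something like rails plugin new components/farmers --mountable (the path is up to you), and the heart of it is a small engine class like this one from the standard generated layout:

# components/farmers/lib/farmers/engine.rb
module Farmers
  class Engine < ::Rails::Engine
    # Keeps the engine's models, controllers, and routes namespaced under
    # Farmers so they don't collide with the core app
    isolate_namespace Farmers
  end
end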

Our app structure doesn't look much different now:

 app/
  |__ components/ # moved these integrations into a "components" directory
    |__ farmers/
      ...
    |__ cereal_makers/
      ...
  |__ catalog/
    |__ update_skus.rb
    ...
  ...

but our main "Core" app Gemfile now looks like this:

# Gemfile
...
gem 'farmers', path: 'components/farmers'
gem 'cereal_makers', path: 'components/cereal_makers'
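
If an engine exposes its own routes (like the lettuces controller further down), the core app mounts them as usual. This assumes the standard mountable-engine layout, and the mount points below are just examples:

# config/routes.rb (in the core app)
Rails.application.routes.draw do
  mount Farmers::Engine, at: '/farmers'
  mount CerealMakers::Engine, at: '/cereal_makers'
  # ...the core app's own routes
end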

Now that we've pulled our integration code out of our core app, let's get down to the nitty gritty with dependency management inside our components!

Service Objects & Dependencies

To establish this pattern, we needed to commit to some non-negotiable expectations:

  • Service objects will always return a "response object" with a specific shape
  • Service objects maintain their own dependencies
  • Service objects should handle their own failure as well as surface the failures of any dependencies

Here's what our service should look like at the end of this process:

# app/components/farmers/app/services/farmers/lettuces/create_service.rb
module Farmers
  class Lettuces
    class CreateService < BaseService
      def self.build
        new(Farmers::Produce::InventoryManagerService.build)
      end

      def initialize(manage_inventory_service)
        @manage_inventory_service = manage_inventory_service
      end

      def call(lettuce_params)
        @lettuce_params = lettuce_params

        lettuce, created = create_lettuce
        inventory = @manage_inventory_service.call(lettuce)

        OpenStruct.new(
          created?: created,
          lettuce: lettuce,
          inventory_updated?: inventory.updated?,
          errors: (lettuce.errors.full_messages + Array(inventory.errors)).join(', ')
        )
      end

      private

      def create_lettuce
        lettuce = Lettuce.new(@lettuce_params)
        if lettuce.save
          return [lettuce, true]
        end
        [lettuce, false]
      end
    end
  end
end

Expectation: Service objects will always return a "response object" with a specific shape
This is a personal decision for your team based on your own experiences and preferences. We have a completely separate frontend app and have struggled with error handling before. So, for us, we wanted our controllers to always return status: :ok but with JSON objects that include errors if they exist.

We decided upon a structure with these guidelines:

  • The mutated/created record is returned (usually with a key of the model name, like lettuce: lettuce)
  • There is a boolean status key that indicates if the service completed its job or not (could be success?: true or created?: false)
  • There is an errors key that returns the error message(s) or nil

We used an OpenStruct object, which is mutable (meaning a service could set success?: true on the response and something elsewhere could overwrite it with success?: false). This hasn't been an issue for our team, but it is good to keep in mind.
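
In the controller, that response object maps pretty directly onto the JSON we send back. Here's a rough sketch of what a create action's render could look like (the payload keys are just an example):

# app/components/farmers/app/controllers/farmers/lettuces_controller.rb (sketch)
module Farmers
  class LettucesController < ApplicationController
    def create
      # lettuce_params is your usual strong-params method (omitted here)
      results = Lettuces::CreateService.build.call(lettuce_params)

      # Always :ok -- the JSON body tells the frontend what actually happened
      render status: :ok, json: {
        created: results.created?,
        lettuce: results.lettuce,
        errors: results.errors
      }
    end
  end
end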

Service objects maintain their own dependencies
This falls under the larger consideration of "how the heck do I manage all this state?!".

# app/components/farmers/app/controllers/farmers/lettuces_controller.rb
module Farmers
  class LettucesController < ApplicationController
    def create
      results = Lettuces::CreateService
        .build
        .call(lettuce_params)
      ...
    end
    ...
  end
end

Inside the service itself, here's the difference between reaching for a dependency inline (bad) and injecting it through build (good):

# app/components/farmers/app/services/farmers/lettuces/create_service.rb
module Farmers
  class Lettuces
    class CreateService < BaseService
      # bad
      def call(lettuce_params)
        @lettuce_params = lettuce_params

        lettuce, created = create_lettuce
        inventory = Farmers::Produce::InventoryManagerService
                      .build
                      .call(lettuce)
        ...
      end

      ...
    end
  end
end

# app/components/farmers/app/services/farmers/lettuces/create_service.rb
module Farmers
  class Lettuces
    class CreateService < BaseService
      # good
      def self.build
        new(Farmers::Produce::InventoryManagerService.build)
      end

      def initialize(manage_inventory_service)
        @manage_inventory_service = manage_inventory_service
      end

      def call(lettuce_params)
        @lettuce_params = lettuce_params
        ...
      end

      ...
    end
  end
end
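
Service objects should handle their own failure as well as surface the failures of any dependencies
This last expectation falls out of the same response shape. Here's a rough sketch of how call might do that (the rescue behavior is illustrative, not our exact code):

# inside Farmers::Lettuces::CreateService (sketch)
def call(lettuce_params)
  @lettuce_params = lettuce_params

  lettuce, created = create_lettuce
  inventory = @manage_inventory_service.call(lettuce)

  OpenStruct.new(
    created?: created,
    lettuce: lettuce,
    inventory_updated?: inventory.updated?,
    # surface our own validation errors alongside the dependency's errors
    errors: (lettuce.errors.full_messages + Array(inventory.errors)).join(', ')
  )
rescue StandardError => e
  # the service handles its own failure instead of raising it raw to the caller
  OpenStruct.new(created?: false, lettuce: nil, inventory_updated?: false, errors: e.message)
end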

Two Months Later: How It's Been Going

I can honestly say we've been loving this change! We've started converting over old service objects and creating all new ones in this style. It's made it so much easier to know:

  • what is going to be changed/done when I call this service
  • how I will know if it succeeded or failed

It's made us become much more thoughtful about single-purpose services that don't need to be so generalized and vast. They can be specific and accurate to one case and then put together to handle generalized cases.

Recommended Reading

This blog post by Adam Niedzielski is the main influence on how we handle dependency injection in Service Objects.

  • It had the best examples of multiple services that rely on each other
  • There is a GitHub repo - with tests! - that gives a more in-depth view of this approach

This blog post by Dave Copeland at Stitch Fix has some really great narrative about handling this same problem in a really big app with lots of developers working on the project.

  • Includes some insight into how they created a gem to enforce immutable (meaning it can't be changed after the fact) service object responses
  • Is opinionated (in a good way) about good vs. bad practices with service objects

This blog post on Toptal by Amin Shah Gilani is actually one of the first articles I read when searching for "a better way" of handling service objects. It ended up being a different path than the one we committed to, but it's still a good generalized read about Rails Service Objects.

  • Includes opinions on where a service object should live in your app
  • Great example code
  • Thoughts on what a service object should return

Top comments (5)

Dan Fockler

Hi! This looks awesome. How has it been working with Rails Engines?

I was looking at them for a project in the past, but decided not to do it at the time. It seems like there is some boilerplate needed to set them up, but they seem like a great idea.

Lori Baumgartner

Great question! Maybe I need to write about that next 😉.

I will be honest: engines have had their ups and downs. In my workplace, we introduced 3 engines into our app at once. One of my coworkers documented a list of all the small modifications you have to make on top of the bootstrapped engine from running that plugin new command. I would recommend that if you plan on adding more than one engine - it's a great resource.

So far the pros have been:

  • You have to be more cognizant of dependencies - which was why we decided to go in this direction. Previously it was really easy for us to leverage tightly related code that didn't need to be that tightly knit together. So plugins have definitely made us better about thinking WHERE logic should live and HOW you access it between the core app and the engine.
  • Having an isolated environment does make development on the engines easier. It's nice to be in a box and I think it makes it easier to think about the purpose of that engine vs. if it was inside the core app. Because of this, we actually ended up deleting one of the engines we thought we needed and left that code in the core app. That turned out to be a good decision to NOT use an engine.

Some of the challenges:

  • Getting them set up just right has been a bit tough; knowing when the engines DO have access to the core app and when they don't can sometimes be tricky. So things like API requests occurring across the engine and the core app and testing setup took some time to figure out.

Overall I'm still happy with our decision to use them. But they are definitely not a tool I would recommend 100% of the time to 100% of projects. They're a very niche tool that works well in a specific context.

Matteo Rossi

At the current time, it's just a matter of typing rails plugin new blorgh --mountable and you're ready to code your engine.

Dharshan Bharathuru

Great post. I am still a little confused about components (I'm still exploring when I should opt for engines/components). Will try exploring the links shared, thanks for that.

I personally don't prefer returning a 200 status code for every response. It's good practice to return the respective status code to the FE. And in my view, instead of constructing the response with an OpenStruct, constructing a PORO is great.

Lori Baumgartner

Thanks for commenting! Looks like there's a general curiosity about when to use Rails engines. I'm working on a post to dive deeper into the pros/cons I've personally experienced.

Totally agree about the status codes! This is unique to the way our frontend app is built and its own unique error handling. We ended up using an OpenStruct based on one of the tutorials I linked at the bottom. Looking back, we could have used many other shapes or tools, but I've been happy with the way OpenStructs enforce a specific kind of notation and accessibility that we didn't get as reliably from POROs ("plain old Ruby objects"). But that is definitely a decision you can make and customize to the needs of your app!