DEV Community

Matt Culpepper

Where should API calls to third-party services live in a Ruby web app?

I saw this question asked on r/rails on Reddit.

Depending on when you asked me over the past 10 years, I probably would've had 10 different answers.

But lately, I have a new answer to this question and it was inspired by a post called Screaming Architecture by Uncle Bob over at the Clean Coder Blog. Again props to my main man @thorncp for telling me about this article.

The post can be summarized like this: each domain or set of business logic should have its own top-level directory in your repo. For a to-do list app, for example, you'd probably have something like:

  • /items
  • /lists
  • /sessions
  • /users

I'm not sure I believe this architecture would be great for a full application. Hell, maybe it would, but it would be entirely too much work to even try to refactor the legacy codebase I'm working in to something like this.

However, I did find a great use-case for it: external third-party APIs. We recently moved ALL of the code for a couple of our third-party integrations to their own directory under lib/.

In my opinion, this ended up being astonishingly better than how we'd organized the integrations beforehand.

What's it look like, anyway?

Let's say I have a third-party integration called Blockfires. Here's how we organized their code:

# call the third party's event creation endpoint
lib/blockfires/services/event/create.rb

# call the third party's transaction index and load them in the background
lib/blockfires/workers/transactions/historical_load.rb

# parse an incoming webhook
lib/blockfires/webhooks/transactions/parse.rb

# the object the previous line parsed
lib/blockfires/webhooks/transaction.rb
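To make the layout concrete, here's a minimal sketch of what the webhook files might contain. The class bodies, field names, and payload shape are all my own assumptions for illustration; the post only specifies the file paths.

```ruby
require "json"

module Blockfires
  module Webhooks
    # lib/blockfires/webhooks/transaction.rb
    # Plain value object a parsed webhook deserializes into.
    # (Fields here are hypothetical.)
    Transaction = Struct.new(:id, :amount_cents, :status, keyword_init: true)

    module Transactions
      # lib/blockfires/webhooks/transactions/parse.rb
      # Turns a raw webhook body into a Blockfires::Webhooks::Transaction.
      class Parse
        def self.call(raw_body)
          data = JSON.parse(raw_body)
          Transaction.new(
            id: data["id"],
            amount_cents: data["amount_cents"],
            status: data["status"]
          )
        end
      end
    end
  end
end
```

A controller would then just call `Blockfires::Webhooks::Transactions::Parse.call(request.body.read)`, and the namespacing maps one-to-one onto the directory layout above.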

Now, when I'm looking for Blockfires code, I know exactly where it's gonna be. No looking around wondering if we have a certain type of worker, what the webhook file is named, or what API calls are available. It's all right there: in its own lib directory.

Personally, it's 2022, and I'm loving it. I don't think this is limited to Ruby apps. I think you could apply it to a lot of different types of codebases and languages.

If you ask me about this in a couple years, I may have deleted the lib folder and thrown my computer in the trash can. Who knows? Opinions change so fast in software, but right now, I think this is a good one.

Top comments (3)

Paul Scarrone

What do you think about moving your third-party integrations into gems? In my time in the Java/Kotlin world, supporting microservice ecosystems, we would externalize all of our clients. This also meant that contracts between domains we owned were externalized as shared clients.

I think moving domains to lib is the first step in this process 👍. It helps provide a distinction between concerns. It does make it harder to keep them from having unnecessary coupling, but that can be trained. Once you reach the point where multiple concerns are too much for a single app, it's much easier to move those into encapsulated services.

When you package an integration and create an artifact, you are also encapsulating the dependency's security constraints. It can be scanned, approved, and deposited in a self-owned artifact vault so its trust can be assured even if it's not shared.

Matt Culpepper

The problem with that approach is that there's a lot of business logic that depends on other business logic the gem couldn't possibly know about.

e.g. the Blockfires gem would have to know that each transaction is used by models A, B, C

If we move models A, B, C to the gem, then the gem's gotta know about the database setup, right?

All of our internal, private gems have thin wrappers in the monolith for this reason. There are interactions in the monolith that each gem doesn't know about.
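A rough sketch of what that thin-wrapper pattern could look like. `BlockfiresClient` here is a stand-in for the private gem (its API is invented for illustration); the wrapper in the monolith is where the app-only knowledge lives, namely which business logic reacts to a transaction:

```ruby
# Stand-in for the private gem's client; the real gem would make the
# API call and parse the response. All names here are hypothetical.
module BlockfiresClient
  Transaction = Struct.new(:id, :amount_cents, keyword_init: true)

  def self.fetch_transaction(attrs)
    Transaction.new(id: attrs["id"], amount_cents: attrs["amount_cents"])
  end
end

# Thin wrapper living in the monolith. The gem only knows how to talk
# to Blockfires; the app registers the business logic (models A, B, C)
# that should react to each transaction, which the gem can't know about.
module Blockfires
  class TransactionSync
    def self.handlers
      @handlers ||= []
    end

    def self.call(raw_attrs)
      txn = BlockfiresClient.fetch_transaction(raw_attrs)
      handlers.each { |handler| handler.call(txn) }
      txn
    end
  end
end
```

The gem stays ignorant of the database and the models; the monolith-side wrapper is the single place where the fan-out to app concerns is wired up.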

Third-party integrations as gems may work better in a microservices architecture, though!

Paul Scarrone

I agree, but only if the domain delineation is based on business logic. Your integration with Blockfires should be concerned with communication.

Consider an event-driven system where each model and its associated business logic is a domain distributed to its own service cluster. Each service knows what model it maintains, and some other system will probably have to manage the eventual consistency of those operations. But there are some interactions all these services share related to Blockfires, and those should be delegated to a client. The hope is that next time a new model crops up, we can stand up a new domain that inherits how your org uses Blockfires.

The distinction is treating your toolchains like internal platforms. You don't wanna share models, but you wanna make expanding business behavior easier to onboard.