Dependency on input and output (IO) resources like databases, APIs, and user input is inevitable in software development and application design. Where you choose to include these touchpoints with IO in your code has a substantial impact on the long-term reuse and extensibility of your code, as well as your ability to test it. Interfaces provide a means of hiding the implementation details required by IO interactions so that you can write the majority of your code independent of concerns about where data is coming from or where it is going.
To illustrate how functions and methods can be refactored from knowing everything about your application's IO to knowing nothing about it, we will create a small application that calculates the total price of a sales order from a list of line items. The order information is retrieved by the application from a locally stored JSON file. Each line item is represented by a struct with a description and a price.
type LineItem struct {
	Description string  `json:"description"`
	Price       float64 `json:"price"`
}
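For illustration, the contents of data/order_items.json might look something like the following. The item names and prices here are assumptions chosen so that the total matches the expected value of 3476 used in the tests later in this post; the actual file contents are not shown in the original.

```json
[
  {"description": "Leather Recliner", "price": 2499},
  {"description": "End Table", "price": 249},
  {"description": "Floor Lamp", "price": 728}
]
```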
As a first attempt we can write a function that knows everything about where the order details are stored and how to retrieve them. This function reads in the file, deserializes the contents, and then iterates over each line item to produce the total cost.
func CalculateOrderTotal() float64 {
	filePath := "data/order_items.json"
	file, err := ioutil.ReadFile(filePath)
	if err != nil {
		log.Fatalf("unable to read file %s", filePath)
	}

	var lineItems []LineItem
	err = json.Unmarshal(file, &lineItems)
	if err != nil {
		log.Fatalf("unable to parse json for file %s", filePath)
	}

	var orderTotal float64
	for _, lineItem := range lineItems {
		orderTotal += lineItem.Price
	}
	return orderTotal
}
This approach is problematic for a number of reasons. The first red flag is that this function is not focused on doing just one thing. It does more than calculate order totals as the name implies; it is reading in files and deserializing JSON as well.
The function is also restricted to retrieving the order information from one specific source. This could be addressed by passing in the file path as an argument, but what happens if requirements change and order information must be retrieved from additional forms of persistent storage, like a relational database or blob storage in the cloud? This first attempt at implementing CalculateOrderTotal does not allow us to reuse the order total calculation logic for anything other than JSON files.
And there's one more problem with this function. It's difficult to test.
func TestCalculateOrderTotal(t *testing.T) {
	var expected float64 = 3476
	actual := CalculateOrderTotal()
	if actual != expected {
		t.Fatalf("expected %.2f, actual %.2f", expected, actual)
	}
}
Running this code actually causes ioutil.ReadFile to return a file-not-found error, because the hardcoded file path is relative and the test directory uses a different root path when resolving relative file paths than the one used by the application's main function. Again, this could be resolved by passing in the file path as an argument, but the tests would still be dependent on the file system, and CalculateOrderTotal would still be responsible for too many things and dependent on a very specific data retrieval implementation.
We can move a step in the right direction by extracting the file reading and JSON deserializing code into a method of a struct designed specifically to deserialize JSON from a file.
type JsonOrderProvider struct {
	FilePath string
}

func (r JsonOrderProvider) GetLineItems() []LineItem {
	file, err := ioutil.ReadFile(r.FilePath)
	if err != nil {
		log.Fatalf("unable to read file %s", r.FilePath)
	}

	var lineItems []LineItem
	err = json.Unmarshal(file, &lineItems)
	if err != nil {
		log.Fatalf("unable to parse json for file %s", r.FilePath)
	}
	return lineItems
}
Now the file path is no longer hardcoded, making it easier to change which files are read. CalculateOrderTotal can be rewritten to accept this new struct as an argument and is now focused on calculating order totals rather than dealing with local file systems.
func CalculateOrderTotal(provider JsonOrderProvider) float64 {
	lineItems := provider.GetLineItems()

	var orderTotal float64
	for _, lineItem := range lineItems {
		orderTotal += lineItem.Price
	}
	return orderTotal
}
Testing is much easier as well. We can now adjust the relative file path so that the test project is able to locate the JSON file, and we can even use several JSON files to test our function with a variety of inputs.
func TestCalculateOrderTotal(t *testing.T) {
	var expected float64 = 3476
	provider := JsonOrderProvider{FilePath: "../data/order_items.json"}
	actual := CalculateOrderTotal(provider)
	if actual != expected {
		t.Fatalf("expected %.2f, actual %.2f", expected, actual)
	}
}
However, this solution still isn't ideal, because both our tests and the CalculateOrderTotal function are dependent on order information being persisted through the local file system. If our requirements for persistence changed and we needed to calculate order totals from information stored in a SQL database or a cloud datastore, we would not only need to write a new provider struct but also a second CalculateOrderTotal function with a different parameter type.
Both problems can be solved by introducing an interface.
type OrderProvider interface {
	GetLineItems() []LineItem
}
By creating an OrderProvider interface we are able to describe a contract about the type of behavior we expect without knowing anything about the implementation details. JsonOrderProvider already satisfies the requirements of this interface (in Go, interface implementation is implicit), so let's create a new provider, one that serves order information from an in-memory list of line items, to improve our testing capabilities.
type InMemoryOrderProvider struct {
	LineItems []LineItem
}

func (r InMemoryOrderProvider) GetLineItems() []LineItem {
	return r.LineItems
}
We also need to change the parameter of CalculateOrderTotal to be the OrderProvider interface type instead of the concrete type, JsonOrderProvider.
func CalculateOrderTotal(provider OrderProvider) float64 {
	lineItems := provider.GetLineItems()

	var orderTotal float64
	for _, lineItem := range lineItems {
		orderTotal += lineItem.Price
	}
	return orderTotal
}
Because our previous parameter type, JsonOrderProvider, already satisfies the GetLineItems method of the OrderProvider interface, the body of CalculateOrderTotal does not need to change, only the function's parameter type. Everything else works the same.
Now let's see if this shift from concrete type to interface type improves our testing capabilities.
var testCases = []struct {
	provider InMemoryOrderProvider
	expected float64
}{
	{
		provider: InMemoryOrderProvider{
			LineItems: []LineItem{
				{Description: "A", Price: 85},
				{Description: "B", Price: 15},
			},
		},
		expected: 100,
	},
	{
		provider: InMemoryOrderProvider{
			LineItems: []LineItem{
				{Description: "A", Price: 35.25},
				{Description: "B", Price: 95.5},
			},
		},
		expected: 130.75,
	},
}

func TestCalculateOrderTotal(t *testing.T) {
	for _, test := range testCases {
		if actual := CalculateOrderTotal(test.provider); actual != test.expected {
			t.Fatalf("expected %.2f, actual %.2f", test.expected, actual)
		}
	}
}
There is some additional code in this test: a list of anonymous structs with provider and expected output fields. This list serves as a suite of test cases to be used by the test method TestCalculateOrderTotal. This table-driven style is a common convention in Go unit tests and is an alternative to writing a separate test method for each case.
Testing conventions aside, we can now define several test cases in the same file where our test method is defined, without any dependency on the local file system. The function CalculateOrderTotal no longer contains implementation details for retrieving the data, nor any hint as to where the data is coming from. The OrderProvider interface ensures that if the method GetLineItems is invoked, CalculateOrderTotal will receive a list of LineItem structs, and that is the only guarantee this function needs to perform its intended work.
In our main function we can prove that CalculateOrderTotal works the same whether we use a JsonOrderProvider or an InMemoryOrderProvider.
func main() {
	jsonProvider := JsonOrderProvider{FilePath: "data/order_items.json"}
	orderTotal := CalculateOrderTotal(jsonProvider)
	fmt.Printf("Your total comes to %.2f\n", orderTotal)

	inMemoryProvider := InMemoryOrderProvider{
		LineItems: []LineItem{
			{Description: "Leather Recliner", Price: 2499},
			{Description: "End Table", Price: 249},
		},
	}
	orderTotal = CalculateOrderTotal(inMemoryProvider)
	fmt.Printf("Your total comes to %.2f\n", orderTotal)
}
At some point you may have been wondering, "Why not pass in the list of line items as an argument rather than going through all this trouble to create an interface?" For a function like CalculateOrderTotal, that is likely how you would write it in a real application. However, this does not remove the need for the data to be retrieved. By refactoring CalculateOrderTotal to take a list of line items instead of an OrderProvider interface, you are deferring the responsibility of data retrieval to its caller.
It might make sense for the calling function to know about file systems and to use functions like ioutil.ReadFile, but in most applications you will have a middle layer of code that needs to retrieve data that ultimately comes from an IO resource, while still deferring the specifics of which resource it is coming from to the edges of the application, keeping the core business logic free of such details.
Using interfaces to abstract away the specifics of your IO operations provides a way to push those IO implementation details further out, keeping them close to the edges while still granting functions closer to the middle the ability to influence what data is retrieved and when. This flexibility allows you to design solutions that are adaptable to change and testable in isolation from IO dependencies.
Interested in learning more? I've written a second blog post on abstracting away the implementation details of application IO using higher order functions.
All example code from both blog posts can be found here.