Chris White

Python AWS testing with boto stubber

Making sure code does what it's supposed to is a crucial part of the development process. That goes double for AWS code, which can incur unexpected costs if something goes wrong. Here I'll show how boto's Stubber can help with testing such code.

The Initial Setup

The boto3 library used for making AWS calls in Python actually has built-in functionality to help with testing. Since the stubber wraps around a client instance, it's recommended to instantiate clients via a dedicated function/method to make them more easily mockable. Take for example a simple cleanup script that terminates EC2 instances:

import boto3


class InstanceCleaner(object):
    def __init__(self) -> None:
        self.client = self.create_client()

    @staticmethod
    def create_client() -> boto3.client:
        return boto3.client('ec2', region_name='us-west-2')

    def terminate_instances(self) -> None:
        instances = self.client.describe_instances()
        instance_list = []
        if not instances['Reservations']:
            return

        for instance in instances['Reservations'][0]['Instances']:
            instance_list.append(instance['InstanceId'])

        self.client.terminate_instances(InstanceIds=instance_list)

So nothing complicated: it's just collecting a list of instance IDs and passing them on to terminate_instances. Now I'll make a test suite for this:

from instance_cleanup import InstanceCleaner
from botocore.stub import Stubber

mock_describe_instances = {
    'Reservations': [
        {
            'Instances': [
                {
                    'InstanceId': 'i-1234567890abcdef0',
                    'State': {
                        'Code': 16,
                        'Name': 'running'
                    }
                },
            ],
        }
    ]
}

mock_terminate_instances = {
    'TerminatingInstances': [
        {
            'CurrentState': {
                'Code': 32,
                'Name': 'shutting-down',
            },
            'InstanceId': 'i-1234567890abcdef0',
            'PreviousState': {
                'Code': 16,
                'Name': 'running',
            },
        },
    ]
}

def test_instance_cleanup():
    instance_cleanup = InstanceCleaner()
    with Stubber(instance_cleanup.client) as stubber:
        stubber.add_response('describe_instances', mock_describe_instances, {})
        stubber.add_response('terminate_instances', mock_terminate_instances, {'InstanceIds': ['i-1234567890abcdef0']})
        instance_cleanup.terminate_instances()

So here there are two JSON mocks to handle the responses for both describe_instances and terminate_instances. In the case of mock_describe_instances it's not using the full JSON structure available for the call. While you can use this to slim the mocks down, there are cases where boto can complain about specific fields missing from the return structure. It is possible to ignore such issues, but I generally like to stay with what the API expects. That way if the tests fail due to a new required field in the results, I can investigate to see if other changes may affect my tests. Note that I'm using a context manager here for the stubbed client:

with Stubber(instance_cleanup.client) as stubber:

The reason is that otherwise I'd have to call stubber.activate() to register the responses, which is easy to miss. Using the context manager instead lets the library handle it transparently for me.
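
For illustration, here's roughly what the context manager does behind the scenes. This is a sketch of the manual equivalent using Stubber's activate() and deactivate() methods:

stubber = Stubber(instance_cleanup.client)
stubber.add_response('describe_instances', mock_describe_instances, {})
stubber.add_response('terminate_instances', mock_terminate_instances, {'InstanceIds': ['i-1234567890abcdef0']})
# deactivate() should run even if the call under test raises
stubber.activate()
try:
    instance_cleanup.terminate_instances()
finally:
    stubber.deactivate()

A quick run of the tests gives the following: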

> python -m pytest
=========================================================================================================================================================================== test session starts ===========================================================================================================================================================================
platform win32 -- Python 3.9.13, pytest-7.3.2, pluggy-1.0.0
rootdir: [redacted]
collected 1 item

tests\test_instance_cleanup.py .                                                                                                                                                                                                                                                                                                                                     [100%]

============================================================================================================================================================================ 1 passed in 0.48s ============================================================================================================================================================================ 
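
As a side note, the stubber can also verify that every queued response was actually consumed. If terminate_instances silently skipped one of the calls, the test above would still pass; calling assert_no_pending_responses() at the end catches that. A small variation on the same test:

def test_instance_cleanup():
    instance_cleanup = InstanceCleaner()
    with Stubber(instance_cleanup.client) as stubber:
        stubber.add_response('describe_instances', mock_describe_instances, {})
        stubber.add_response('terminate_instances', mock_terminate_instances, {'InstanceIds': ['i-1234567890abcdef0']})
        instance_cleanup.terminate_instances()
        # Fail the test if any queued response was never used
        stubber.assert_no_pending_responses()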

Stubber Gotchas

This allows for testing AWS boto calls without actually connecting to the AWS API itself. It's also important to note that the order of the add_response calls is crucial. For example, if you changed the test to do this:

def test_instance_cleanup():
    instance_cleanup = InstanceCleaner()
    with Stubber(instance_cleanup.client) as stubber:
        stubber.add_response('terminate_instances', mock_terminate_instances, {'InstanceIds': ['i-1234567890abcdef0']})
        stubber.add_response('describe_instances', mock_describe_instances, {})
        instance_cleanup.terminate_instances()

Then you'll get an error when trying to run the test:

>           raise StubResponseError(
                operation_name=model.name,
                reason=f'Operation mismatch: found response for {name}.',
            )
E           botocore.exceptions.StubResponseError: Error getting response stub for operation DescribeInstances: Operation mismatch: found response for TerminateInstances

Parameters also need to match what the call is made with. For example:

def test_instance_cleanup():
    instance_cleanup = InstanceCleaner()
    with Stubber(instance_cleanup.client) as stubber:
        stubber.add_response('describe_instances', mock_describe_instances, {})
        stubber.add_response('terminate_instances', mock_terminate_instances, {})
        instance_cleanup.terminate_instances()

This will error out:

E           botocore.exceptions.StubAssertionError: Error getting response stub for operation TerminateInstances: Expected parameters:
E           {},
E           but received:
E           {'InstanceIds': ['i-1234567890abcdef0']}

..\.venv\lib\site-packages\botocore\stub.py:392: StubAssertionError
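
If there's a parameter whose exact value you don't care about, botocore ships a wildcard for this: botocore.stub.ANY matches any value for that parameter while still requiring the parameter to be present. A sketch of the test using it:

from botocore.stub import ANY, Stubber

def test_instance_cleanup_any_ids():
    instance_cleanup = InstanceCleaner()
    with Stubber(instance_cleanup.client) as stubber:
        stubber.add_response('describe_instances', mock_describe_instances, {})
        # ANY matches whatever InstanceIds value the code passes in
        stubber.add_response('terminate_instances', mock_terminate_instances, {'InstanceIds': ANY})
        instance_cleanup.terminate_instances()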

State Mocking

This is because the core code calls terminate_instances with a specific list of instance IDs, which won't match the empty parameters dict given to the stub. Now suppose we later realize it would be more efficient to only attempt termination of instances that actually need it, i.e. those not already in the shutting-down or terminated state. This leads to some code refactoring:

import boto3

TERMINATABLE_STATES = ['pending', 'running', 'stopping', 'stopped']

class InstanceCleaner(object):
    def __init__(self) -> None:
        self.client = self.create_client()

    @staticmethod
    def create_client() -> boto3.client:
        return boto3.client('ec2', region_name='us-west-2')

    def terminate_instances(self) -> None:
        instances = self.client.describe_instances(
            Filters=[{
                'Name': 'instance-state-name',
                'Values': TERMINATABLE_STATES,
            }]
        )
        instance_list = []
        if not instances['Reservations']:
            return

        for instance in instances['Reservations'][0]['Instances']:
            instance_list.append(instance['InstanceId'])

        if instance_list:
            self.client.terminate_instances(InstanceIds=instance_list)

If nothing is returned by describe_instances due to the filter, there's no need to make a termination call at all. Now for the test:

import pytest

from instance_cleanup import InstanceCleaner, TERMINATABLE_STATES
from botocore.stub import Stubber

INSTANCE_CODE_MAPPING = {
    'pending': 0,
    'running': 16,
    'shutting-down': 32,
    'terminated': 48,
    'stopping': 64,
    'stopped': 80
}
INSTANCE_ID = 'i-1234567890abcdef0'

test_data = [(x, True) for x in INSTANCE_CODE_MAPPING if x not in ('terminated', 'shutting-down')]
test_data.append(('shutting-down', False))
test_data.append(('terminated', False))

def generate_instance_response(code_name: str) -> dict:
    return {
        'Reservations': [
            {
                'Instances': [
                    {
                        'InstanceId': INSTANCE_ID,
                        'State': {
                            'Code': INSTANCE_CODE_MAPPING[code_name],
                            'Name': code_name
                        }
                    },
                ],
            }
        ]
    }

def generate_termination_response(code_name: str) -> dict:
    return {
        'TerminatingInstances': [
            {
                'CurrentState': {
                    'Code': 32,
                    'Name': 'shutting-down'
                },
                'InstanceId': INSTANCE_ID,
                'PreviousState': {
                    'Code': INSTANCE_CODE_MAPPING[code_name],
                    'Name': code_name,
                },
            },
        ]
    }

@pytest.mark.parametrize('code_name,should_terminate', test_data)
def test_instance_cleanup(code_name, should_terminate) -> None:
    instance_cleanup = InstanceCleaner()
    with Stubber(instance_cleanup.client) as stubber:
        if should_terminate:
            stubber.add_response(
                'describe_instances',
                generate_instance_response(code_name),
                {'Filters': [{'Name': 'instance-state-name', 'Values': TERMINATABLE_STATES}]},
            )
            stubber.add_response(
                'terminate_instances',
                generate_termination_response(code_name),
                {'InstanceIds': [INSTANCE_ID]},
            )
        else:
            stubber.add_response(
                'describe_instances',
                {'Reservations': [{'Instances': []}]},
                {'Filters': [{'Name': 'instance-state-name', 'Values': TERMINATABLE_STATES}]},
            )
        instance_cleanup.terminate_instances()

So this runs through all the states that should trigger termination and tests how each one behaves, plus the shutting-down (termination in progress) and terminated (termination complete) states. Parametrized arguments are used to keep the test to a single function while still covering multiple state variations:

@pytest.mark.parametrize('code_name,should_terminate', test_data)
def test_instance_cleanup(code_name, should_terminate) -> None:

In this case the first argument maps to the parameter names code_name and should_terminate. The values for these come from each tuple in the test_data list:

test_data = [(x, True) for x in INSTANCE_CODE_MAPPING if x not in ('terminated', 'shutting-down')]
test_data.append(('shutting-down', False))
test_data.append(('terminated', False))

This data is generated from the keys of INSTANCE_CODE_MAPPING using a conditional list comprehension. Two functions are also declared to generate the appropriate mock JSON for the state in question. Both the mock generation and the stubber setup could also be refactored into pytest fixtures.
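
For example, here's a minimal sketch of the stubber setup pulled into a fixture (the fixture name stubbed_cleaner is my own, not from the code above):

import pytest
from botocore.stub import Stubber

from instance_cleanup import InstanceCleaner, TERMINATABLE_STATES

@pytest.fixture
def stubbed_cleaner():
    cleaner = InstanceCleaner()
    with Stubber(cleaner.client) as stubber:
        yield cleaner, stubber
        # Runs after the test body: fail if any queued response went unused
        stubber.assert_no_pending_responses()

def test_no_reservations(stubbed_cleaner):
    cleaner, stubber = stubbed_cleaner
    stubber.add_response('describe_instances', {'Reservations': []}, {
        'Filters': [{'Name': 'instance-state-name', 'Values': TERMINATABLE_STATES}],
    })
    cleaner.terminate_instances()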

Conclusion

Boto stubber is an excellent way to handle testing. However, it might not be the best fit for cases like testing DynamoDB inserts, and the JSON mocks can become difficult to manage in more complex cases. In the next installment I'll look at using moto as a testing alternative.
