Ivan Slavko Matić

Django 4.1: Controlling Events With Signals

Introduction

When we look at web applications, we often see them as well-organized systems made up of many components working together in harmony. Those components interact in many ways: some interactions are triggered by client actions, while others are triggered internally, with one method calling into another. Django itself dispatches internal notifications that keep multiple parts of the framework in sync and in working order. In this article, we aim to use Django's built-in and custom signals to control events within an application.

Django Signals Overview

The Django framework already ships with internal logic sequences between its components. For instance, consider the save() method from 'django.db.models'. We know that save() is called implicitly whenever we call create() on a model manager (create() can be interpreted as a wrapper around save()). One method indirectly invokes another. With Django signals, that behaviour can be recreated and customized to promote even more interconnectivity between Django components: with a signal acting as the notifier, we can trigger events via callback functions.
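To make the idea concrete before diving in, here is a minimal, hedged sketch of such a callback; 'myapp' and 'MyModel' are placeholders rather than part of the project we will build below:

# A minimal sketch (not part of the project below): react to every save of a hypothetical model
from django.db.models.signals import post_save
from django.dispatch import receiver

from myapp.models import MyModel  # hypothetical app and model


@receiver(post_save, sender=MyModel)
def notify_saved(sender, instance, created, **kwargs):
    # 'created' is True when the row was just inserted (e.g. via create())
    print(instance, 'was created' if created else 'was updated')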

We can even compare Django signals with JavaScript event listeners. The concept is similar: in JavaScript, a listener waits for an event to fire; in Django, a signal has a receiver and a sender. We register our callbacks, and those callbacks stand ready to receive input from senders.

Project Setup

For the examples we are about to write, we will use the 'speedster' project setup from a previous article, 'Improving Database Accessibility', which contains the instructions to get the project up and running. Regarding where to put the signal code, the official Django docs say the following:

"Strictly speaking, signal handling and registration code can live anywhere you like, although it's recommended to avoid the application's root module and its models module to minimize side-effects of importing code."

So, to keep things nice and structured, let's create a Python package inside our Django app 'speedy' just to house our signals. We will name the folder 'signals', and it will contain an '__init__.py'. If we want, this segregation can be extended to other parts, such as converting 'models.py' into individual model modules (their own '.py' files) stored in a Python package:

speedy/
    migrations/  
    signals/
        __init__.py
        employee_assignment.py
    models/
        employees.py
        ...
        __init__.py
    ...
    __init__.py

It all depends on our project structure preferences. We mustn't forget to import the files of our Python package in the newly created '__init__.py':

signals/__init__.py:

from .employee_assignment import *

One last thing (if you decide on the suggested structure above): beware of circular imports. The order of the imports we set in '__init__.py' matters.
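For instance, if we split 'models.py' into a package as suggested, its '__init__.py' could simply re-export the individual model modules so the rest of the app (and migrations) can still find them; the file names below follow the structure sketched above and assume a shifts.py alongside employees.py, as used later in this article:

# speedy/models/__init__.py
# Import order matters here: import modules that others depend on first
from .employees import *
from .shifts import *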

Django Built-Ins

From my research, the most common use of signals is controlling events on models (or manipulating model instances). We can distinguish three largely self-explanatory groups of signals:

  • pre_save/post_save

  • pre_delete/post_delete

  • pre_init/post_init

All of the above sets originate from 'django.db.models.signals'. Less familiar and less used (but still useful) are m2m_changed and class_prepared; more about these and others can be found in reference [1]. With the sets mentioned above, we can create sequences of events between related models and make them reactive. Let's see these built-ins in action.
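As a brief aside before the main example, here is a hedged sketch of what an m2m_changed receiver could look like, using a hypothetical Team model with a 'members' ManyToManyField (not part of our project):

from django.db.models.signals import m2m_changed
from django.dispatch import receiver

from myapp.models import Team  # hypothetical model with a 'members' M2M field


@receiver(m2m_changed, sender=Team.members.through)
def members_changed(sender, instance, action, pk_set, **kwargs):
    # 'action' describes the stage of the M2M operation (pre_add, post_add, post_remove, ...)
    if action == 'post_add':
        print('Added members', pk_set, 'to', instance)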

Example: Employee Assignment Automation & Maintenance

In this example, we will auto-create a shift (if a free one doesn't already exist) for a newly created employee, or update an existing shift with the new employee. One shift can have a maximum of three employees. If you followed the project setup from the link above, you will notice slight changes to the models to accommodate this example.

employees.py:

from django.db import models


class Employees(models.Model):
    first_name = models.CharField(max_length=100)
    last_name = models.CharField(max_length=100)
    address = models.CharField(max_length=200)
    age = models.PositiveSmallIntegerField()
    date_of_birth = models.DateField(auto_now_add=True)


shifts.py:

from django.db import models
from django.utils.timezone import now

from .employees import Employees  # adjust the import path to your models layout


class Shifts(models.Model):
    employees_shift = models.ManyToManyField(Employees, through='ShiftsEmployees', related_name='shifts')
    wage_bonus = models.DecimalField(max_digits=4, decimal_places=2, default=50.00)
    total_hours = models.IntegerField(default=8)
    shift_date = models.DateField(default=now)

class ShiftsEmployees(models.Model):
    employee = models.ForeignKey(Employees, on_delete=models.CASCADE, null=True)
    shift = models.ForeignKey(Shifts, on_delete=models.CASCADE)

    class Meta:
        unique_together = ('employee', 'shift')

I haven't found a way to set 'unique_together' directly for a ManyToManyField in the model's Meta class. So I made a slight workaround and defined a 'through' table for our M2M field; with direct access to the foreign keys, unique_together can be added across both of them. Note that this through model is otherwise exactly the same as the one Django would generate by default - it just gains the unique_together constraint.

Optional (and off-topic): an often missed opportunity for a slight optimization arises when we declare our 'through' model. Just as we added unique_together for our foreign keys, we can also add an index on them, which might come in handy if we ever use those FKs in lookups. Indexing the id of the 'through' table is probably unnecessary, since we are unlikely to use that field in lookups.

class Meta:
    unique_together = ('employee', 'shift')
    # Optional composite index on both foreign keys
    indexes = [
        models.Index(fields=['employee', 'shift'])
    ]

Considering that the mentioned Django built-ins are tied to model events, we have already done half of our task. All that's left is writing the signal logic.

Django's signals react to events on the model instance, and the pre_save signal fires before post_save. All the signals whose sender is 'Employees' will be written in employee_assignment.py.

employee_assignment.py:

  • employee_created():

In the employee_created() signal, multiple queries are executed, which at first glance doesn't seem like a big deal. But we should keep the big picture in mind: these signals run every time we call save() on a model instance. There are no restrictions on how much logic (and, by extension, how many queries) we can stuff into signals, but we shouldn't get carried away. For instance, the client clicks 'create' on a simple object and the 'creating…' spinner runs for 10-15 seconds - that could be the consequence of having multiple overstuffed signals. It's something to keep in mind if we are watching our UX rating closely.

import datetime

from django.db.models import Count
from django.db.models.signals import post_save, pre_save
from django.dispatch import receiver

from ..models import Employees, Shifts  # adjust the import path to your models layout


@receiver(post_save, sender=Employees)
def employee_created(sender, instance, created, **kwargs):
    # Check if employee was created
    if created:
        # Find first free shift for newly created employee
        try:
            # Check if shift is under three employees
            # Order by 'shift_date' and get first object
            free_shift = Shifts.objects.annotate(employee_count=Count('employees_shift')) \
                .filter(employee_count__lt=3).order_by('shift_date').first()
        except (Shifts.DoesNotExist, IndexError, Exception,) as e:
            print('free_shift Error: ', e)
            free_shift = None

        # Check if empty shift exists
        if free_shift:
            # Assign employee to empty shift
            free_shift.employees_shift.add(instance.id)
        else:
            # No free shift found.
            # Find the shift with the latest date, and add shift object with: 'latest date' + 1 day
            try:
                try:
                    latest_date = Shifts.objects.order_by('-shift_date').values('shift_date').first()
                    latest_date = latest_date['shift_date'] + datetime.timedelta(days=1)
                except (Shifts.DoesNotExist, IndexError, Exception) as e:
                    print('latest_date Error: ', e)
                    latest_date = None
                # Check if there is at least one object in DB (Fresh DBs)
                if latest_date:
                    new_shift = Shifts.objects.create(shift_date=latest_date)
                    # Save object
                    new_shift.save()
                    # Add M2M relation
                    new_shift.employees_shift.add(instance.id)
                else:
                    # There aren't any Shifts objects in DB, default will input current date
                    new_shift = Shifts.objects.create()
                    new_shift.save()
                    new_shift.employees_shift.add(instance.id)
            except (Shifts.DoesNotExist, Exception) as e:
                print(e)
  • wipe_expired_shifts():

All shifts dated before the current date are considered redundant in our database. Those objects are not reusable unless we have an automated sequence that updates the expired objects' data. We could debate whether creating an object is faster than updating one, but I digress; to keep things simple, let's delete expired objects and create new ones. Maintenance through wiping data brings several benefits: not only do we free up storage in the database, we also speed up query lookups by having fewer rows to compare against.

@receiver(pre_save, sender=Employees)
def wipe_expired_shifts(sender, **kwargs):
    current_date = datetime.date.today()
    try:
        Shifts.objects.filter(shift_date__lt=current_date).delete()
    except (Shifts.DoesNotExist, Exception) as e:
        print('Wipe redundant shifts Error: ', e)

We are almost done :). The last thing we need to do is import our signals inside the related app's ready() method. Notice that we are importing the Python package, not the individual files, which, as we remember, are already imported in '__init__.py'.

from django.apps import AppConfig


class SpeedyConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'speedy'

    def ready(self):
        import speedy.signals
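For ready() to run at all, the app has to be listed in INSTALLED_APPS; with Django 3.2+ the SpeedyConfig above is picked up automatically, so a plain entry is enough (surrounding entries omitted):

# settings.py (fragment)
INSTALLED_APPS = [
    # ...
    'speedy',
]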

The code can be tested by creating a simple function-based view and calling .create() on the Employees model, like this:

from django.http import HttpResponse

from .models import Employees  # assuming the view lives in the speedy app


def test_signals(request):
    Employees.objects.create(<<insert_test_data>>)
    return HttpResponse()

There is also an alternative way of wiring a callback to a signal. Instead of the @receiver decorator, we can connect a custom function to a built-in signal such as post_save like this:

post_save.connect(my_custom_function, sender=Employees)
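The connected function uses the same signature a decorated receiver would; a minimal sketch of what my_custom_function could look like (the body is just an illustration):

def my_custom_function(sender, instance, created, **kwargs):
    # Same arguments a @receiver(post_save, sender=Employees) callback receives
    print('Employees instance saved:', instance, '| created:', created)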

Model Methods vs. Built-in Signals

If we think about it, all the logic we put in a signal's callback function could just as easily live in a custom model method or in an override of save(). Both act on the same model instance, and both run when called upon. So why bother with signal configuration when we could put all of the logic in a simple model method?

Here are a few benefits we get from moving model-method logic into signal callbacks:

  • Allows unobstructed cooperation between different models - signals are easier to connect to models without editing the models themselves

  • Reusable apps prefer signals because they adapt more easily to a new environment; in addition, signals are available to all models at all times, unlike embedded model methods

  • The official Django docs state that overridden model methods are not called on bulk delete operations (for more info, check reference [2])

  • Handy for processing already generated data, such as creating quality slugs

  • A great solution for circular import errors

It should also be mentioned that we do not benefit much from signals if we do not intend to connect multiple models or other Django components. If the logic is contained inside a single model, we can simply create a model method or override an existing one. Still, no harm is done if we decide to go all-in on signals and move everything there - the result will be the same.

Rules of thumb:

  • If the task is completely related to the model at hand, keep the logic inside the model itself - no need to involve signals. We override the save() method, do our thing, and call super() at the end (the pre_save/post_save situation; see the sketch after this list)

  • If the task involves other models (in other words, it spans more than one model), signals are perfect for the job
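To illustrate the first rule, here is a hedged sketch of a save() override on a trimmed-down Employees model; the title-casing step is purely illustrative:

from django.db import models


class Employees(models.Model):
    first_name = models.CharField(max_length=100)
    last_name = models.CharField(max_length=100)

    def save(self, *args, **kwargs):
        # 'pre_save'-style work that concerns only this model
        self.first_name = self.first_name.strip().title()
        super().save(*args, **kwargs)
        # 'post_save'-style work that still only touches this model could go here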

Custom Signals

Even though we can do all kinds of 'magic' in our callback functions, the main intention of signals is to act as a notifier; they should be interpreted as add-ons to already existing logic. Events in custom signals work the same way as the built-ins - we can have pre_event_x and post_event_x situations. Let's prepare for the next example.
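Before we do, here is what declaring such a hypothetical pre/post pair could look like (the 'report generated' event is made up for illustration):

import django.dispatch

# Hypothetical custom events, fired before and after generating a report
pre_report_generated = django.dispatch.Signal()
post_report_generated = django.dispatch.Signal()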

We start by creating a custom_signals.py file in our speedy/signals Python package. Next, we open '__init__.py' and add a line to import everything from custom_signals.py:

from .custom_signals import *

We are all set. Let's start with a simple example of logging events.

Example: App Event Logging

We will start by declaring a simple custom logging function. In custom_signals.py we declare a log_event() function:

from datetime import datetime


def log_event(message):
    # Creates file 'log_event.log' in project root with append mode 'a'
    f = open('log_event.log', 'a')
    # Write current timestamp with user submitted message
    f.write(datetime.now().strftime("%d_%m_%Y_%H_%M_%S") + ' -- ' + message + '\n')
    f.close()

The function creates the file (or appends to an existing one) and writes a custom message sent by the caller; the file lives in the project root. Next up is the custom signal setup, which comes in three steps: signal declaration, receiver setup, and connection. We will place the declaration and the receiver right below our log_event() function, like this:

from datetime import datetime

import django.dispatch
from django.dispatch import receiver


def log_event(message):
    # Creates file 'log_event.log' in project root with append mode 'a'
    f = open('log_event.log', 'a')
    # Write current timestamp with user submitted message
    f.write(datetime.now().strftime("%d_%m_%Y_%H_%M_%S") + ' -- ' + message + '\n')
    f.close()

# Declare signals
event_signal = django.dispatch.Signal()

@receiver(event_signal)
def view_event_logging(sender, **kwargs):
    print(sender, kwargs['message'], kwargs['instance'])
    message = '{} -- {} -- {}'.format(sender, kwargs['message'], kwargs['instance'])
    log_event(message)

As we can see, the declaration is quite simple, but the receiver deserves a few remarks. Unlike with built-ins, we don't pass a sender argument to the decorator, only the signal we declared above. In custom signals, the sender can be anything, which means we can pass (at the very minimum) a simple string indicating where the signal came from. As for the callback function itself, we can send it as many keyword arguments as we want. Our callback prints the received parameters, forms a message from them, and calls log_event(). That should be enough for a start.

Signals can be sent from virtually anywhere. Let's demonstrate that by creating a simple view:

from django.http import HttpResponse

from .models import Employees  # assuming the view lives in the speedy app
from .signals.custom_signals import event_signal


def test_logging(request):
    # Empty string
    message = ''
    try:
        # Create an object
        emp_inst = Employees.objects.create(first_name='user1',
                                            last_name='last_user',
                                            address='address',
                                            age=93)
        message += 'Employees object created: {} {}'.format(emp_inst.first_name, emp_inst.last_name)
    except (Employees.DoesNotExist, ValueError) as e:
        print('Error during employees creation occurred: ', e)
        # Convert ValueError error to string for successful concatenation
        message += str(e)
        emp_inst = None
        pass

    event_signal.send(sender='View: test_logging', message=message, instance=emp_inst)
    return HttpResponse()

Our view returns an empty HttpResponse(). Once added to urls.py and accessed, it creates an Employees object and sends our signal with a success/error message and the newly created instance.
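Hooking the view into urls.py could look roughly like this (the route name and import path are assumptions):

# urls.py (fragment)
from django.urls import path

from speedy.views import test_logging  # adjust to where the view actually lives

urlpatterns = [
    path('test-logging/', test_logging, name='test_logging'),
]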

Some example data written to log_event.log - an error and a success:

01_11_2022_01_15_45 -- View: test_logging -- Field 'age' expected a number but got 'age'. -- None
14_11_2022_14_31_12 -- View: test_logging -- Employees object created: user1 last_user -- Employees object (72)

Looking at the code closely, you might wonder: "Why not just call log_event() directly in the view and remove the signal middleman?" And that is fair thinking. Signals may be perfect for some situations and redundant for others; it is very contextual. In those moments of doubt, being able to project ahead matters. If we believe the signal above is complete and will never receive further feature upgrades, then we can conclude with certainty that it isn't needed. However, if we plan to introduce additional models, or logic from other Django components that might clash with other imports and parts of the code, then the extra decoupling Django signals provide will come in handy. Think of it as an alternate route to the same logic.

Conclusion

Django signals should not be used in every situation. If the code gets too big and hard to follow, debugging and testing can become complicated and messy. That comes straight from the official Django docs (reference [0]), which recommend direct code calling over using signals as an intermediary between components. Still, signals are a wonderful alternative when the need arises: they are quick to set up and written much like any other method. Their usage is situational. The programming world is ever-evolving, with new client demands and new challenges to face, and having another tool and an extra bit of knowledge lets us cover more situations elegantly.

References

[0] "docs.djangoproject.com", "Signals", https://docs.djangoproject.com/en/4.1/topics/signals/

[1] "docs.djangoproject.com", "Signals (Library Details)", https://docs.djangoproject.com/en/4.1/ref/signals/

[2] "docs.djangoproject.com", "Overriding predefined model methods", https://docs.djangoproject.com/en/dev/topics/db/models/#overriding-predefined-model-methods

[3] "lexev.org", "Django: signal or model method?", http://www.lexev.org/en/2016/django-signal-or-model-method/

[4] "django-advanced-training.readthedocs.io", "Creating and triggering custom signals", https://django-advanced-training.readthedocs.io/en/latest/features/signals/

[5] "codeunderscored.com", "Custom Signals in Django", https://www.codeunderscored.com/custom-signals-in-django/, March 17, 2022, Humphrey
