How to Automate Targeted Advertising in a Non-Standard Way - Advice from a Data Engineer

Ivan Harahaychuk, Scala Data Engineer at NIX

It would seem that targeted advertising services are already automated enough. But our team of data engineers decided to look at some familiar technologies from a different angle, and in the end we found new, effective solutions for the client. In this article, I will share the most interesting findings and describe what anyone who wants to build something similar should keep in mind.

Let's recall the principles of targeted advertising

Once upon a time, placing banners on the Internet meant negotiating directly with the owners of advertising resources. They set the price of the service, collected information about their audience, reported the number of clicks, and so on. Over time, all these steps were automated by the following services:

  • Supply-Side Platform (SSP). These platforms manage advertising space on third-party sites and applications. Their main goal is to sell that space to the platform's users at a favorable price. An SSP also provides comprehensive information about visitors. In effect, it is the supply side of the market.

  • Demand-Side Platform (DSP). These platforms place ads on the sites offered by SSPs; they form the requests, that is, the demand. A DSP helps advertisers place ads on quality sites at minimal cost.

  • Real-Time Bidding (RTB). A mechanism for running advertising auctions in real time, in which SSPs and DSPs participate. The principle of RTB is shown in this diagram:

[Diagram: the RTB auction flow between the app, the SSP, the Ad Exchange, and the DSPs]

A user opens a mobile application that contains banner slots. Information about this visit is sent to the SSP that manages the ad slots in the app. The SSP then creates a request to the service that runs the auction, the Ad Exchange. The exchange sends a bid request to every DSP the SSP is subscribed to (three of them in the example shown). Each platform returns its bid to the auction service, which compares the rates: whoever bids the most gets the right to place a banner in the application. Everything happens automatically, in a fraction of a second.
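The exchange's bid-comparison step can be pictured as a simple highest-bid selection. Below is a minimal Scala sketch; the Bid type and pickWinner function are illustrative names for this article, not code from the platform described here:

```scala
// Illustrative model of the exchange's decision: one bid per DSP,
// and the highest bid wins the ad slot.
final case class Bid(dspId: String, cpm: BigDecimal) // price per thousand impressions

def pickWinner(bids: Seq[Bid]): Option[Bid] =
  bids.maxByOption(_.cpm) // None if no DSP responded

pickWinner(Seq(Bid("dsp-1", 2.10), Bid("dsp-2", 2.45), Bid("dsp-3", 1.80)))
// => Some(Bid("dsp-2", 2.45))
```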
In the context of our topic, it is worth mentioning the following terms:

  • User Acquisition is the process of finding new customers by drawing attention to a site or application through advertising. This is the foundation of advertising itself.

  • Retargeting is directing advertising at an audience that is already familiar with the product, with the focus on people who have already used a certain site or application.

Let's move on to the project. What was the goal?

Our team had to improve a targeted advertising system by modifying and optimizing its User Acquisition and retargeting. The scale of this system is truly impressive: on average, the service processes almost a million requests per second! In addition, traffic comes from many regions, from the USA and Brazil to Europe and Japan.

[Image: geography of the service's traffic]

The service is built from many modules, so the technology stack is very diverse.

  • To handle bid requests, we use the bidder, the main request-handling module, written in Scala. Its API is built on the Akka library stack, which the other Scala modules also rely on.

  • Apache Kafka was chosen as the message broker for transmitting bid requests. It not only passes bid requests on to other modules but also withstands a significant real-time load (a producer sketch follows this list).

  • To process large arrays of data, we use Spark jobs for a variety of purposes.

  • We use Angular for the frontend. The service that manages the frontend API is written in Java using the Spring framework.

  • Bid forecasting requires a powerful tool for predicting the optimal price per ad impression. Python with PMML models helped us here.

  • For data storage, we chose MySQL for data that must be kept for more than a week, Aerospike and Redis for frequently changing data and cache, and Apache Druid for analytical data.

  • Elasticsearch was used for working with logs. The platform generates a huge number of them, and Elasticsearch copes with such volumes best.

  • Web services. For deployment and other related tasks, we connected many AWS services: EC2, ECS, EMR, S3, S3 Glacier, etc.
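To make the Kafka part more concrete, here is a minimal sketch of a producer publishing a bid request to a topic. The topic name, key, and JSON payload are assumptions for the example; the real module uses its own serialization:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

val producer = new KafkaProducer[String, String](props)

// "bid-requests" is a hypothetical topic; the value would normally be a
// serialized bid request (JSON, Avro, etc.), not a hand-written string.
producer.send(new ProducerRecord[String, String]("bid-requests", "req-123", """{"app":"demo","cpm":2.45}"""))
producer.close()
```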

Difference between Redis and Aerospike

You probably noticed that we use Redis and Aerospike at the same time. Aren't those two NoSQL databases with similar functionality? Why not keep just one of them? In fact, we need both (we use the open-source versions). It is worth looking at their differences, which are critical for our project.

  • Using flash memory. The service generates a lot of data, which is difficult to store exclusively in RAM. The free version of Redis works only with RAM; Aerospike does not have this limitation, so we use it with SSDs.

  • Support for triggers. Aerospike lacks this functionality, but Redis has it, and it is very important for our project: we use the publish/subscribe mechanism for some data, and a change to that data should trigger a specific method (see the sketch after this list).

  • Horizontal scaling. Unlike Aerospike, Redis does not scale well horizontally, so Aerospike is better suited to handling heavy loads.

  • Data consistency. Redis does not guarantee data consistency, which is critical for our project; Aerospike supports it fully.

  • AWS integration. Redis integrates with Amazon Web Services through ElastiCache. Aerospike has no such managed service and is, in practice, deployed only on EC2 instances.
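To illustrate the trigger point above, here is a minimal publish/subscribe sketch using the Jedis client for Redis. The channel name and the reaction inside onMessage are assumptions for the example:

```scala
import redis.clients.jedis.{Jedis, JedisPubSub}

val listener = new JedisPubSub {
  // Fires on every message published to a subscribed channel;
  // this is where the triggered method would be called.
  override def onMessage(channel: String, message: String): Unit =
    println(s"[$channel] data changed: $message")
}

// subscribe() blocks its thread, so it runs on a dedicated one.
new Thread(() => new Jedis("localhost", 6379).subscribe(listener, "campaign-updates")).start()

// Elsewhere, a writer publishes the change that should fire the trigger:
new Jedis("localhost", 6379).publish("campaign-updates", "budget:increased")
```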

How the Reporting UI works

All of the project's modules are interesting in their own way, but I would single out the two most unusual ones. The first is the Reporting UI. This module sends reports, but its mechanism differs from the usual approaches. Typically, reports with important business information are sent by email or to various BA tools. In our case, reports can also be sent to Slack, since all project communication takes place in this messenger.

We have added other features:

  • the ability to receive a report on channel profit in Slack in the form of a detailed chart;

  • subscription to the required reports;

  • integration with team chats so that bots can generate a profit report for a specific SSP.

This method of delivering reports was appreciated by both business analysts and the customer.

[Image: report delivery in Slack]

The technical implementation of the module is not complicated. First, we query Amazon Athena to retrieve the required information from the bid logs. Then we convert the data into the formats the reports require. All that remains is to choose the delivery method: email, a Slack channel, or a chatbot (when there is a corresponding command).
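As a sketch of the Slack delivery step, the snippet below posts a plain-text report through a Slack incoming webhook using Java's built-in HTTP client. The webhook URL is a placeholder, and the real module sends formatted charts rather than plain text:

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

// Placeholder URL; a real one comes from Slack's "Incoming Webhooks" configuration.
val webhookUrl = "https://hooks.slack.com/services/T000/B000/XXXX"

def sendToSlack(text: String): Int = {
  val payload = s"""{"text":"$text"}""" // assumes text contains no unescaped quotes
  val request = HttpRequest.newBuilder(URI.create(webhookUrl))
    .header("Content-Type", "application/json")
    .POST(HttpRequest.BodyPublishers.ofString(payload))
    .build()
  HttpClient.newHttpClient()
    .send(request, HttpResponse.BodyHandlers.ofString())
    .statusCode() // 200 means Slack accepted the message
}

sendToSlack("Profit report for channel X: ...")
```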

The illustration below shows an example of such a report: a graph of profit data with some key numbers. The two lines represent today's and yesterday's data.

[Image: an example profit report chart comparing today's and yesterday's data]

At a certain point, the number of the service's clients grew dramatically. To make the system more flexible to configure and scale, we had to move to a more modular architecture. But after the update, errors appeared in the modules, and they cost money. Such issues are usually easy to spot, since we have plenty of analytics and metrics, but each time we had to stop the bidder and fix the bug by hand. So we decided to build a high-level exception handler, our Stopper, the second of the unusual modules. Its purpose is to automatically identify problems and stop the handling of requests.

The Stopper implementation is quite simple:

[Diagram: the Stopper architecture around Kafka, MySQL, and Redis]

As a rule, we track problems by watching the number of events in, and the lag of, certain Kafka topics. Our module either watches the lag or counts the events passing through a topic. Since we know the normal benchmarks, we can set minimum and maximum event levels for topics, as well as delay timeouts. When there is an excess or shortage of events, data about the problem is written to a MySQL table. There the Stopper checks the information and decides whether or not to stop the handling of requests.
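Here is a minimal sketch of the kind of threshold check described above. All names (TopicStats, Thresholds, findProblems) are hypothetical; in the real module, detected problems are written to the MySQL table that the Stopper polls:

```scala
// Hypothetical per-topic limits: expected event counts per window and max lag.
final case class Thresholds(minEvents: Long, maxEvents: Long, maxLag: Long)
final case class TopicStats(topic: String, eventsPerWindow: Long, consumerLag: Long)

def findProblems(stats: Seq[TopicStats], limits: Map[String, Thresholds]): Seq[String] =
  stats.flatMap { s =>
    limits.get(s.topic).toSeq.flatMap { t =>
      Seq(
        if (s.eventsPerWindow < t.minEvents) Some(s"${s.topic}: too few events")  else None,
        if (s.eventsPerWindow > t.maxEvents) Some(s"${s.topic}: too many events") else None,
        if (s.consumerLag > t.maxLag)        Some(s"${s.topic}: lag exceeded")    else None
      ).flatten
    }
  }
// Each reported problem would be inserted into the MySQL table;
// the Stopper reads that table and decides whether to halt request handling.
```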

You might also notice Redis in the diagram. That is because we are currently testing request stopping against this database as well: if the number of keys grows critically large or shrinks sharply, the system must react exactly as it does for Kafka.
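That Redis-side check can be sketched the same way with Jedis; the key-count bounds here are invented for the example:

```scala
import redis.clients.jedis.Jedis

// Invented bounds on the expected number of keys.
val minKeys = 100000L
val maxKeys = 5000000L

val keyCount = new Jedis("localhost", 6379).dbSize() // keys in the current database
if (keyCount < minKeys || keyCount > maxKeys)
  println(s"Redis key count out of range: $keyCount") // would be reported like a Kafka problem
```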

What should be considered before starting work?

  • There is no need to limit yourself to one solution
    As you can see, we did not limit ourselves to a single NoSQL database. This allowed us to combine the best of Redis and Aerospike, increase the system's scalability, and ultimately save money, which the business also appreciates.

  • Use familiar approaches and tools in unusual ways
    We tried sending important BA information to messengers. This is not traditional on most projects, but that is exactly what makes it interesting. More importantly, this solution made our business analysts far more mobile.

  • Automate everything you can
    Even if it seems impossible, look for ways to implement it. It can be done. We saw this for ourselves with the automated handling of some exceptions, which freed up part of our specialists' time.
