
Sensible Feign client configuration

Phil Hardwick · Originally published at blog.wick.technology · 3 min read

Feign clients make it easy to write RESTful clients based on the Spring annotations you already know. Feign also integrates with many other Netflix libraries and supports good microservices patterns (like service discovery, load balancing and circuit breakers).

Feign is configurable out of the box, but you’ll usually want to change the configuration before using it for service-to-service calls.

Logging

You’ll want to see what requests are being made. This requires two things: set the loggerLevel in your Feign config in application.yml, and set the logging level of the Feign client class to DEBUG.

logging:
  level:
    root: INFO 
    com.example.clients.InvoiceClient: DEBUG


feign:
  client:
    config:
      default:
        loggerLevel: basic

You can set loggerLevel to full if you want to see headers and response bodies in the logs too.
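For reference, the client being configured might look something like this — the service name, endpoint and Invoice type here are assumptions for illustration, not part of the original configuration:

```java
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// Hypothetical client for illustration - the service name, path and
// Invoice type are assumptions, not from the original post.
@FeignClient(name = "invoice-service")
public interface InvoiceClient {

    @GetMapping("/invoices/{id}")
    Invoice getInvoice(@PathVariable("id") String id);
}
```

With loggerLevel set to basic and this class at DEBUG, each call logs the HTTP method, URL, response status and elapsed time.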

Whether to decode 404s

If you want a FeignException to be thrown when your client receives a 404 Not Found response, leave decode404 as false, which is the default. If you set decode404 to true instead, the 404 response will be decoded rather than thrown, so you will receive null as a response — or an empty Optional if you’ve wrapped your return types in Optionals.
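The decode404 flag sits alongside loggerLevel in the per-client config — a sketch, assuming the standard Spring Cloud OpenFeign property names:

```yaml
feign:
  client:
    config:
      default:
        decode404: true
```

With this set, a client method declared with an Optional return type receives an empty Optional on a 404 instead of an exception.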

Error decoder

How you deal with errors is the most important part, because you want to return sensible responses to your own clients and make it easy to investigate when things go wrong. I believe the sensible thing to do is:

  1. retry on any server error (status > 499)
  2. return the same server error when retry is exhausted
  3. retry on any 429 or when a Retry-After header is set
  4. return a 500 when any other client error occurs
  5. log the error with the status, what method caused it and the response body

That’s quite a requirements list. Let’s build an error decoder which satisfies this:

import java.io.IOException;

import org.apache.commons.io.IOUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.HttpStatus;

import feign.Response;
import feign.RetryableException;
import feign.codec.ErrorDecoder;

public class CustomErrorDecoder implements ErrorDecoder {

    private static final Logger LOG = LoggerFactory.getLogger(CustomErrorDecoder.class);
    private ErrorDecoder defaultDecoder = new ErrorDecoder.Default();

    @Override
    public Exception decode(String methodKey, Response response) {
        //Requirement 5: log error first and include response body
        try {
            LOG.error("Got {} response from {}, response body: {}", response.status(), methodKey, IOUtils.toString(response.body().asReader()));
        } catch (IOException e) {
            LOG.error("Got {} response from {}, response body could not be read", response.status(), methodKey);
        }
        Exception defaultException = defaultDecoder.decode(methodKey, response);
        if (defaultException instanceof RetryableException) {
            //Requirement 3: retry when Retry-After header is set
            //Will be true if Retry-After header is set e.g. in case of 429 status
            return defaultException;
        }
        if (HttpStatus.valueOf(response.status()).is5xxServerError()) {
            //Requirement 1: retry on server error
            return new RetryableException("Server error", response.request().httpMethod(), new ServerErrorResponseFromOtherSystemException(HttpStatus.valueOf(response.status()), defaultException), null);
        } else {
            //Requirement 4: return 500 on client error
            return new ClientErrorResponseFromOtherSystemException("Client error " + response.status() + " from calling other system", defaultException);
        }
    }

}

And the exceptions it throws:

//Requirement 4: return 500 on client error
@ResponseStatus(HttpStatus.INTERNAL_SERVER_ERROR)
public class ClientErrorResponseFromOtherSystemException extends Exception {

    public ClientErrorResponseFromOtherSystemException(String message, Exception exception) {
        super(message, exception);
    }
}

public class ServerErrorResponseFromOtherSystemException extends Exception {
    private HttpStatus responseStatusFromOtherSystem;

    public ServerErrorResponseFromOtherSystemException(HttpStatus httpStatus, Exception exception) {
        super("Server error " + httpStatus.value() + " from calling other system", exception);
        this.responseStatusFromOtherSystem = httpStatus;
    }

    public HttpStatus getStatus() {
        return responseStatusFromOtherSystem;
    }
}

The @ResponseStatus annotation means Spring will return that status when the annotated exception reaches the controller. To determine the response status of the ServerErrorResponseFromOtherSystemException dynamically, we need an exception handler in the controller:

@RestController
public class InvoiceController {

    //Handler methods

    @ExceptionHandler(ServerErrorResponseFromOtherSystemException.class)
    public ResponseEntity<Void> handleServerErrorResponseException(ServerErrorResponseFromOtherSystemException ex) {
        //Requirement 2: return the same error when retry is exhausted
        return ResponseEntity.status(ex.getStatus()).build();
    }
    }

}

For Feign to pick up the retryer it needs to be exposed as a bean. To be able to set the exception propagation policy to UNWRAP we also need to expose a Feign builder with our customisations as a bean. See the example below:

@Bean
public Retryer retryer() {
    return new Retryer.Default();
}

@Bean
public Feign.Builder feignBuilder(Retryer retryer) {
    return Feign.builder()
            .exceptionPropagationPolicy(ExceptionPropagationPolicy.UNWRAP)
            .errorDecoder(new CustomErrorDecoder())
            .retryer(retryer);
}

Setting the exception propagation policy to unwrap means that the exception inside the RetryableException is the one that will be thrown when the retries are exhausted, allowing application specific exceptions to be thrown and then handled by exception handlers (again, assuming you’re using Spring MVC).
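Retryer.Default with no arguments retries up to 5 times, backing off from a 100ms initial interval towards a 1 second maximum. If that doesn’t suit you, the interval and attempt count can be passed explicitly — the values below are illustrative, not a recommendation:

```java
import java.util.concurrent.TimeUnit;

import org.springframework.context.annotation.Bean;

import feign.Retryer;

@Bean
public Retryer retryer() {
    // Illustrative values: 50ms initial interval, at most 2s between
    // retries, and a maximum of 3 attempts in total
    return new Retryer.Default(50, TimeUnit.SECONDS.toMillis(2), 3);
}
```

Note that the retryer only kicks in when the error decoder returns a RetryableException, so the attempt count here bounds the retries triggered by the decoder above.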

Graceful handling of errors is essential, and building it in from the start is good practice — don’t wait for customers to notice that third-party systems are failing. I’ve found that even the most reliable third-party systems can occasionally time out and return a 502. This happened to me in production, where a bad gateway error occurred that I had never seen in any other environment. Luckily I had this retry mechanism in place: the request was retried and the customer carried on their journey after a small wait.
