New: Asynchronous container filters
By Stephane Epardaud | June 18, 2018
JAX-RS 2.0 shipped with support for filtering requests and responses, which enabled a lot of great use-cases, such as moving duplicated code out of resource methods and into filters that apply the same processing to every request.
Request filters work by overriding the ContainerRequestFilter.filter method, where they can observe or modify the given context object, or abort the filter chain with a response when the filter already has one and the remaining filters and the resource method are not required. Simply returning from the filter method causes the next filter to be called or, once all filters have run, the resource method to be invoked.
Response filters are very similar, but execute after the resource method has run and produced an entity, status code and headers, which the filter can modify if required; simply returning lets the next filters run and, eventually, the response be sent to the client.
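To make this classic (synchronous) contract concrete, here is a minimal sketch of a request filter; the X-Api-Key header check is purely illustrative and not part of RESTEasy:

import java.io.IOException;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

@Provider
public class ApiKeyFilter implements ContainerRequestFilter {

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        // Observe the request: read a header from the context object
        String apiKey = requestContext.getHeaderString("X-Api-Key");
        if (apiKey == null) {
            // Abort the filter chain with a response: no further filters
            // or resource method will run
            requestContext.abortWith(Response.status(Response.Status.UNAUTHORIZED).build());
        }
        // Simply returning lets the next filter (or the resource method) run
    }
}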
This is all great, but how does it work in an asynchronous ecosystem? It doesn't, really: even though JAX-RS supports suspending the request, it only supports it within the resource method. Filters run either too early (request filters) or too late (response filters).
In RESTEasy 3.5 and 4.0.0, we introduced the ability to suspend the request in filters. To do that, write your request or response filter as usual, but then cast your context object down to SuspendableContainerRequestContext (for request filters) or SuspendableContainerResponseContext (for response filters), and you can then:
- suspend the request with SuspendableContainerRequestContext.suspend()
- resume it normally with SuspendableContainerRequestContext.resume(), to proceed to the next filter or resource method
- resume it with a response with the standard ContainerRequestContext.abortWith(), to directly send that response to the client
- resume it with an exception with SuspendableContainerRequestContext.resume(Throwable)
Similarly, for response filters, you can:
- suspend the request with SuspendableContainerResponseContext.suspend()
- resume it normally with SuspendableContainerResponseContext.resume(), to proceed to the next filter or return the response to the client
- resume it with an exception with SuspendableContainerResponseContext.resume(Throwable)
Of course, the resume() methods only work after you have called suspend(). Beyond that, you are free to call resume() right after suspend(), before returning from the filter, in which case the request is never actually made asynchronous; or you can call resume() later, after you have returned from the method, or even from another thread entirely, in which case the request becomes asynchronous.
The fact that filters may turn requests asynchronous has no impact at all on the rest of your code: non-asynchronous and asynchronous resource methods continue to work exactly as before, regardless of the asynchronous status of the request, so you don't need to modify your code to accommodate asynchronous filters.
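Since the rate-limiter example below covers the request side, here is a minimal sketch of the same pattern applied to a response filter. The audit header, the use of CompletableFuture and the import location of the RESTEasy-specific context class are assumptions for illustration, not prescribed by RESTEasy:

import java.util.concurrent.CompletableFuture;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.ext.Provider;

// RESTEasy-specific class; the package shown matches RESTEasy 3.5/4.x,
// adjust if your version places it elsewhere
import org.jboss.resteasy.core.interception.jaxrs.SuspendableContainerResponseContext;

@Provider
public class AuditResponseFilter implements ContainerResponseFilter {

    @Override
    public void filter(ContainerRequestContext requestContext,
                       ContainerResponseContext responseContext) {
        SuspendableContainerResponseContext ctx =
                (SuspendableContainerResponseContext) responseContext;
        // Suspend: the response will not be sent until we resume
        ctx.suspend();
        // Do the expensive work off the request thread
        CompletableFuture.runAsync(() -> {
            try {
                // Pretend some asynchronous audit/logging work happens here,
                // then record its outcome as a (hypothetical) response header
                responseContext.getHeaders().add("X-Audited", "true");
                // Resume normally: the next filters run, then the response is sent
                ctx.resume();
            } catch (Exception e) {
                // Resume with an exception instead
                ctx.resume(e);
            }
        });
    }
}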
Asynchronous rate-limiter example with Redis
Asynchronous filters are useful for plugging in anything that requires asynchrony, such as reactive security frameworks, async response processing or async caching. We will illustrate how to use asynchronous filters with a rate-limiter example.
For that, we will use RateLimitJ for Redis, which uses Redis to store rate-limiting information for your API. This is very useful for sharing rate-limiting state across an API server cluster, because the counters live in Redis, and you don't have to worry about blocking clients while waiting for Redis to answer: the request simply becomes asynchronous until Redis replies.
We will first import the right Maven dependency for RateLimitJ:
<dependency>
  <groupId>es.moki.ratelimitj</groupId>
  <artifactId>ratelimitj-redis</artifactId>
  <version>0.4.2</version>
</dependency>
And let's not forget to install and run a local Redis cluster.
We will start by declaring a @RateLimit annotation that we can use on our resource methods or classes to indicate we want rate limiting:
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
public @interface RateLimit {
    /**
     * Number of {@link #unit()} that defines our sliding window.
     */
    int duration();

    /**
     * Unit used for the sliding window {@link #duration()}.
     */
    TimeUnit unit();

    /**
     * Maximum number of requests to allow during our sliding window.
     */
    int maxRequest();
}
And we have to declare a DynamicFeature that enables the filter on annotated methods and classes:
@Provider
public class RateLimitFeature implements DynamicFeature {

    private StatefulRedisConnection<String, String> connection;

    public RateLimitFeature() {
        // connect to the local Redis
        connection = RedisClient.create("redis://localhost").connect();
    }

    @Override
    public void configure(ResourceInfo resourceInfo, FeatureContext context) {
        // See if we're rate-limiting
        RateLimit limit = resourceInfo.getResourceMethod().getAnnotation(RateLimit.class);
        if(limit == null)
            limit = resourceInfo.getResourceClass().getAnnotation(RateLimit.class);
        if(limit != null) {
            // add the rate-limiting filter
            Set<RequestLimitRule> rules = new HashSet<>();
            rules.add(RequestLimitRule.of(limit.duration(), limit.unit(), limit.maxRequest()));
            context.register(new RateLimitFilter(new RedisSlidingWindowRequestRateLimiter(connection, rules)));
        }
    }
}
And this is how we implement our asynchronous filter:
public class RateLimitFilter implements ContainerRequestFilter {

    private RedisSlidingWindowRequestRateLimiter requestRateLimiter;

    public RateLimitFilter(RedisSlidingWindowRequestRateLimiter requestRateLimiter) {
        this.requestRateLimiter = requestRateLimiter;
    }

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        // Get access to the remote address
        HttpServletRequest servletRequestContext = ResteasyProviderFactory.getContextData(HttpServletRequest.class);
        // Suspend the request
        SuspendableContainerRequestContext suspendableRequestContext = (SuspendableContainerRequestContext) requestContext;
        suspendableRequestContext.suspend();
        // Query and increment by remote IP
        requestRateLimiter.overLimitAsync("ip:" + servletRequestContext.getRemoteAddr())
            .whenComplete((overlimit, error) -> {
                // Error case
                if(error != null)
                    suspendableRequestContext.resume(error);
                // Over limit
                else if(overlimit)
                    suspendableRequestContext.abortWith(Response.status(429).build());
                // Good to go!
                else
                    suspendableRequestContext.resume();
            });
    }
}
Now all we have left to do is to implement a resource with rate-limiting:
@Path("/")
public class Resource {
@Path("free")
@GET
public String free() {
return "Hello Free World";
}
@RateLimit(duration = 10, unit = TimeUnit.SECONDS, maxRequest = 2)
@Path("limited")
@GET
public String limited() {
return "Hello Limited World";
}
}
If you go to /free you can make an unlimited number of requests, while if you go to /limited only two requests are allowed every 10 seconds; beyond that you will get an HTTP 429 (Too Many Requests) response.
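A quick way to observe the limit is a small JAX-RS client loop; this is a hypothetical smoke test assuming the application is deployed at http://localhost:8080:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;

public class RateLimitSmokeTest {
    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();
        try {
            for (int i = 1; i <= 3; i++) {
                // Hit the rate-limited endpoint three times in a row
                Response response = client.target("http://localhost:8080/limited")
                                          .request()
                                          .get();
                // Expect 200 for the first two calls, 429 for the third
                System.out.println("Call " + i + " -> " + response.getStatus());
                response.close();
            }
        } finally {
            client.close();
        }
    }
}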
If you have the need for asynchronous request or response filters, don't hesitate to give RESTEasy 3.5.1.Final or 4.0.0.Beta2 a try.