How not to get confused by Spring Boot web security auto-configuration

In my project (Spring Boot + Security + Thymeleaf) I wanted to configure custom web security as described in the "getting started" article on Spring's official website. I followed the steps in the article and created my custom configuration, but I forgot to add the @EnableWebSecurity annotation. Everything seemed to work fine, except for Thymeleaf's sec:authorize-url attribute.

What happened?

By not specifying the @EnableWebSecurity annotation I didn't disable Spring Boot's default security auto-configuration in SpringBootWebSecurityConfiguration.ApplicationWebSecurityConfigurerAdapter. This auto-configuration creates a security configuration (an HttpSecurity object) that denies unauthorised users all access to the application except for some static resources, and creates a user with the role ROLE_USER and a randomly generated password for testing purposes.

My configuration also created an HttpSecurity object with my own security configuration and my own user repository.

Both HttpSecurity configurations were passed to WebSecurity, which is responsible for creating the Spring Security filter chain. As you may know, a secured request goes through this chain until one of the filters "catches" it and processes it.

Because my custom configuration bean had a higher priority, it was placed higher in the security filter chain. So all requests were caught and processed by my filter and not by the filter created by Spring Boot's auto-configuration. So far so good.

Problem with two HttpSecurity configurations

When configured, the WebSecurity object holds an instance of FilterSecurityInterceptor. There is only a single field that can hold the interceptor, so no more than one interceptor can be held by the WebSecurity object.

The FilterSecurityInterceptor is a crucial part of the Spring Security project. Actually, it is its parent class AbstractSecurityInterceptor and its implementations that make Spring Security breathe.

  • FilterSecurityInterceptor intercepts ServletRequests and decides whether the current user has permission to proceed. It uses the help of other classes such as SecurityContextHolder (holds information about the current user and their security context), AccessDecisionManager (evaluates a request against the current security context) and so on.
  • MethodSecurityInterceptor intercepts method calls similarly to FilterSecurityInterceptor. It uses Spring's proxies.
  • AspectJMethodSecurityInterceptor is similar to MethodSecurityInterceptor, with AspectJ support.

The FilterSecurityInterceptor is built from the information in HttpSecurity. In the WebSecurityConfigurerAdapter.init method, Spring populates WebSecurity with the FilterSecurityInterceptor instance. If more than one FilterSecurityInterceptor is created, the later one overwrites the interceptor that is already present in WebSecurity!

The Thymeleaf-Spring Security integration gets the FilterSecurityInterceptor from WebSecurity and uses it to evaluate the value of the sec:authorize-url attribute. Obviously, in this case it got Spring Boot's default configuration, which is not what I wanted.

So, don't forget to disable the default configuration by adding the @EnableWebSecurity annotation to your security configuration class.
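A minimal custom configuration with the annotation in place might look like this (a sketch against the Spring Security 4 / Boot 1.x API; the matcher paths are illustrative, not the exact configuration from my project):

```java
@Configuration
@EnableWebSecurity
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // With @EnableWebSecurity present, Spring Boot's default security
        // auto-configuration backs off and only this HttpSecurity
        // configuration ends up in the filter chain.
        http.authorizeRequests()
                .antMatchers("/css/**", "/js/**").permitAll()
                .anyRequest().authenticated()
                .and()
            .formLogin();
    }
}
```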


Spring Boot + PostgreSQL + Custom Type = exception!

I have a demo application that shows features of various ORM frameworks. The application performs several basic operations on a database. To have a clean database for each run of the application I wanted to recreate the whole database at its start. This means dropping the whole schema and then re-initialising it by executing the following script.


CREATE TYPE plane_dimensions AS (
  length_meters   DECIMAL,
  wingspan_meters DECIMAL,
  height_meters   DECIMAL
);

-- (table name "plane" is illustrative)
CREATE TABLE plane (
  name         VARCHAR(250)     NOT NULL,
  dimensions   plane_dimensions NOT NULL
);


Unfortunately this doesn't work.

When the application connects to the database it loads the existing types and stores them in a cache. Then my type is dropped and immediately re-created by the script. But the re-created type has a different OID than the former (cached) one. So if you try to use it from the application you will get an exception.

Caused by: org.postgresql.util.PSQLException: ERROR: cache lookup failed for type 1234567

The solution is to re-initialise the connection, or not to modify the type at all.

A query to find the type's OID:

SELECT oid FROM pg_type WHERE typname = 'plane_dimensions';


Shared test sources in Gradle multi-module project

Basic assumptions

Let's suppose the following project structure.

:root-module
|    +--- main source set
|    \--- test source set
+--- :module1
|    +--- main source set
|    \--- test source set
\--- :core-module
     +--- main source set
     \--- test source set
Shared test-utility classes shouldn't be placed in test source set

In some legacy projects I've seen utility classes located in the test source set. This is problematic in multi-project applications because Gradle does not pull test sources into a dependent module. So module1 won't "see" classes from core-module's test source set. This makes sense because you don't want module1's test classpath flooded with core-module's test classes.

Note: IDEA 14 does not respect this rule and pulls test sources into dependent modules. This behaviour has been fixed recently.

Test-utility classes shouldn't be placed in main source set

For obvious reasons you don't want to pollute the main source set with test classes. The main source set should contain production code only. Tests are usually executed during the application's build phase, so there is no need to put them into the final build.

In a Spring application test classes shouldn't be located in a component-scanned package

This is mandatory for Spring developers who use component scanning. In some cases you may want to create an annotated class used exclusively by tests.

IDEA places all source sets marked as sources on the classpath. So if you place a test-only annotated class into a package scanned by Spring, your application context will end up polluted.

Shared test classes should be kept in the module where they belong

Because of the previous point, some people tend to put shared test classes into a stand-alone module. Although core-module-test (main) -> core-module (main), core-module (test) -> core-module-test (main) is a legal construct in Gradle, you don't want to complicate your project's dependency structure by creating "unnecessary" modules.

Module dependency resolution basics in Gradle and IDEA 14 and 16


Gradle

Gradle allows you to define a dependency between two modules' configurations.

// Declared in root-module
dependencies {
    // Current module's testCompile configuration depends on another module's default configuration.
    testCompile project(':module1')
    // Current module's testCompile configuration depends on another module's test configuration.
    testCompile project(path: ':core-module', configuration: 'test')
}

A dependency on the default configuration of another module pulls the module's dependencies and sources into the dependent module. A dependency on the test configuration pulls only dependencies, without sources. So in our case root-module won't see any of core-module's test classes.


IDEA 14

In contrast to Gradle, IDEA 14 pulls all sources into a dependent module, even test sources.

If you open a Gradle project, IDEA will behave oddly. It will let you run tests that Gradle won't even compile. Unfortunately IDEA 14 doesn't have any feature that can be used to fix this, so you have to keep it in mind when writing your tests.


IDEA 16

The latest version of IDEA does not pull test sources into a dependent module, which is consistent with Gradle's behaviour. It also allows you to define a dependency on a specific source set. You just have to choose the "create separate module per source set" option when importing a Gradle project. The option has been added recently to harmonize IDEA's dependency resolution with Gradle.

Using shared test sources from a dependent module

Declaring a dependency on a module's output directory

In projects that keep utility test classes in the test source set, the usual hot-fix is to add a testCompile dependency on the other module's output directory. So in our example module1's tests will depend on core-module's compiled test classes.

// Declared in module1
dependencies {
  testCompile project(':core-module').sourceSets.test.output
}

This is a quick solution but it has some cons:

  • In your IDE you have to rebuild the shared classes after every change to refresh the dependencies. Remember, IDEA 16 does not pull test sources.
  • It's against the requirements in the first chapter of this article.
Placing utility classes in a custom source set

A better solution is to create a custom source set for shared test classes and declare dependencies on this source set from dependent modules. This approach allows you to keep classes in the modules where they belong, still separated from main and test sources. The approach is used by the Gradle project itself.

The implementation is quite straightforward. First, create a script plugin that adds a new testFixtures source set and the appropriate configurations to a module.
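The plugin could be sketched roughly like this (old-style Gradle DSL; the configuration names mirror the testFixturesUsageCompile configuration used below, but the exact wiring is my assumption, not a verbatim script):

```groovy
// testFixtures.gradle

// A new source set for shared test-utility classes.
sourceSets {
    testFixtures {
        compileClasspath += sourceSets.main.output
        runtimeClasspath += sourceSets.main.output
    }
}

configurations {
    // Created implicitly by the source set; extended here for clarity.
    testFixturesCompile.extendsFrom compile
    // Configuration that dependent modules will consume.
    testFixturesUsageCompile.extendsFrom testFixturesCompile
}

dependencies {
    // Expose the compiled fixture classes through the usage configuration.
    testFixturesUsageCompile sourceSets.testFixtures.output
    // Let the module's own tests use the fixtures as well.
    testCompile sourceSets.testFixtures.output
}
```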

Then apply script plugin on a module and declare necessary dependencies.

// Declared in core-module
apply from: 'testFixtures.gradle'

dependencies {
    // Dependencies required by the shared test classes (testFixturesCompile scope) go here.
}

Finally, add a dependency on the newly created source set.

// Declared in module1
dependencies {
    testCompile project(path: ':core-module', configuration: 'testFixturesUsageCompile')
}

Note for IDEA users: there is a small drawback. If you open the project without the "create separate module per source set" option, your custom source set will be available in the main source set. If you then use a shared test class in the main source set, IDEA won't complain but Gradle won't be able to compile it. So you have to be careful about what you are using.


Zero downtime deployment with Nginx proxy

At Factorify we wanted to make deployment of our application unnoticed by users. The application is an AngularJS client connected to a Spring backend. Users' requests are proxied by Nginx running on FreeBSD.

The basic idea was to hold users' requests during application deployment. Nginx does not provide such a function by default, but it is possible to extend it with scripts written in the Lua language.

I wasn't able to compile Nginx with Lua support. Fortunately there is an Nginx "distribution" called OpenResty which integrates a Lua compiler.

Nginx configuration

  1. Nginx listens on port 8080.
  2. A request is sent to the primary node. If the primary node is alive, the response is served to the user.
  3. If the primary node fails to return a response, the request is sent to the backup node.
  4. The backup node suspends the request for 10 seconds and then forwards it back to the backend. 10 seconds should be sufficient time for the application to restart. If no response is returned by the backend within 10 seconds, an error is sent to the user.
http {
    upstream backend {
        # 2) Primary node.
        server localhost:8667;
        # 3) Backup node.
        server localhost:8666 backup;
    }

    server {
        # 1) Endpoint exposed to users.
        listen 8080;

        location / {
            proxy_pass http://backend;
        }
    }

    server {
        # This is the backup node.
        listen 8666;

        location / {
            # 4) Suspend the request for 10 seconds.
            access_by_lua '
                ngx.sleep(10)
            ';
            proxy_pass http://localhost:8080/;
        }
    }

    server {
        # This is the primary node; it emulates the backend application.
        listen 8667;

        location / {
            default_type text/html;
            content_by_lua '
                ngx.say("Hello World!")
            ';
        }
    }
}

To fine-tune the configuration please refer to Nginx documentation.


Spring Boot internationalisation with database stored messages and IBM ICU


In the Java world the common approach is to store localisation messages in property files. I want to have messages stored in a database so users can manage them at runtime. I also want the better plural-forms handling provided by the ICU project.

Introduction to Spring's localisation process

The following figure shows localisation processing in a Spring application.

  1. The requested locale (language and country codes) is passed to the Spring application as part of the ServletRequest object. In the ServletRequest there are several properties that can hold the locale value, for example the HTTP header Accept-Language, a cookie or a query string parameter. The locale is resolved from the request object in the DispatcherServlet.render method by an instance of the LocaleResolver class. There are several resolvers available in Spring. You can check out the org.springframework.web.servlet.i18n package for more details.
  2. The locale value is sent by the user's browser by default (as an Accept-Language header). To allow your application to change the locale you have to configure a LocaleChangeInterceptor. The interceptor reads the locale value from the query string and sets it on the request. Please read my older article Request mapping on demand in Spring MVC if you want to know more about request processing.
  3. The message code and the resolved locale are passed to the MessageSource via a view. The message source is responsible for loading the message from storage, processing it and returning the localised message back to the view. The view incorporates the processed message into the template.
  4. A message can be simple text or a pattern consisting of placeholders that are replaced while the pattern is processed. Placeholders are used to render text in a locale-sensitive way. Patterns are processed by java.text.MessageFormat by default. In this article I will also describe the enhanced version of the message format (com.ibm.icu.text.MessageFormat) provided by ICU.

LocaleChangeInterceptor and LocaleResolver configuration

AcceptHeaderLocaleResolver is Spring Boot's default locale resolver. The resolver reads the locale from the Accept-Language header. Unfortunately it does not support locale changes at runtime because its setLocale method is not implemented.

No locale change interceptor is configured by default.

The locale resolver can be changed by creating a LocaleResolver bean. I've decided to use CookieLocaleResolver, which resolves the locale stored in a cookie.

A locale change interceptor can be added by extending WebMvcConfigurerAdapter. The interceptor reads the new language code from the query string: by adding a lang parameter to the query string the user can change the requested locale.
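A sketch of both beans might look as follows (Spring 4 / Boot 1.x style; the lang parameter name matches the text above, the rest is illustrative):

```java
@Configuration
public class LocaleConfiguration extends WebMvcConfigurerAdapter {

    @Bean
    public LocaleResolver localeResolver() {
        // Resolves the locale stored in a cookie.
        return new CookieLocaleResolver();
    }

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        LocaleChangeInterceptor interceptor = new LocaleChangeInterceptor();
        // A ?lang=cs query string parameter switches the requested locale.
        interceptor.setParamName("lang");
        registry.addInterceptor(interceptor);
    }
}
```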

Database aware MessageSource

To create a message source you just need to implement the MessageSource interface.
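One convenient way is to extend Spring's AbstractMessageSource, which leaves only the code resolution to you (the MessageRepository below is a hypothetical database-backed repository, not a real API):

```java
public class DatabaseMessageSource extends AbstractMessageSource {

    private final MessageRepository messageRepository; // hypothetical repository

    public DatabaseMessageSource(MessageRepository messageRepository) {
        this.messageRepository = messageRepository;
    }

    @Override
    protected MessageFormat resolveCode(String code, Locale locale) {
        // Load the pattern from the database and let Spring handle formatting.
        String message = messageRepository.findMessage(code, locale);
        return message != null ? new MessageFormat(message, locale) : null;
    }
}
```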

Then configure the template engine so it loads messages from the newly created message source. The Thymeleaf template engine is used in this article.

Message patterns and plural forms handling

Plural-forms handling is a very common case in multilingual applications. It has to be robust because some languages have pretty complicated plural rules. Here is an example of three sentences that should be generated from a single message pattern:

  • There is 1 apple in 1 basket.
  • There are 2 apples in 2 baskets.
  • There are 0 apples in 2 baskets.
Standard MessageFormat

To deal with plurals Java offers the MessageFormat formatter. The formatter can evaluate conditions in a message pattern, which can be used for plural-forms handling. Let's revisit the DatabaseMessageSource.resolveMessage method to incorporate the formatter into the application.

private String resolveMessage(String code, Object[] args, Locale locale) {
    String message = ""; // TODO Load message from database...
    MessageFormat messageFormat = new MessageFormat(message, locale);
    return messageFormat.format(args);
}

Then you need to create a message pattern with choice conditions. Curly braces are used as placeholders for message parameters and can contain message formatting syntax.

There {0,choice,0#are|1#is|2#are} {0} {0,choice,0#apples|1#apple|2#apples} in {1} {1,choice,0#baskets|1#basket|2#baskets}.

To pass values into the message you just need to add parentheses with the values at the end of the message code.

<p th:text="#{apple.message(1,1)}"></p>
<p th:text="#{apple.message(2,2)}"></p>
<p th:text="#{apple.message(0,2)}"></p>

The example shown is the officially recommended Java approach to plural-forms handling. Unfortunately its choice conditions are quite complicated even for English, which has only two plural forms. For languages like Polish it's almost unreadable.
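The choice pattern above can be tried outside Spring with plain java.text.MessageFormat; this standalone snippet reproduces the three sentences:

```java
import java.text.MessageFormat;
import java.util.Locale;

public class ChoiceFormatDemo {

    // Formats the apples-in-baskets pattern for the given counts.
    static String apples(int apples, int baskets) {
        MessageFormat format = new MessageFormat(
                "There {0,choice,0#are|1#is|2#are} {0} "
                + "{0,choice,0#apples|1#apple|2#apples} in {1} "
                + "{1,choice,0#baskets|1#basket|2#baskets}.",
                Locale.ENGLISH);
        return format.format(new Object[]{apples, baskets});
    }

    public static void main(String[] args) {
        System.out.println(apples(1, 1)); // There is 1 apple in 1 basket.
        System.out.println(apples(2, 2)); // There are 2 apples in 2 baskets.
        System.out.println(apples(0, 2)); // There are 0 apples in 2 baskets.
    }
}
```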

IBM ICU MessageFormat

The ICU project focuses on internationalisation. It offers its own implementation of MessageFormat that is more robust and easier to use. To incorporate this formatter into the application you need to slightly change the DatabaseMessageSource.resolveMessage method.

private String resolveMessage(String code, Object[] args, Locale locale) {
    String message = ""; // TODO Load message from database...
    MessageFormat messageFormat = new MessageFormat(message, locale);
    StringBuffer formattedMessage = new StringBuffer();
    messageFormat.format(args, formattedMessage, null);
    return formattedMessage.toString();
}

For plural-forms handling the ICU formatter offers a purpose-built plural pattern argument.

There {0,plural,one{is # apple}other{are # apples}} in {1,plural,one{# basket}other{# baskets}}.

Localised URLs

In the Request Mapping on Demand article I've described possibilities of URL localisation on the fly.


There are several more internationalisation approaches available in the Java world. For example, it is possible to use GNU gettext in Java, but it's not that easy to integrate with Spring. I've done some research and found that the combination of ICU and message codes seems to be the best choice.


Request mapping on demand in Spring MVC


I want to generate request mappings at runtime. I want to have several URL paths that point to one controller method. The paths will be generated from e-shop product names. Each path should be bound to a language code sent in a header. Of course, the paths can vary as users change product names during the application's execution.

Here is an example list of mappings. A mapping for the English language will match only if the Accept-Language header matches the en code. The same rule applies to the Czech header.

/spinach, /carrot [header: Accept-Language=en*]

/spenat, /mrkev [header: Accept-Language=cs*]

These prerequisites disqualify the @RequestMapping annotation because:

  • Request mapping conditions are evaluated in the application's init phase and cannot be changed at runtime.
  • It's not possible to attach a mapping path to a specific header value. There is a many-to-many relation between the paths and headers defined in a request mapping condition:
    @RequestMapping(value = {"/spinach", "/carrot", "/spenat", "/mrkev"}, headers = {"Accept-Language=en*", "Accept-Language=cs*"})

Theory of mapping a request to a controller method in Spring MVC

The following figure shows a request's lifecycle in a Spring application.

  1. The first place a request arrives at is the DispatcherServlet. There is only one servlet of this kind and it processes all requests coming into the application.
  2. Then the DispatcherServlet, in its doDispatch method, tries to find a handler capable of processing the request. The DispatcherServlet holds a list of HandlerMappings built at application start. These mappings tell the servlet which particular mapping is suitable for processing the request. RequestMappingHandlerMapping is the interesting one in our case because it maps a controller method according to the @RequestMapping annotation. If the request matches the conditions specified by the @RequestMapping annotation, a request handler is sent back to the servlet. The handler is an instance of HandlerMethod wrapped in a HandlerExecutionChain object; it is essentially a pointer to the controller method.
  3. The handler (HandlerMethod) is passed to the HandlerAdapter's specialization RequestMappingHandlerAdapter and then executed. Actually, RequestMappingHandlerAdapter could easily be renamed HandlerMethodHandlerAdapter because the adapter has almost no relation to the @RequestMapping annotation. This will be useful in one of my solutions.
  4. The handler is executed in the RequestMappingHandlerAdapter.invokeHandlerMethod method. The result is wrapped into a ModelAndView object and sent back to the servlet.
  5. The servlet resolves a view, processes it, and so on.

Solution #1: Custom request condition

Spring allows you to extend @RequestMapping with a custom condition. By overriding the RequestMappingHandlerMapping.getCustomMethodCondition or RequestMappingHandlerMapping.getCustomTypeCondition methods you can create your own condition that will (or will not) match requests. If a request matches the mapping's conditions and your custom condition, the handler is returned by the handler mapping.

Custom conditions are created at the application's runtime, so they can also be changed at runtime.

To demonstrate this approach I've created a demo application Request Mapping On Demand (Custom Condition).
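A condition sketch might look like this (the class, and the way the current product paths are looked up via the ProductPathRegistry, are illustrative and not taken from the demo application):

```java
public class ProductPathCondition implements RequestCondition<ProductPathCondition> {

    private final ProductPathRegistry registry; // hypothetical runtime registry of paths

    public ProductPathCondition(ProductPathRegistry registry) {
        this.registry = registry;
    }

    @Override
    public ProductPathCondition combine(ProductPathCondition other) {
        return other; // the method-level condition wins
    }

    @Override
    public ProductPathCondition getMatchingCondition(HttpServletRequest request) {
        // Match the raw path and Accept-Language header against the
        // currently registered product names.
        String path = request.getRequestURI();
        String language = request.getHeader("Accept-Language");
        return registry.matches(path, language) ? this : null;
    }

    @Override
    public int compareTo(ProductPathCondition other, HttpServletRequest request) {
        return 0;
    }
}
```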


Pros:

  • A sort of Spring-way approach.
  • Less custom code.


Cons:

  • In Spring Boot it's not possible to extend the existing RequestMappingHandlerMapping in a clean way. By adding a new HandlerMapping to the dispatcher you can create a pretty mess. Check the comment in CustomConditionRequestMappingHandlerMapping. Hopefully this will be fixed soon.
  • You have to process the raw HttpServletRequest object in your condition.

This solution was inspired by Rob Hinds's article Spring MVC & custom routing conditions.

Solution #2: Creating a new handler mapping

There is also a demo application which demonstrates this approach: Request Mapping On Demand Demo.

In this solution the @RequestMapping annotation is replaced by a custom @RequestMappingOnDemand. All conditions of this new annotation are created by the application, so no conditions are evaluated in the application's init phase, except for a pointer to a condition manager which creates these conditions.

The handler mapping RequestMappingHandlerMapping is replaced by RequestMappingOnDemandHandlerMapping. This custom handler mapping is capable of processing conditions attached to a controller method by the @RequestMappingOnDemand annotation.

Because the custom handler mapping returns a HandlerMethod, there is no need to change anything in the adapter.


Pros:

  • More robust handling of requests.
  • Works flawlessly with the current version of Spring Boot.


Cons:

  • More custom code.
  • Code duplication between RequestMappingOnDemandHandlerMapping and RequestMappingHandlerMapping.


I've used the second solution in my project because of Spring Boot's inability to extend RequestMappingHandlerMapping. I will probably switch to the first solution when this is fixed.


Spring Boot + JPA (Hibernate) + Atomikos + PostgreSQL = exception!

If you try to use the combination of Spring Boot (1.3.3), Spring Data JPA, Atomikos and a PostgreSQL database, you will probably experience an exception during application start.

java.sql.SQLFeatureNotSupportedException: Method org.postgresql.jdbc.PgConnection.createClob() is not yet implemented.
 at org.postgresql.Driver.notImplemented(Driver.java:642) ~[postgresql-9.4.1209.jre7-20160307.201142-10.jar:9.4.1209.jre7-SNAPSHOT]

com.atomikos.datasource.pool.CreateConnectionException: an AtomikosXAPooledConnection with a SessionHandleState with 0 context(s): connection is erroneous
 at com.atomikos.jdbc.AtomikosXAPooledConnection.testUnderlyingConnection(AtomikosXAPooledConnection.java:116) ~[transactions-jdbc-3.9.3.jar:na]

These exceptions appear because JPA (Hibernate), backed by Atomikos, tries to verify PostgreSQL's CLOB feature. The feature is not implemented in the JDBC driver, so the driver throws an unimportant exception. Unfortunately, Atomikos has an exception listener which marks the connection as erroneous whenever any exception occurs.

To suppress this behaviour you have to disable the driver's feature detection and configure its features manually. This is a kind of shady, undocumented way to do it, but it works.

# Disable feature detection by this undocumented parameter. Check the org.hibernate.engine.jdbc.internal.JdbcServiceImpl.configure method for more details.
spring.jpa.properties.hibernate.temp.use_jdbc_metadata_defaults = false

# Because detection is disabled you have to set the correct dialect by hand.
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.PostgreSQL9Dialect

For more details about configuring distributed transactions with Atomikos, check Fabio Maffioletti's article.