Reference to a static method + instrumentation = fail!

In a Java project I am using the ActiveJDBC ORM. Part of the ORM is an instrumentation process that adds new methods into compiled classes. In my case the instrumentation adds a few static factory methods into my Record class, for example Record.findAll(), which returns all records from the database.

I have an execute method that accepts a single parameter of type Callable. The method opens a connection, executes what's in the parameter and then closes the connection.
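The execute method itself isn't shown in the post, so here is a minimal sketch of how such a method might look; the connection helpers are placeholders for what would be ActiveJDBC's Base.open(...)/Base.close() calls:

```java
import java.util.concurrent.Callable;

// Hypothetical sketch of the execute method described above.
public class Repository {

    public static <T> T execute(Callable<T> work) throws Exception {
        openConnection();          // attach a DB connection to the current thread
        try {
            return work.call();    // run the caller-supplied database operation
        } finally {
            closeConnection();     // always release the connection, even on failure
        }
    }

    // Placeholders; in ActiveJDBC these would delegate to Base.open(...) / Base.close()
    private static void openConnection()  { }
    private static void closeConnection() { }
}
```

The try/finally guarantees the connection is closed even when the callable throws.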

I implemented this in a simple way: in a repository class, I just passed a reference to a static method of the instrumented class into the execute method. After building the application in IntelliJ I got the well-known failed to determine Model class name, are you sure models have been instrumented? exception.

Even though the model class was instrumented, it didn't work. Then I made a few simple changes in the repository class, not related to the execute method itself, and it started working. I was confused.

After disassembling the repository class I noticed that there was a reference to Record's parent class instead of Record itself. Making small changes in the repository and recompiling fixed the reference.


  • The first compilation went this way:

    1. Java compiled the clean project (all classes were compiled).

      First, it compiled the model class. The class was not yet instrumented, so there were no static methods at this time.

      Then it compiled the repository class. Because the model did not have any static methods at that time, the reference to a static method of the model class was "pointed" to the parent class method.

    2. The ActiveJDBC plugin instrumented the model class, but the reference from the repository to the model's parent was already set, so the instrumentation didn't change anything.

    At this moment, the bytecode of the static method call looked like this:

    Method arguments:
      #33 ()Ljava/lang/Object;
      #34 invokestatic org/javalite/activejdbc/Model.findAll:()Lorg/javalite/activejdbc/LazyList;
      #35 ()Ljava/util/List;
  • The second compilation, after small changes in the repository:

    1. Java compiled only the changed repository class. At this moment the model class already had its static methods, so the reference was set correctly.

    2. The ActiveJDBC plugin instrumented the model class again, but that did not matter at this point.

    Now, the bytecode was correct:

    Method arguments:
      #33 ()Ljava/lang/Object;
      #34 invokestatic demo/Record.findAll:()Lorg/javalite/activejdbc/LazyList;
      #35 ()Ljava/util/List;

How to avoid this

Instead of a reference to a static method, use a lambda that calls the method.

So instead of this:

public List<Record> findAll() throws Exception {
    return execute(Record::findAll);
}

write this:

public List<Record> findAll() throws Exception {
    return execute(() -> Record.findAll());
}

In this case Java will not create a reference to the parent class method Model.findAll() but will point correctly to Record.findAll().


Running Firefox in kiosk mode on Ubuntu Gnome 16.04

So we bought a few industrial all-in-one PCs from the Chinese company QI YU TAI. These PCs came with pre-installed Windows, but because of possible legal issues we decided to replace Windows with Ubuntu.

Basic parameters of those PCs are:

  • Various Celeron processors.
  • Screen sizes from 12 up to 19 inches.
  • eGalaxTouch resistive touch layer.
  • SSD hard disk.

In the beginning I planned to install Ubuntu with the default desktop environment, Unity. Unfortunately Unity lacks support for an on-screen keyboard, so I had to switch to the Gnome desktop, which seems to be the best option for a touch screen in the Linux world.


  1. Install Ubuntu Gnome 16.04.

    The Ubuntu homepage has several tutorials on creating a bootable USB stick.

  2. Calibrate the touch screen and save the calibration, so it will be restored on every boot.

    By default the touch screen has inverted axes. To calibrate it, follow the tutorial on the Ubuntu wiki: https://wiki.ubuntu.com/Touchscreen

    I used a startup script to load the calibration at system start. Alternatively, you can run xinput_calibrator without parameters, follow the printed instructions and store the calibration as an X11 configuration.

    Optionally, you can install the driver from the touch screen manufacturer's site to get multi-touch, gestures and more features: http://www.eeti.com.tw/drivers_Linux.html

    Note: calibrating a rotated screen can be quite challenging. I had to do it manually by changing the calibration values and testing each configuration by hand.

  3. Turn off all power-saving and screen-locking features of the system.

    • Settings -> Power: turn off the "Dim screen when inactive" option.
    • Settings -> Power: set the "Blank screen" option to never.
    • Settings -> Privacy: disable the screen lock.
    • Disable DPMS (Energy Star) by executing xset s off -dpms. You will want to run this command on every start of the system by adding it to Startup Applications.
    • Disable suspend, as described on Debian.org: sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target
    • Disable the screen shield that makes you unlock the screen after a period of inactivity: https://github.com/lgpasquale/gnome-shell-extension-disable-screenshield
  4. Disable crash reports, so the user won't be bothered by a crash report message when something exceptional happens.

    Do it by setting the enabled property to 0 in the /etc/default/apport file.

  5. Enable an on-screen keyboard.

    • First you will need to install a few packages, so that applications like Firefox will have touch support.

      apt-get install qt-at-spi caribou
    • Settings -> Universal Access: turn on the "Screen Keyboard" option.

    Note: the keyboard should disappear when you focus out of an input field. This does not work in the current release of qt-at-spi; it was fixed only recently: https://bugzilla.mozilla.org/show_bug.cgi?id=789038#c12

  6. Show the system's IP address on startup.

    I want to show the system's IP address on startup so an administrator can easily connect to the machine over the network and manage it without an external keyboard. To do that, just create a simple script that gets the IP address and displays it as a notification bubble via the notify-send application. Then add this script to Startup Applications.

    ip_addresses=$(ifconfig | grep -oE "inet addr:[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+")
    notify-send "$ip_addresses"
  7. Set up Firefox for kiosk mode.

    Install the mKiosk plugin and configure it according to your needs.

    Install the Click Drag Scroll extension so you can scroll a page by simply dragging it.

    Then add Firefox to Startup Applications.

  8. Install Unclutter to hide the mouse cursor.

    apt-get install unclutter
    unclutter -idle 0.01

    Or you can add the unclutter command to Startup Applications.

  9. Install an SSH server to be able to connect to the machine remotely.

    sudo apt-get install openssh-server
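A side note on the IP-address script in step 6: ifconfig is deprecated on newer distributions and its output format varies. A sketch of the same idea using the ip tool instead (assuming ip and notify-send are installed):

```shell
#!/bin/sh
# Collect IPv4 addresses with the `ip` tool (a modern replacement for ifconfig).
# Each output line looks like "eth0: 192.168.1.10/24".
ip_addresses=$(ip -4 -o addr show 2>/dev/null | awk '{print $2": "$4}')
# Show them in a notification bubble (skipped here if notify-send is absent)
if command -v notify-send >/dev/null 2>&1; then
    notify-send "IP addresses" "$ip_addresses"
fi
```

As with the original script, add it to Startup Applications so it runs on every boot.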


Running Chrome in kiosk mode on Windows 10 Home

For Factorify we wanted to create a touch screen where employees could log their attendance. We got a cheap all-in-one PC from China (PiPO X9) with pre-installed Windows 10 Home and an RFID card reader.

Our application is a web page. We wanted to start the application in a browser automatically and prevent users from exiting it or doing anything harmful on the PC, at least users without a keyboard.

Configuration steps:

  • Create a user with limited permissions, so in case anything goes horribly wrong they won't be able to destroy the system.

    • Configure the system to automatically log in as the newly created user. To do this, start the netplwiz application, select the user and disable the "Users must enter a user name and password to use this computer" option.
  • Then disable the Windows shell (tiles). To do this I decided to run a "dummy" program instead of explorer.exe on system startup.

    Open regedit and set the value at the following path:

    HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\Shell = rundll32

    • We added the value under HKEY_CURRENT_USER because changing it in HKEY_LOCAL_MACHINE would disable the shell for all users of the computer.
    • The Shell value will probably not be present, so you will need to create it.
    • An empty value doesn't work, so instead we execute rundll32, which actually does nothing.
  • Create a task in Task Scheduler that will start the Chrome browser as soon as an internet connection is available. To do this, start Task Scheduler, create a new task and configure the "start only if the following network connection is available" option.

    Then define an action:

    cmd /C "chrome --incognito --disable-pinch --kiosk http://www.factorify.me/"

    • Chrome is executed through cmd so it's started as a "different" process. That is because when the connection becomes unavailable, Task Scheduler kills the process it started.
    • The --incognito parameter prevents Chrome from showing an "application crashed" bubble after an incorrect application exit. Incognito mode is a good fit for kiosk mode anyway.
    • --disable-pinch prevents the user from zooming the page with multi-touch gestures.
  • Then fine-tune the system.

    • Disable the swipe gesture that loads a page from history. Do it by setting Chrome's chrome://flags/#overscroll-history-navigation flag to disabled.
    • Configure an English keyboard layout so the RFID reader works properly.
    • Connect the PC to the internet.
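For convenience, the shell-disabling registry change from the steps above can also be captured in a .reg file and imported with a double click; a sketch using the same path and value:

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Winlogon]
"Shell"="rundll32"
```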

Flaws, possible improvements:

  • Disable automatic Windows updates by turning off the update service.
  • The PiPO X9 doesn't have any sort of internal battery, so when the power goes off, your kiosk goes off as well. A newer version of the machine, the PiPO X10, solves the problem.
  • My original plan was to use Android for the task. The PiPO X9 actually has two systems preinstalled: Windows 10 and Android 4.4. Unfortunately, creating a kiosk on Android older than 5.x is rather problematic. The PiPO X10 ships with Android 5.1, which should solve the problem.
  • There is a custom boot loader that lets you choose an operating system when the machine starts. I did not manage to turn it off, so when restarted, the machine will always show the system-select menu. The last started system is automatically selected after 10 seconds or so, but you have to wait.
  • I did not introduce any kind of health monitoring of the machine and the applications running on it. One thing that comes to mind: you can configure the URL where Chrome sends crash reports in case of a failure. That would be useful, I guess.


Wi-Fi and Bluetooth USB dongles for Raspberry Pi 1 model B with Raspbian Jessie

Things have really improved since the last time I tried to expand my Raspberry Pi 1 model B with Wi-Fi and Bluetooth.

I am still using cheap Chinese USB dongles. This time my Raspberry is running Raspbian Jessie.

  • 802.11n Wi-Fi dongle with Realtek's RTL8188EUS chipset
  • Bluetooth adapter compatible with Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode) adapter.


To make Wi-Fi work, just add information about your network to the /etc/wpa_supplicant/wpa_supplicant.conf file. You can add multiple networks; the Raspberry will try to connect to the first available one.


You may need to restart the interface with sudo wpa_cli reconfigure. Check out the official documentation.
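For illustration, a minimal network block in wpa_supplicant.conf might look like this (SSID and passphrase are placeholders):

```conf
network={
    ssid="MyHomeNetwork"
    psk="MySecretPassphrase"
}
```

Add one network block per Wi-Fi network you want the Raspberry to try.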


First of all, start the Bluetooth console by typing bluetoothctl. Everything else will be done in this console.

agent on


scan on
    ...here you should see a list of "pairable" devices

connect XX:XX:XX:XX:XX:XX
    ...if you get the error Access denied: org.bluez.Error.Rejected in syslog,
       you need to "trust" the device first: trust XX:XX:XX:XX:XX:XX


How not to get confused by Spring Boot web security auto-configuration

In my project (Spring Boot + Security + Thymeleaf) I wanted to configure custom web security as described in the "getting started" article on Spring's official website. I followed the steps in the article and created my custom configuration, but I forgot to add the @EnableWebSecurity annotation. Everything seemed to work fine, except for Thymeleaf's sec:authorize-url attribute.

What happened?

By not specifying the @EnableWebSecurity annotation I didn't disable Spring Boot's default security auto-configuration in SpringBootWebSecurityConfiguration.ApplicationWebSecurityConfigurerAdapter. This auto-configuration creates a security configuration (an HttpSecurity object) that denies unauthorised users all access to the application, except for some static resources, and creates a user with the role ROLE_USER and a random password for testing purposes.

My configuration also created an HttpSecurity object, with my own security rules and my own user repository.

Both HttpSecurity configurations were passed to WebSecurity, which is responsible for creating the Spring Security filter chain. As you may know, a secured request goes through this chain until one of the filters "catches" and processes it.

Because my custom configuration bean had a higher priority, it was located higher in the security filter chain. So all requests were caught and processed by my filter and not by the filter created by the Spring Boot auto-configuration. So far so good.

Problem with two HttpSecurity configurations

When configured, the WebSecurity object holds an instance of FilterSecurityInterceptor. There is only a single field for it, so the WebSecurity object can hold no more than one interceptor.

The FilterSecurityInterceptor is a crucial part of the Spring Security project. Actually, it is its parent class AbstractSecurityInterceptor and its implementations that make Spring Security breathe.

  • FilterSecurityInterceptor intercepts ServletRequests and decides whether the current user has permission to proceed. It relies on other classes such as SecurityContextHolder (holds information about the current user, their security context), AccessDecisionManager (evaluates a request against the current security context) and so on.
  • MethodSecurityInterceptor intercepts method calls, similarly to FilterSecurityInterceptor. It uses Spring's proxies.
  • AspectJMethodSecurityInterceptor is similar to MethodSecurityInterceptor, with support for AspectJ.

The FilterSecurityInterceptor is built from the information in HttpSecurity. In the WebSecurityConfigurerAdapter.init method, Spring populates WebSecurity with the FilterSecurityInterceptor instance. If more than one FilterSecurityInterceptor is created, the one created later overwrites the interceptor already present in WebSecurity!

The Thymeleaf-Spring Security integration gets the FilterSecurityInterceptor from WebSecurity and uses it to evaluate the value of the sec:authorize-url attribute. Obviously, in this case it got Spring Boot's default configuration, which is not what I wanted.

So, don't forget to disable the default configuration by adding the @EnableWebSecurity annotation to your security configuration class.
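For illustration, a custom configuration with the annotation in place might be sketched like this (the class name and the access rules are made up; the API is Spring Security 4's WebSecurityConfigurerAdapter):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableWebSecurity // disables Spring Boot's default security auto-configuration
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .authorizeRequests()
                .antMatchers("/public/**").permitAll() // illustrative rule
                .anyRequest().authenticated()
                .and()
            .formLogin();
    }
}
```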


Spring Boot + PostgreSQL + Custom Type = exception!

I have a demo application that shows features of various ORM frameworks. The application performs several basic operations on a database. To have a clean database for each run, I wanted to recreate the whole database at startup. This means dropping the whole schema and then re-initialising it by executing the following script.


CREATE TYPE plane_dimensions AS (
  length_meters   DECIMAL,
  wingspan_meters DECIMAL,
  height_meters   DECIMAL
);

CREATE TABLE plane (
  name         VARCHAR(250)     NOT NULL,
  dimensions   plane_dimensions NOT NULL
);

Unfortunately, this doesn't work.

When the application connects to the database, it loads the existing types and stores them in a cache. Then my type is dropped and immediately re-created by the script. But the re-created type has a different OID than the former, cached one. So if you try to use it from the application, you will get an exception.

Caused by: org.postgresql.util.PSQLException: ERROR: cache lookup failed for type 1234567

The solution is to re-initialise the connection, or not to modify the type.

A query to find the type's OID:

SELECT oid FROM pg_type WHERE typname = 'plane_dimensions';
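You can watch the OID change directly in psql; a sketch (the returned numbers will differ on your system):

```sql
SELECT oid FROM pg_type WHERE typname = 'plane_dimensions'; -- some OID

DROP TYPE plane_dimensions CASCADE;
CREATE TYPE plane_dimensions AS (
  length_meters   DECIMAL,
  wingspan_meters DECIMAL,
  height_meters   DECIMAL
);

SELECT oid FROM pg_type WHERE typname = 'plane_dimensions'; -- a new, different OID
```

Any connection that cached the old OID will fail on the next use of the type.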


Shared test sources in Gradle multi-module project

Basic assumptions

Let's suppose the following project structure.

:root-module
|    +--- main source set
|    \--- test source set
\--- :module1
     |    +--- main source set
     |    \--- test source set
     \--- :core-module
              +--- main source set
              \--- test source set
Shared test-utility classes shouldn't be placed in test source set

In some legacy projects I've seen utility classes located in the test source set. This is problematic in multi-project applications because Gradle does not pull test sources into a dependent module. So module1 won't "see" classes from core-module's test source set. This makes sense, because you don't want module1's test classpath flooded with core-module's test classes.

Note: IDEA 14 does not respect this rule and pulls test sources into a dependent module. This behaviour has been fixed recently.

Test-utility classes shouldn't be placed in main source set

For obvious reasons you don't want to pollute the main source set with test classes. The main source set should contain production code only. Tests are usually executed during the application's build phase, so there is no need to put them into the final build.

In a Spring application test classes shouldn't be located in a component-scanned package

This is mandatory for Spring developers who use component scanning. In some cases you may want to create an annotated class used exclusively by tests.

IDEA places all source sets marked as sources on the classpath. So if you place your test-only annotated class into a package scanned by Spring, your application context will end up polluted.

Shared test classes should be kept in the module where they belong

Because of the previous point, some people have a tendency to put shared test classes into a stand-alone module. Although core-module-test (main) -> core-module (main), core-module (test) -> core-module-test (main) is a legal construct in Gradle, you don't want to complicate your project's dependency structure by creating "unnecessary" modules.

Module dependency resolution basics in Gradle and IDEA 14 and 16


Gradle allows you to define a dependency between two module configurations.

// Declared in root-module
dependencies {
    // Current module's testCompile configuration depends on another module's default configuration.
    testCompile project(':module1')
    // Current module's configuration depends on another module's test configuration.
    testCompile project(path: ':core-module', configuration: 'test')
}

A dependency on the default configuration of another module pulls the module's dependencies and sources into the dependent module. A dependency on a test configuration pulls only the dependencies, without the sources. So in our case root-module won't see any of core-module's test classes.


In contrast to Gradle, IDEA 14 pulls all sources into a dependent module, even test sources.

If you open a Gradle project, IDEA will behave oddly: it will let you run tests that Gradle won't even compile. Unfortunately, IDEA 14 doesn't have any feature that can fix this, so you have to keep it in mind when writing your tests.


The latest version of IDEA does not pull test sources into a dependent module, which is consistent with Gradle. It also allows you to define a dependency on a specific source set; you just have to choose the create separate module per source set option when importing a Gradle project. This option was added recently to harmonize IDEA's dependency resolution with Gradle's.

Using shared test sources by dependent module

Declaring dependency to a module's output directory

In projects that keep utility test classes in the test source set, the usual hot-fix is to add a testCompile dependency on the other module's output directory. So in our example module1's tests will depend on core-module's compiled test classes.

// Declared in module1
dependencies {
  testCompile project(':core-module').sourceSets.test.output
}

This is a quick solution but it has some cons:

  • In your IDE you have to rebuild the shared classes on every change to refresh the dependencies. Remember, IDEA 16 does not pull test sources.
  • It's against the requirements from the first chapter of this article.
Placing utility classes to a custom source set

The good solution is to create a custom source set for shared test classes and declare dependencies on this source set from dependent modules. This approach allows you to keep classes in the modules where they belong, still separated from the main and test sources. It is used by the Gradle project itself.

Implementation is quite straightforward. First, create a script plugin that adds a new testFixtures source set and the appropriate configurations to a module.
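A sketch of what such a testFixtures.gradle script plugin might contain; this is an assumption, not the exact script, but the configuration name matches the testFixturesUsageCompile configuration referenced later in this article:

```groovy
// testFixtures.gradle -- a sketch of the script plugin
sourceSets {
    // new source set in src/testFixtures/java; Gradle auto-creates
    // testFixturesCompile and testFixturesRuntime configurations for it
    testFixtures {
        compileClasspath += sourceSets.main.output
        runtimeClasspath += sourceSets.main.output
    }
}

configurations {
    testFixturesCompile.extendsFrom compile
    testFixturesRuntime.extendsFrom runtime
    // the configuration other modules can depend on
    testFixturesUsageCompile {
        extendsFrom testFixturesCompile
    }
}

dependencies {
    // this module's own tests can use the fixtures
    testCompile sourceSets.testFixtures.output
    // expose the compiled fixture classes through the usage configuration
    testFixturesUsageCompile sourceSets.testFixtures.output
}
```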

Then apply the script plugin to a module and declare the necessary dependencies.

// Declared in core-module
apply from: 'testFixtures.gradle'

dependencies {
    // dependencies of the testFixtures source set go here
}
Finally, add a dependency on the newly created source set.

// Declared in module1
dependencies {
    testCompile project(path: ':core-module', configuration: 'testFixturesUsageCompile')
}

Note for IDEA users: there is a small drawback. If you open the project without the create separate module per source set option, your custom source set will be merged into the main source set. If you then use a shared test class from the main source set, IDEA won't complain, but Gradle won't be able to compile it. So you have to be careful about what you use.