Future and FutureTask

Futures and FutureTasks are a great way to represent the result of an operation running on another thread. They also provide facilities like future.get(), which blocks for as long as the Future's operation is not yet complete.

Submitting a Callable (which differs from Runnable in that it returns a value and can throw exceptions) forces the Executor to return a Future. This is a very useful technique for all those cases where we have expensive operations: the sooner we start them the better. At the other end, when we invoke future.get(), the call blocks until we get a result back from the operation. For most purposes it is safe to think of our operation's result as a Future that notifies us upon completion.

We can create a Future:

final Future<?> future = Executors.newCachedThreadPool().submit(ourCallable);

or alternatively a FutureTask:

FutureTask<Object> futureTask = new FutureTask<Object>(ourCallable);
Thread thread = new Thread(futureTask);
thread.start();

At the other end we can retrieve the result of our expensive operation as follows:

Object result = future.get();
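
Putting it all together, here is a minimal self-contained sketch (the Callable body and the sleep duration are made up purely for illustration):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureExample {

	public static void main(String[] args) throws InterruptedException, ExecutionException {
		ExecutorService executor = Executors.newCachedThreadPool();

		// kick off the expensive operation as early as possible
		Future<String> future = executor.submit(new Callable<String>() {
			public String call() throws Exception {
				Thread.sleep(1000); // simulating an expensive operation
				return "expensive result";
			}
		});

		// ... do other useful work here ...

		// block until the result is available
		String result = future.get();
		System.out.println(result);

		executor.shutdown();
	}
}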

Executors

Have you ever done the following:

new Thread(runnable).start();

all over the place, all over the code? What if there was a centralised pool for all sorts of threads to live in? This way we could achieve a certain level of thread reuse and do other smart stuff like pre-initialisation etc.

Enter Executors. Part of the java.util.concurrent package, this utility class offers convenient static factory methods that create different flavours of ThreadPoolExecutor.

FixedThreadPool

We can create a FixedThreadPool as follows:

final Executor executor = Executors.newFixedThreadPool(5);

Behind the scenes, the static factory method looks like this:

public static ExecutorService newFixedThreadPool(int nThreads) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>());
    }

This creates a thread pool with a core size of 5 and a max pool size of 5. The pooled threads are never released (a keep-alive time of 0 milliseconds as the third and fourth arguments) and are maintained, even when idle, for the whole lifetime of the FixedThreadPool executor. Lastly, a LinkedBlockingQueue of Runnables holds the submitted tasks until worker threads become available.

If we now execute some runnable like this:

executor.execute(new Runnable(){public void run(){/*do work here*/}});

say, for instance, passing 10 runnables: 5 of them start executing immediately on the pooled threads, while the remaining 5 wait in the queue. From that point onwards the 5 queued runnables are handed one by one to the 5 pool threads as they become free. Interestingly, if there are no more runnables for the 5 created threads, these threads remain idle inside the FixedThreadPool, patiently waiting for new runnables until the executor is shut down.
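
Here is a small sketch of that scenario (task bodies and counts are illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FixedThreadPoolExample {

	public static void main(String[] args) throws InterruptedException {
		ExecutorService executor = Executors.newFixedThreadPool(5);

		// submit 10 runnables: 5 run immediately, 5 wait in the queue
		for (int i = 0; i < 10; i++) {
			final int taskId = i;
			executor.execute(new Runnable() {
				public void run() {
					System.out.println("Task " + taskId + " on " + Thread.currentThread().getName());
				}
			});
		}

		executor.shutdown();
		executor.awaitTermination(1, TimeUnit.MINUTES);
	}
}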

If we don't want to lose time we can initialise the thread pool, creating all the core threads before the first runnables are submitted:

int count = ((ThreadPoolExecutor) executor).prestartAllCoreThreads();

SingleThreadExecutor

We can create a SingleThreadExecutor instance by calling the static factory method:

final Executor executor = Executors.newSingleThreadExecutor();

which behind the scenes calls the ThreadPoolExecutor constructor in the following flavour:

public static ExecutorService newSingleThreadExecutor() {
        return new FinalizableDelegatedExecutorService
            (new ThreadPoolExecutor(1, 1,
                                    0L, TimeUnit.MILLISECONDS,
                                    new LinkedBlockingQueue<Runnable>()));
    }

Based on these arguments, core pool size = max pool size = 1 and the thread is kept in the pool, even when idle, until the single-thread executor is shut down. As with the FixedThreadPool, a LinkedBlockingQueue of Runnables holds the tasks before they are submitted one by one to the sole worker thread.
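
A quick illustrative sketch: tasks submitted to a single-thread executor are guaranteed to run sequentially, in submission order.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SingleThreadExecutorExample {

	public static void main(String[] args) {
		ExecutorService executor = Executors.newSingleThreadExecutor();

		// all three tasks run on the same thread, one after the other
		for (int i = 0; i < 3; i++) {
			final int taskId = i;
			executor.execute(new Runnable() {
				public void run() {
					System.out.println("Task " + taskId + " on " + Thread.currentThread().getName());
				}
			});
		}

		executor.shutdown();
	}
}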

CachedThreadPool

A CachedThreadPool Executor can be created as follows:

final Executor executor = Executors.newCachedThreadPool();

which makes a call to the ThreadPoolExecutor constructor with the following arguments:

public static ExecutorService newCachedThreadPool() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>());
    }

So although the static factory method takes no arguments, the underlying ThreadPoolExecutor constructor creates an effectively unbounded thread pool (zero core threads, a max pool size of Integer.MAX_VALUE). Each runnable is handed straight to a thread through the SynchronousQueue, with a new thread created if no idle one is available. After completing a task, threads stay idle in the pool for 60 seconds and are decommissioned if not reused within that time.
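
A small sketch of the reuse behaviour (whether the second task actually lands on the first, now-idle thread depends on timing, hence the hedged comment):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CachedThreadPoolExample {

	public static void main(String[] args) throws InterruptedException {
		ExecutorService executor = Executors.newCachedThreadPool();

		Runnable task = new Runnable() {
			public void run() {
				System.out.println("Running on " + Thread.currentThread().getName());
			}
		};

		executor.execute(task);   // creates a new thread
		Thread.sleep(100);        // let the first task finish
		executor.execute(task);   // likely reuses the now-idle thread

		executor.shutdown();
	}
}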

Log4j Properties vs XML: IDE autocompletion and code assistance

Here’s why I prefer Log4j configured via an XML rather than a properties file:

I love my IDEs, and a good reason for that is their autocompletion capabilities. XML-structured configuration files tend to publish their grammar, syntax and rules in an XSD or DTD file that is publicly available, helping IDEs (and patient humans) autocomplete while editing the XML file.

I personally highly value this convenience.

Now, for some reason, whenever I search the Internet for “log4j xml file” I get back lots of sample config files that all start with:

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">

<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">

This simply doesn’t work and doesn’t help the IDE (be it IntelliJ or Eclipse), which is looking in vain for guidance and instructions in a missing log4j.dtd file (found under org.apache.log4j.xml) or at the meaningless http://jakarta.apache.org/log4j/ URL.

To keep the IDE happy, replace the SYSTEM identifier with the full URL of the DTD and remove the latter namespace URL (log4j:configuration is defined in the DTD file anyway, which keeps the IDE's autocompletion system happy):

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE log4j:configuration SYSTEM "http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/xml/doc-files/log4j.dtd">
<log4j:configuration>
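
For reference, a minimal complete configuration under that DOCTYPE could look like the following (the appender name and conversion pattern here are just illustrative):

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE log4j:configuration SYSTEM "http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/xml/doc-files/log4j.dtd">
<log4j:configuration>

	<appender name="console" class="org.apache.log4j.ConsoleAppender">
		<layout class="org.apache.log4j.PatternLayout">
			<param name="ConversionPattern" value="%d{ISO8601} [%t] %-5p %c - %m%n" />
		</layout>
	</appender>

	<root>
		<priority value="info" />
		<appender-ref ref="console" />
	</root>

</log4j:configuration>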

Spring, ActiveMQ, Maven example

This is a quick showcase of the simplest skeletal implementation of Spring and ActiveMQ under a Maven project.

First off, the POM looks like this:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.dimitrisli.activemq</groupId>
  <artifactId>SpringActiveMQ</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <build>
  	<plugins>
  		<plugin>
  			<groupId>org.apache.maven.plugins</groupId>
  			<artifactId>maven-compiler-plugin</artifactId>
  			<version>2.3.2</version>
  			<configuration>
  				<source>1.6</source>
  				<target>1.6</target>
  				<encoding>${project.build.sourceEncoding}</encoding>
  			</configuration>
  		</plugin>
  	</plugins>
  </build>
  <dependencies>
    <dependency>
      <groupId>org.apache.activemq</groupId>
      <artifactId>activemq-all</artifactId>
      <version>5.1.0</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-jms</artifactId>
      <version>3.1.0.RELEASE</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
    	<groupId>log4j</groupId>
    	<artifactId>log4j</artifactId>
    	<version>1.2.16</version>
    </dependency>
    <dependency>
    	<groupId>org.slf4j</groupId>
    	<artifactId>slf4j-log4j12</artifactId>
    	<version>1.6.4</version>
    </dependency>
    <dependency>
    	<groupId>commons-pool</groupId>
    	<artifactId>commons-pool</artifactId>
    	<version>1.5.7</version>
    </dependency>
    <dependency>
    	<groupId>org.apache.geronimo.specs</groupId>
    	<artifactId>geronimo-jta_1.1_spec</artifactId>
    	<version>1.1.1</version>
    </dependency>
  </dependencies>
  <properties>
  	<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
</project>

Then the JMS context looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">

	<bean id="jmsConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
		<property name="brokerURL">
			<!-- value>tcp://localhost:61616</value -->
			<value>vm://localhost</value>
		</property>
	</bean>

	<bean id="pooledJmsConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory"
		destroy-method="stop">
		<property name="connectionFactory" ref="jmsConnectionFactory" />
	</bean>

	<bean id="destination" class="org.apache.activemq.command.ActiveMQQueue">
		<constructor-arg value="jmsExample" />
	</bean>

	<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
		<property name="connectionFactory" ref="pooledJmsConnectionFactory" />

	</bean>

</beans>

Things to notice:

  • ActiveMQConnectionFactory takes care of the JMS connections, given the brokerURL; in this example we specify the vm transport for simplicity, since we don’t want to run a separate broker.
  • We wrap the previous JMS connection factory in a PooledConnectionFactory, since we don’t want to open a new connection for every message sent.
  • We specify an ActiveMQQueue as our destination.
  • Any access to the JMS implementation API is done through Spring’s convenience JmsTemplate class.

Finally our test example:

package com.dimitrisli.activemq;

import javax.jms.JMSException;
import javax.jms.TextMessage;

import org.apache.activemq.command.ActiveMQDestination;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.jms.core.JmsTemplate;

public class SpringActiveMQTest {

	public static void main(String[] args) throws JMSException {

		ApplicationContext context = new ClassPathXmlApplicationContext("spring/jms/jms-context.xml");

		JmsTemplate template = (JmsTemplate) context.getBean("jmsTemplate");
		ActiveMQDestination destination = (ActiveMQDestination) context.getBean("destination");

		// sending a message
		template.convertAndSend(destination, "Hi");

		// receiving a message
		Object msg = template.receive(destination);
		if (msg instanceof TextMessage) {
			try {
				System.out.println(((TextMessage) msg).getText());
			} catch (JMSException e) {
				System.out.println(e);
			}
		}

	}
}

The code can be found in this Github repository.

All available methods to Hack Spring’s Bean Lifecycle

There are three ways to intercept Spring's bean lifecycle just after initialisation and moments before destruction. Namely, and in chronological order of appearance, this can be done via interfaces, via bean-definition methods, and finally via annotations.

I’ve put together a trivial sample application to showcase all three methods.

The POM looks like so:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.dimitrisli.spring</groupId>
  <artifactId>SpringBeanInitializationDestruction</artifactId>
  <version>1.0</version>
  <properties>
  	<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
  <dependencies>
  	<dependency>
  		<groupId>org.springframework</groupId>
  		<artifactId>spring-context</artifactId>
  		<version>3.1.0.RELEASE</version>
  	</dependency>
  	<dependency>
  		<groupId>org.jboss.spec.javax.annotation</groupId>
  		<artifactId>jboss-annotations-api_1.1_spec</artifactId>
  		<version>1.0.0.Final</version>
  	</dependency>
  </dependencies>
  <build>
  	<plugins>
  		<plugin>
  			<groupId>org.apache.maven.plugins</groupId>
  			<artifactId>maven-compiler-plugin</artifactId>
  			<version>2.3.2</version>
  			<configuration>
  				<source>1.6</source>
  				<target>1.6</target>
  				<encoding>${project.build.sourceEncoding}</encoding>
  			</configuration>
  		</plugin>
  	</plugins>
  </build>
</project>

The first way involves implementing Spring’s callback interfaces InitializingBean and DisposableBean. The POJO bean looks like this:

package app.withinterface;

import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.InitializingBean;

public class InterfaceBeanInitializationDestructionPojo implements InitializingBean, DisposableBean{

	private String text;

	public String getText(){
		return this.text;
	}

	public void setText(String text){
		this.text = text;
	}

	public void destroy() throws Exception {
		System.out.println("During bean destruction, wired by the DisposableBean interface...");
	}

	public void afterPropertiesSet() throws Exception {
		System.out.println("During bean initialization, wired by the InitializingBean interface...");
	}

}

The equivalent definition on the context config file looks like this:

<bean id="interfaceBeanInitializationDestructionPojo"
      class="app.withinterface.InterfaceBeanInitializationDestructionPojo"
      scope="singleton">
    <property name="text" value="withInterfaces-DuringBeanBigTimeInContextLife..." />
</bean>

which is basically nothing special, other than declaring the bean in the config file. The callback interfaces do all the hard work at runtime, being invoked by the Spring framework.

Secondly, we have the XML config method definition. The POJO bean in this case looks like this:

package app.xml;

public class XMLBeanInitializationDestructionPojo {

	private String text;

	public String getText() {
		return text;
	}

	public void setText(String text){
		this.text = text;
	}

	public void myInitBeanMethod(){
		System.out.println("During init of the bean, wired by XML...");
	}

	public void myDestroyBeanMethod(){
		System.out.println("During destruction of the bean, wired by XML... ");
	}
}

and its corresponding context config entry:

<bean id="xmlBeanInitializationDestructionPojo"
      class="app.xml.XMLBeanInitializationDestructionPojo"
      init-method="myInitBeanMethod" destroy-method="myDestroyBeanMethod"
      scope="singleton">
    <property name="text" value="withXML-DuringBeanBigTimeInContextLife..." />
</bean>

Please note the explicitly defined init-method and destroy-method.

Finally, the annotation-driven definition uses the JSR-250 @PostConstruct and @PreDestroy annotations.

Here’s the POJO:

package app.annotation;

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;

public class AnnotationsBeanInitializationDestructionPojo {

	private String text;

	public void setText(String text){
		this.text=text;
	}
	public String getText(){
		return this.text;
	}

	@PostConstruct
	public void myInitMethod(){
		System.out.println("During bean initialization, wired by the Annotations...");
	}

	@PreDestroy
	public void myDestroyMethod(){
		System.out.println("During bean destruction, wired by the Annotations...");
	}
}

and in order to have these annotations picked up by the Spring framework at runtime, all we need is a context component-scan definition in the config file:
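
A minimal sketch of that entry (the base-package value is an assumption, and the spring-context namespace must be declared on the beans element; note that component-scan implicitly registers the annotation post-processors honouring @PostConstruct and @PreDestroy, while the bean itself stays declared in the config file so its text property gets set):

<!-- requires xmlns:context="http://www.springframework.org/schema/context"
     on the beans element; base-package is an assumption -->
<context:component-scan base-package="app" />

<bean id="annotationsBeanInitializationDestructionPojo"
      class="app.annotation.AnnotationsBeanInitializationDestructionPojo"
      scope="singleton">
    <property name="text" value="withAnnotations-DuringBeanBigTimeInContextLife..." />
</bean>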


Having a simple main method demonstration:

package app;

import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

import app.annotation.AnnotationsBeanInitializationDestructionPojo;
import app.withinterface.InterfaceBeanInitializationDestructionPojo;
import app.xml.XMLBeanInitializationDestructionPojo;

public class Main {

	public static void main(String[] args) {
		ConfigurableApplicationContext context = new ClassPathXmlApplicationContext("application-config.xml");

		AnnotationsBeanInitializationDestructionPojo viaAnnotations = (AnnotationsBeanInitializationDestructionPojo) context.getBean("annotationsBeanInitializationDestructionPojo");
		InterfaceBeanInitializationDestructionPojo viaInterface = (InterfaceBeanInitializationDestructionPojo) context.getBean("interfaceBeanInitializationDestructionPojo");
		XMLBeanInitializationDestructionPojo viaXML = (XMLBeanInitializationDestructionPojo) context.getBean("xmlBeanInitializationDestructionPojo");

		System.out.println(viaAnnotations.getText());
		System.out.println(viaInterface.getText());
		System.out.println(viaXML.getText());

		context.close();
	}
}

produces the output:


...
During bean initialization, wired by the InitializingBean interface...
During init of the bean, wired by XML...
During bean initialization, wired by the Annotations...
withAnnotations-DuringBeanBigTimeInContextLife...
withInterfaces-DuringBeanBigTimeInContextLife...
withXML-DuringBeanBigTimeInContextLife...
During bean destruction, wired by the Annotations...
During destruction of the bean, wired by XML...
During bean destruction, wired by the DisposableBean interface...
...

The code can be found in this Github repository.

Maven Surefire plugin patterns

Today I hit a case where some of my tests weren’t being picked up during Maven’s test phase. The reason is the set of name patterns the Surefire plugin includes by default, namely **/*Test.java, **/Test*.java and **/*TestCase.java.

If we want to include all the tests adhering to a different common naming convention, we can use a regex to explicitly specify the inclusions in the Surefire plugin configuration:

<project>
  [...]
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>2.11</version>
        <configuration>
          <includes>
            <include>%regex[.*MyNamingConvention.*]</include>
          </includes>
        </configuration>
      </plugin>
    </plugins>
  </build>
  [...]
</project>
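
Alternatively, the same inclusion can be expressed with Surefire’s plain ant-style pattern syntax, which I find easier to read (MyNamingConvention is of course a placeholder for your own convention):

<configuration>
  <includes>
    <include>**/*MyNamingConvention*.java</include>
  </includes>
</configuration>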