Configuration Client and RefreshScope for core Spring and Spring Boot Configuration Server

Content:

Introduction
I am sure everybody will accept that one of the initial steps of building a new application is developing a configuration system.

I was part of several greenfield projects, and in nearly every one of them, one of the initial tasks was to develop the configuration system. In total I guess I developed 7-8 configuration systems, and after the first 4-5 times I asked myself: isn’t there a better way to do this? In later stages of my career I encountered cloud programming concepts, mainly Spring Boot, and to my surprise they already had an answer to my dilemma: Spring Cloud Config (https://cloud.spring.io/spring-cloud-config/).

Of course, with the initial joy, I started experimenting with it with the help of Spring Boot and built some ‘Hello World’s and proofs of concept. The thing was perfect for Spring Boot applications: integration with Git, profiles, configuration values automatically bound to ‘@Value’ annotations, ‘@RefreshScope’ so we can change the configuration values of our application at runtime. So if we only program Spring Boot applications, we don’t have to spend a single second developing a configuration system.

Then I thought, wouldn’t it be cool to use Spring Cloud Config in the applications we had already developed. That is the moment I experienced the shock: Spring Cloud Config does not support anything other than the Spring Boot tool chain. Yes, you understood correctly, Spring outside of the Spring Boot technology stack can’t use Spring Cloud Config: no central location for configuration, no refresh at runtime, nada….

So my first question was: how hard can it be to integrate Spring Cloud Config with core Spring, especially the ‘@RefreshScope’? The rest of the blog describes what we have to do to achieve that goal.

Solution
Ok, let’s talk first about the good news, then the bad news. First, Spring Cloud Config presents all the configuration information over REST interfaces, so any platform that speaks REST can easily access that information, and fortunately this is no problem for core Spring.

The bad news is, core Spring does not know anything about Spring Boot’s Environment configuration and it does not know anything about ‘@RefreshScope’; those are the things we have to implement.

So let’s list our tasks:
– we have to create a communication layer for accessing Spring Cloud Config
– we have to inform Spring, via Environment and PropertySource, about the configuration values we got from Spring Cloud Config
– a JMX facade to be able to access the configuration values from all over the Java Virtual Machine
– a @RefreshScope implementation
– a JMX facade to be able to refresh @RefreshScope’d beans
– a refresh REST endpoint to refresh the configuration values externally for the application
– a bootstrap for the Configuration Client

These to-dos also form our project structure. Instead of having one single project, I preferred a plug and play approach: if people are not interested in @RefreshScope or in refreshing the application from outside, they can leave these dependencies out of their project and they will have no effect.

Project Structure
-‘configuration-api’ – this project is mainly there to externalize the API of the configuration service, so it can be used from several projects in the same JVM (via JMX); this way not every project has to initialize the service. This pattern is quite useful in Micro Service design to save resources, so that not every micro service invests resources in initializing the configuration services.
-‘configuration-core’ – is where the main functionality lies; it is responsible for storing the configuration information and distributing it to the interested parties. It is also responsible for the bootstrap configuration of the configuration system.
-‘configuration-client’ – is responsible for contacting Spring Cloud Config and downloading the configuration information.
-‘configuration-refresher’ – is there to provide the refresh endpoint that triggers the system to get the latest configuration information from Spring Cloud Config.
-‘configuration-scope’ – is there to introduce the ‘@RefreshScope’ annotation and for the implementation of the refresh scope outside of Spring Boot.
-‘configuration-yaml’ – is there to be able to bootstrap ‘configuration-core’ via YAML.
-‘configuration-driver’ – is there to test the concepts in this blog.
-‘configuration-tomcat’ – is there to start the Tomcat container for the driver project.

Operation Principles
To be able to use our configuration service, we first need to be able to do bootstrap configuration. For this purpose the configuration service will try to read the location of the bootstrap configuration file from the property ‘org.salgar.configurationserver.propertyfile’ (this functionality is realized in the ‘YamlInitializer.java’ class in the ‘configuration-yaml’ project). ‘YamlFileConfiguration’ is a class based on Apache Commons Configuration’s ‘org.apache.commons.configuration.AbstractFileConfiguration’, so it brings the functionality to re-read the property file at runtime and reflect changes from the bootstrap configuration to the configuration service.
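A minimal sketch of that lookup, assuming only the property name above (the real logic lives in ‘YamlInitializer’):

// the bootstrap file location is resolved from a JVM system property;
// snippet 22 later in this blog shows how this property is set for the Tomcat container
String propertyFileLocation = System.getProperty("org.salgar.configurationserver.propertyfile");
if (propertyFileLocation == null) {
    throw new IllegalStateException("bootstrap configuration file location is not set");
}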

The bootstrap file itself looks like the following initially.

application:
 name: test
configuration:
 initial: HalliHallo
 server:
  url: http://localhost:8888

snippet 1

The really relevant part of it: our application is called ‘test’. This is important because Spring Cloud Config can hold the configuration information of several applications, and the only way an application gets the information relevant to itself is through the application name.

The second one is naturally the URL of the Spring Cloud Config server, so our application can get its configuration information.

Now with this information, at the start of the Spring Application Context, our configuration service will load all the configuration information from Spring Cloud Config. We need the ‘configuration-client’ to communicate with Spring Cloud Config and deliver the results. Unfortunately the Spring Boot Configuration Server delivers the results in a Spring Boot proprietary format, namely as ‘org.springframework.cloud.config.environment.Environment’; to be able to process this information we have to port this class to the core Spring world, which is realized in the ‘configuration-api’ project’s ‘org.salgar.configuration.api.environment.Environment’ class. This way we can interpret the information from Spring Cloud Config and feed it to normal Spring’s configuration system.
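A minimal sketch of what ‘configuration-client’ has to do, assuming a plain ‘RestTemplate’; Spring Cloud Config serves the values of every application over REST under ‘/{application}/{profile}’, so for our application ‘test’ and the default profile:

RestTemplate restTemplate = new RestTemplate();
// 'Environment' is our ported class from 'configuration-api',
// not the Spring Boot original
Environment environment = restTemplate.getForObject(
        "http://localhost:8888/test/default", Environment.class);

snippet 1a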

For test purposes, Spring Cloud Config contains the following configuration information in YAML format.


stageall:
 environment01:
  serviceUrl:
   'instance01': https://192.168.107.16/api/v0/orderApplication
   'instance02': https://192.168.107.17/api/v0/orderApplication
   'instance03': https://192.168.107.18/api/v0/orderApplication
   'instance04': https://192.168.107.19/api/v0/orderApplication
   'instance05': https://192.168.107.20/api/v0/orderApplication

stage01:
 environment01:
  serviceUrl:
   'instance01': https://140.18.159.16/api/v0/orderApplication
   'instance02': https://140.18.159.17/api/v0/orderApplication
   'instance03': https://140.18.159.18/api/v0/orderApplication
   'instance04': https://140.18.159.19/api/v0/orderApplication
   'instance05': https://140.18.159.20/api/v0/orderApplication
   
 environment02:
  serviceUrl:
   'instance01': https://environment02.server.xxx/api/v0/orderApplication
   'instance02': https://environment02.server.xxx/api/v0/orderApplication
   'instance03': https://environment02.server.xxx/api/v0/orderApplication
   'instance04': https://environment02.server.xxx/api/v0/orderApplication
   'instance05': https://environment02.server.xxx/api/v0/orderApplication

stage02:
 environment01:
  serviceUrl:
   'instance01': https://119.108.55.16/api/v0/orderApplication
   'instance02': https://119.108.55.17/api/v0/orderApplication
   'instance03': https://119.108.55.18/api/v0/orderApplication
   'instance04': https://119.108.55.20/api/v0/orderApplication
   'instance05': https://119.108.55.21/api/v0/orderApplication

snippet 2

We naturally have to tell Spring Cloud Config where to find this file, which happens in the “bootstrap.yml” file of the Spring Cloud Config server.

spring:
  cloud:
    config:
      server:
        native:
          searchLocations: file:///C:/config
  profiles:
    active: native

server:
  port: 8888

snippet 3

The relevant configuration values are ‘searchLocations’, which points to where the configuration file lies, and the Spring profile ‘native’, because we don’t want to use Git to keep our configuration values but a plain text file (for feasibility study purposes; otherwise it is totally fine to use Git). Our configuration file is also called ‘test.yml’ because, as you might remember, we configured our application’s name in the bootstrap as ‘test’.

Now that we know how to get the configuration information from Spring Cloud Config, how do we feed it to the normal Spring configuration system? Two classes are responsible for this: ‘org.salgar.configuration.initialisation.BootstapPropertySourceConfiguration’, which is responsible for feeding the bootstrap information, and ‘org.salgar.configuration.initialisation.ConfigurationServerPropertySourceInitializer’, which is responsible for feeding the Spring Cloud Config information. They both implement the ‘org.springframework.core.Ordered’ interface and use a very high precedence, so they initialize before other components in Spring.

    @Override
    public int getOrder() {
        return Ordered.HIGHEST_PRECEDENCE + 30;
    }

snippet 4

This information will be placed into Spring’s ‘org.springframework.core.env.ConfigurableEnvironment’ when the ‘org.springframework.context.ApplicationListener’s ‘onApplicationEvent’ method is triggered with a ‘org.springframework.context.event.ContextRefreshedEvent’.

    @Override
    public void onApplicationEvent(ApplicationEvent event) {
        if (event instanceof ContextRefreshedEvent) {
            MutablePropertySources propertySources = configurableEnvironment.getPropertySources();
            propertySources.addLast(getPropertySource());
        }
    }

snippet 5

This way our configuration values are added to the PropertySource chain of the Spring configuration Environment.

Please pay attention: this way, the configuration values will be available after Spring initialization (so your application can use them to fulfill business cases). If you need the configuration values to configure Spring itself, you need another approach. As you remember, we used the ‘Ordered’ interface to guarantee that our PropertySource configurer components are among the first Spring components to be initialized; now, if these classes also implement the ‘InitializingBean’ interface, they can register the PropertySources during the initialization of Spring, as follows.

    @Override
    public void afterPropertiesSet() throws Exception {
        MutablePropertySources propertySources = configurableEnvironment.getPropertySources();
        propertySources.addLast(getPropertySource());
    }

snippet 5a

The decision is up to you, depending on when you need the configuration values.
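For completeness, a hypothetical sketch of what ‘getPropertySource()’ used above can boil down to, assuming the values downloaded from Spring Cloud Config are held in a simple Map (the field name ‘downloadedConfigurationValues’ is made up; the real implementation lives in ‘configuration-core’):

private PropertySource<?> getPropertySource() {
    // wrap the downloaded key/value pairs in a standard Spring PropertySource
    Map<String, Object> values = new HashMap<>(downloadedConfigurationValues);
    return new MapPropertySource("configurationService", values);
}

snippet 5b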

Now we can access these configuration values via ‘org.salgar.configuration.api.ConfigurationFacade’

String parameter = configurationFacade.getProperty("stage1", "environment1", "instance01", "serviceUrl");

snippet 6

Of course this is the classical way of using properties in any application: we are getting them over the configuration facade. I used here the fact that our configuration system has the concepts of ‘stage’, ‘environment’, ‘instance’ and ‘property’, but you can also just ask for one property; the following will work as well.

String parameter = configurationFacade.getProperty("stage1.environment1.instance01.serviceUrl");

snippet 7

Naturally, we now have to inject the ‘configurationFacade’ into our class.

@Autowired
@Named("proxyConfigurationFacade")
ConfigurationFacade configurationFacade;

snippet 8

The interesting part here is ‘proxyConfigurationFacade’. Why am I injecting this? As said before, we can have several applications in the JVM that need the configuration service, so we registered our configuration service in JMX and access it via Spring with the following snippet…

<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="proxyConfigurationFacade" class="org.springframework.jmx.access.MBeanProxyFactoryBean">
        <property name="objectName" value="configuration:name=configuration-service,type=org.salgar.ConfigurationService,artifactId=configuration-service"/>
        <property name="proxyInterface" value="org.salgar.configuration.api.ConfigurationFacade"/>
    </bean>
</beans>

snippet 9

and ‘org.salgar.configuration.api.ConfigurationFacade’ is registered in JMX with the following class, ‘org.salgar.configuration.jmx.ConfigurationFacadeJmx’…

package org.salgar.configuration.jmx;

import org.salgar.configuration.api.ConfigurationFacade;
import org.springframework.jmx.export.annotation.ManagedOperation;
import org.springframework.jmx.export.annotation.ManagedOperationParameter;
import org.springframework.jmx.export.annotation.ManagedOperationParameters;
import org.springframework.jmx.export.annotation.ManagedResource;

@ManagedResource(objectName = "configuration:name=configuration-service,type=org.salgar.ConfigurationService,artifactId=configuration-service")
public class ConfigurationFacadeJmx implements ConfigurationFacade {
    private ConfigurationFacade configurationFacade;

    public ConfigurationFacadeJmx(ConfigurationFacade configurationFacade) {
        this.configurationFacade = configurationFacade;
    }

    @Override
    @ManagedOperation(description = "delivers the configuration parameters")
    @ManagedOperationParameters({
            @ManagedOperationParameter(name = "name", description = "Name of the configuration parameter")
    })
    public String getProperty(String name) {
        return this.configurationFacade.getProperty(name);
    }

    @Override
    @ManagedOperation(description = "delivers the configuration parameters")
    public void refresh() {
        this.configurationFacade.refresh();
    }

    @Override
    @ManagedOperation(description = "delivers the configuration parameters")
    @ManagedOperationParameters({
            @ManagedOperationParameter(name = "channel", description = "stage for configuration values, like, stage01, stage02, etc"),
            @ManagedOperationParameter(name = "stage", description = "environment of the configuration values, like, environment01, environment02, etc"),
            @ManagedOperationParameter(name = "instance", description = "instance for the configuration values"),
            @ManagedOperationParameter(name = "name", description = "Name of the configuration parameter")
    })
    public String getProperty(String channel, String stage, String instance, String name) {
        return this.configurationFacade.getProperty(channel, stage, instance, name);
    }
}

snippet 10

and it is initialized via ‘org.salgar.configuration.jmx.JmxInitializer’ in the ‘configuration-core’ project…

@Configuration
public class JmxInitializer {
    @Bean(name = "coreRegisterer")
    public MBeanExporter getMBeanExporter() {
        MBeanExporter mBeanExporter = new MBeanExporter();
        Map<String, Object> beans = new HashMap<>();
        beans.put("configuration:name=configuration-service,type=org.salgar.ConfigurationService,artifactId=configuration-service", new ConfigurationFacadeJmx(ConfigurationContainer.getInstance()));
        mBeanExporter.setBeans(beans);
        MetadataMBeanInfoAssembler metadataMBeanInfoAssembler = new MetadataMBeanInfoAssembler();
        metadataMBeanInfoAssembler.setAttributeSource(new AnnotationJmxAttributeSource());
        mBeanExporter.setAssembler(metadataMBeanInfoAssembler);
        mBeanExporter.setRegistrationPolicy(RegistrationPolicy.REPLACE_EXISTING);
        return mBeanExporter;
    }
}

snippet 11

Until now this was the classical way of using configuration parameters; let’s now go for a little more modern approach and use a Spring annotation for configuration values, namely ‘org.springframework.beans.factory.annotation.Value’…

@Value("${stage1.environment01.serviceUrl.instance01}")
private String serviceUrl;

snippet 12

Because we registered our property source in Spring’s ‘ConfigurableEnvironment’, Spring can now access our configuration values directly through this annotation.
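One detail to keep in mind in core Spring: for the ‘${…}’ placeholders in ‘@Value’ to be resolved against the ‘ConfigurableEnvironment’, a ‘PropertySourcesPlaceholderConfigurer’ has to be registered in the application context, for example like this (a minimal sketch, assuming Java configuration):

@Bean
public static PropertySourcesPlaceholderConfigurer placeholderConfigurer() {
    // resolves ${...} placeholders in @Value against the Environment's PropertySources
    return new PropertySourcesPlaceholderConfigurer();
}

snippet 12a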

At this point, we have to discuss my biggest motivation to develop this solution: ‘@RefreshScope’. In several previous projects I developed configuration systems that were able to refresh their configuration values at runtime, but they were simplistic systems; instead of assigning concrete values to configuration properties, we just placed the configuration values in a map and read them from there, so every time they were refreshed, the system saw the current values. When I first discovered ‘@RefreshScope’ I found it much more elegant than our own solutions, so naturally I really wanted to adopt it in our project, but I experienced the disappointment that it is only available in the context of Spring Boot.

It was a bit of a challenge, but I managed to port the Spring Boot implementation to normal Spring.

To be able to use ‘@RefreshScope’, we have to annotate a class with it, for example…

@Configuration
@RefreshScope
public class DriverFacadeImpl implements DriverFacade {
    @Value("${stage1.environment01.serviceUrl.instance01}")
    private String oneOtherProperty;

    ....
}

snippet 13

This way, Spring will place every instance of this class in a scope and, if any client requires it, serve this class from that scope until a refresh signal arrives. When the signal arrives, Spring will invalidate the scope container; if any client requires the class again, it will recreate it and serve it (in the process re-reading the configuration properties). Simple, but elegant.

There are some interesting details about how the ‘@RefreshScope’ is defined (you can see the whole code here; it is a little too much to copy/paste).

The basic details of the RefreshScope are as follows: when Spring discovers a bean annotated with ‘@RefreshScope’, it will call this scope’s get method…

    @Override
    public Object get(String name, ObjectFactory<?> objectFactory) {
        BeanWrapper value =  this.cache.get(name);
        if(value == null) {
            value = new BeanWrapper(name, objectFactory);
            this.cache.put(name, value);
        }

        return value.getBean();
    }

snippet 14

The first check is ‘is the Spring bean in the cache of the refresh scope’: if yes, it is delivered from there; otherwise it will be wrapped with ‘BeanWrapper’, placed into the cache and delivered back.

So when the configuration must be refreshed from the outside and the signal is received, the ‘destroy’ method is called and the cache is cleared. This way, when the bean is demanded again, it will be re-created via the ObjectFactory (so all the configuration parameters are re-read) and delivered back to Spring.

    public void destroy() {
        this.cache.clear();
    }

snippet 15

One more interesting point here: with the following snippet, ‘RefreshScope’ registers itself with Spring.

    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) throws BeansException {
        beanFactory.registerScope(this.name, this);
        setSerializationId(beanFactory);
    }

snippet 16

To be able to deliver the refresh signal, the bean that implements the ‘@RefreshScope’ behavior is registered in JMX, so when the refresh endpoint receives the signal, it is transferred via JMX.

And some interesting points from the ‘@RefreshScope’ annotation,

@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Scope("refresh")
@Documented
public @interface RefreshScope {
    ScopedProxyMode proxyMode() default ScopedProxyMode.TARGET_CLASS;
}

snippet 17

90% of it is standard Java annotation stuff, but what is interesting is ‘@Scope(“refresh”)’, which tells Spring that this is a scope annotation and that Spring should search for a scope called ‘refresh’, which is already defined in

public class RefreshScope implements Scope, BeanFactoryPostProcessor {
    ....
    private String name = "refresh";
    
    ....

    @Override
    public String getConversationId() {
        return this.name;
    }

    ....
}

snippet 18

Finally, let’s look at the refresh endpoint implementation; it is a straightforward REST interface.

@RestController
public class RefreshController {
    @Autowired(required = false)
    @Named("proxyRefreshScopeFacade")
    private RefreshScopeFacade refreshScopeFacade;

    @Autowired(required = false)
    @Named("proxyConfigurationFacade")
    private ConfigurationFacade configurationFacade;


    @RequestMapping("/refresh")
    public void refresh() {
        if(refreshScopeFacade != null) {
            refreshScopeFacade.refresh();
        }
        if(configurationFacade != null) {
            configurationFacade.refresh();
        }
    }
}

snippet 19

It uses the GET method here for demonstration purposes but should actually use POST for better security. When ‘localhost:7777/refresher/refresh’ is called, it just triggers the JMX methods to refresh the scope and the Configuration Facade, so they will see the current values.
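Restricting the endpoint to POST, as suggested, would be a one-line change (a sketch):

    @RequestMapping(value = "/refresh", method = RequestMethod.POST)
    public void refresh() {
        if(refreshScopeFacade != null) {
            refreshScopeFacade.refresh();
        }
        if(configurationFacade != null) {
            configurationFacade.refresh();
        }
    }

snippet 19a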

Tests
Now that we have discussed the theory and shown the implementation details, let’s showcase whether it is doing exactly what we expect.

To be able to execute our tests, we first need a Spring Cloud Config instance; luckily that is really easy. First we need a Maven project with one single class.

@EnableConfigServer
@SpringBootApplication
public class ConfigServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(ConfigServiceApplication.class, args);
    }
}

snippet 20

The class has nothing other than one single main method and a couple of annotations to configure the Spring Cloud Config server: ‘@SpringBootApplication’ and ‘@EnableConfigServer’.

And two configuration files, ‘bootstrap.yml’,

spring:
  cloud:
    config:
      server:
        native:
          searchLocations: file:///C:/config
  profiles:
    active: native

server:
  port: 8888

snippet 21

It defines where the configuration files lie, the active ‘native’ profile (normally Spring Cloud Config reads its configuration information from Git; for this example we don’t want to deal with that) and the port on which Spring Cloud Config listens for HTTP traffic.

First, let’s start the Spring Boot Configuration Server with the following command…

Configuration Server Start
picture 1

After the Maven build, with the help of the ‘spring-boot-maven-plugin’, the jar that is created will contain all the dependencies, so we can start the configuration server with such a simple command.
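Something along the lines of ‘java -jar configuration-server-1.0-SNAPSHOT.jar’ does the job here (the exact jar name depends on your artifactId and version).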

After a successful start, the shell of the Spring Boot Configuration Server (SBCS) will look like the following…

Configuration Server Successful Start
picture 2

@Configuration
@RefreshScope
public class DriverFacadeImpl implements DriverFacade {
    @Autowired
    @Named("proxyConfigurationFacade")
    ConfigurationFacade configurationFacade;

    @Value("${stage01.environment01.serviceUrl.instance02}")
    private String oneOtherProperty;

    public String driver() {
        String parameter = configurationFacade.getProperty("stage01", "environment01", "instance01", "serviceUrl");

        return parameter;
    }

    public String driver1() {
        return this.oneOtherProperty;
    }
}

snippet 21a

Now that SBCS is running, let’s look at how we are going to test this. In snippet 21a you saw a piece of code accessing properties in two different ways: one reads from the ‘ConfigurationFacade’, the other goes via Spring’s ‘@Value’ annotation. To create a realistic test, I will deploy the Maven artifact containing the test code to a Tomcat servlet container. Personally, I don’t want to install Tomcat myself; I want Maven to do it for me, so I created another project called ‘configuration-tomcat’ with the following ‘pom.xml’ configuration.

......
           <plugin>
                <groupId>org.apache.tomcat.maven</groupId>
                <artifactId>tomcat7-maven-plugin</artifactId>
                <version>2.2</version>
                <configuration>
                    <path>/${project.build.finalName}</path>
                    <port>7777</port>
                    <ajpPort>7019</ajpPort>
                    <useSeperateTomcatClassLoader>false</useSeperateTomcatClassLoader>
                    <systemProperties>
                        <configuration.server.url>http://localhost:8888</configuration.server.url>
                        <org.salgar.configurationserver.propertyfile>C:\config\test.yml</org.salgar.configurationserver.propertyfile>
                    </systemProperties>
                    <webapps>
                        <webapp>
                            <groupId>org.salgar.configuration</groupId>
                            <artifactId>configuration-refresher</artifactId>
                            <version>1.0-SNAPSHOT</version>
                            <type>war</type>
                            <asWebapp>true</asWebapp>
                            <contextPath>refresher</contextPath>
                        </webapp>
                        <webapp>
                            <groupId>org.salgar.configuration</groupId>
                            <artifactId>configuration-driver</artifactId>
                            <version>1.0-SNAPSHOT</version>
                            <type>war</type>
                            <asWebapp>true</asWebapp>
                            <contextPath>driver</contextPath>
                        </webapp>
                    </webapps>
                </configuration>
            </plugin>
........

snippet 22

This lets Maven use the ‘tomcat7-maven-plugin’ and configures it to deploy the Maven artifact ‘configuration-refresher’ under the web context ‘refresher’ and ‘configuration-driver’ under ‘driver’.

Now when we execute ‘mvn tomcat7:run’, Maven will start the Tomcat container and deploy the artifacts.

So when I test the ‘driver’ endpoint, which reads the configuration value from the ‘ConfigurationFacade’…

Driver ConfigurationFacade
picture 3

As you can see, it reads the value…

String parameter = configurationFacade.getProperty("stage01", "environment01", "instance01", "serviceUrl");

snippet 23

and delivers it as ‘https://140.18.159.16/api/v0/orderApplication&#8217;.

Now let’s try the ‘driver1’ endpoint,

Driver1 over @Value
picture 4

it reads the parameter with ‘@Value’,

    @Value("${stage01.environment01.serviceUrl.instance02}")
    private String oneOtherProperty;

snippet 24

and delivers it as ‘https://140.18.159.17/api/v0/orderApplication&#8217;.

Now let’s see what happens when we refresh the values. First, let’s edit the configuration file and change the following values,

stage01:
 environment01:
  serviceUrl:
   'instance01': https://140.18.159.160/api/v0/orderApplication
   'instance02': https://140.18.159.170/api/v0/orderApplication
   'instance03': https://140.18.159.18/api/v0/orderApplication
   'instance04': https://140.18.159.19/api/v0/orderApplication
   'instance05': https://140.18.159.20/api/v0/orderApplication

snippet 25

To refresh the Spring Cloud Config server, we have to make a POST request to ‘http://localhost:8888/refresh’ (the port from snippet 21); the easiest way to do this is with the following curl command: ‘curl --noproxy localhost -d {} http://localhost:8888/refresh’.

SBCS is refreshed and re-reads the configuration values. Now we must refresh the client application so it can refresh itself as well (as I previously mentioned, this should actually also be a POST request; for simplicity reasons I am using GET).
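(In this setup that is simply ‘curl --noproxy localhost http://localhost:7777/refresher/refresh’, given the Tomcat port and the ‘refresher’ context path from snippet 22.)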

Client Refresh
picture 5

Now if we repeat the calls,

Refreshed driver
picture 6

Now we are getting ‘https://140.18.159.160&#8217; as response instead of ‘https://140.18.159.16&#8217; for ‘driver’ response,


picture 7

and ‘https://140.18.159.170&#8217; instead of ‘ https://140.18.159.17&#8217; as ‘driver1’ response, this proves that our concept is working.

Conclusion
I think in this blog I managed to show why it is a good idea to use a configuration server instead of developing your own for the 10th time, as well as the concepts behind ‘@RefreshScope’ and how to use it in core Spring.

I hope this will be useful for somebody.

If you decide to use Spring Cloud Config and the ideas in this blog, I strongly recommend watching this video, Extending Spring Cloud Config; it shows nicely what to extend in Spring Cloud Config to get better mileage out of it. Some things in this blog must be modified to be able to use all the ideas there; if you need help with it, let me know and I will see what I can do about it.

Appendix
How to get the source code

You can get the source code of this blog from GitHub with the following command

git clone https://github.com/mehmetsalgar/configuration-service.git

and build it with ‘mvn clean install’.

Micro Services – Fan Out, Transaction Problems and Solutions with Spring Boot/JBoss and Netflix Eureka

Content:

Introduction:
We are all aware that ‘Micro Services’ is one of the most popular buzzwords of the last years. I always found the idea interesting, but outside of some specific business cases not so practicable. Sure, Amazon and Netflix are doing unbelievable things with them, but I found their business cases quite different from classical businesses like telecommunications, banks and insurances. In those branches the business cases are much more glued together; they have to be transactional and have to depend on transactions. How should Micro Services, which by design should exist on their own islands, be adapted to these highly transactional environments? There are also other technical problems: if the Micro Services present their interfaces as REST and many Micro Services must go into an orchestration to fulfill complex business cases, how will those perform under high load, given the network traffic and serialization costs? I know some big enterprises that tried to use Micro Services in their projects hit this wall, which I call the Fanout Problem (fan-out, fan out, whichever you want as a new buzzword :)): too much network communication, even bringing Gigabit networks down. These are in my opinion the biggest obstacles people who try to implement Micro Services in old school enterprise environments will encounter. Well, if you want to learn how I solved these problems with JBoss Wildfly, Spring Boot and Netflix Eureka, you have to read the rest of the article.

Detailed Problem Definition:
If you look around the internet, you will not find any practicable solution for implementing transactions over several Micro Services; you will only get the advice that you should design the Micro Services so that transactions are self contained, designed with the bounded context principle from Domain Driven Design, which states that the business objects related to the solution of a problem must be grouped and act together. This way these business objects can have their own databases and care only about transactions between themselves, which makes them good candidates for Micro Services. My problem with this: we are living in an Agile world, aren’t we? What would be the consequences if I designed two bounded contexts/micro services, ran them for, say, two years, and now a change arrives and they should belong together? This would cause quite a big technological and organizational challenge.

One other proposal is Event Sourcing, which states that instead of having databases modeling several business objects, we should have one database and one table, a table recording all the events in the system. For a bank, for example, all the money a customer deposits into/withdraws from his/her account is saved as events, and the current account balance is calculated by aggregating the values from all those events. This is such a different way of thinking and designing systems that I am not even sure the classical businesses I mentioned can grasp it, forget about adopting it; I have problems convincing these organizations of much smaller changes than this.

One final proposed solution is to delegate to transactional queues, which guarantee run-to-completion and eventually consistent states; again, I am not even sure this can be accepted by the above mentioned classical businesses.

You can find all these discussions in the following Youtube video: Microservice Design Principles. Also, the traditional transaction mechanisms and configurations would make the services tightly coupled, denying the main advantages of the Micro Service architecture and bringing very complex configuration to services that actually should not know anything about each other. What if I told you that I have designed a solution here that makes the self configuration of transactions possible without any of the services knowing anything about each other? Would that be interesting?

Also, not too many people are discussing the Fanout Problem openly at the moment, caused by the extreme overhead of serialization via REST/JSON or Java serialization mechanisms (you can see benchmarks about these costs here). The problem is so real that, because of the serialization costs and the chattiness of Micro Services on the network, it even brings Gigabit networks down.

Either people are in the middle of developing their systems, haven’t discovered the problem yet and will hit the wall soon, or they have discovered it but are not speaking loudly about it; some of the concerns about the topic you can see in the following Youtube video: Microservice Performance. If you follow the topic closely on the internet, read the articles of major Micro Services evangelists or watch their presentations, you will get clues that when they have to orchestrate a big number of Micro Services, they are not going for REST-to-REST communication for orchestration but for binary protocols. Where are these hints, you may ask: for example, watch the following video at time index Netflix Way. Even the renowned Netflix stack, which provides the best open source libraries for Micro Services/Cloud functionality, is using binary protocols for inter Micro Service orchestration. Of course I can’t say exactly what Netflix is doing; as far as I know it is not openly published until now. I heard some hints that people are using Apache Kafka for streaming the communication between the Micro Services, but is this a solution for mere mortals? Do you want to develop software for your business cases or spend countless hours trying to configure and maintain such a streaming solution? What if I told you I have a solution that will solve this problem, not with high end space shuttle technology but with things that have existed for years, and without betraying the Micro Services principles? Would that be interesting?

Solutions:
The following picture gives an overview of how I will approach solving the Fanout and Transaction Problems for Micro Services.
scenario_1
picture 1

Basically, we implement the use cases that our Micro Services should cover as JMX Components; the reasons for this I will explain shortly. For every JMX Component we will have a REST Facade, first to present the functionality implemented by the JMX Components to the outside world, and secondly to register those REST Facades with Netflix Eureka so it provides us failover and load balancing services. In the above picture you see three Micro Services: Customer, Product and Order. All of them have an API component (containing interfaces), a JMX Component and a REST Facade. Additionally you will also see a ProductProcess component, which is mainly responsible for orchestration. This is the component that will cause the most headaches for fanout and transactions: if all its communication with the other Micro Services occurs over REST, it will cause too much network overhead and serialization cost, and it will not be possible to build a transactional unit if the whole conversation occurs over REST Facades.

So how do we solve this? By injecting the same JMX Components also into the ProductProcess orchestration layer/composite service. If the ProductProcess runs in the same Java Virtual Machine as the Customer, Product and Order Micro Services, the whole communication happens over the binary protocol of JMX: no serialization, no TCP communication, and because all three Micro Services and the orchestration layer run in the same JVM, it is possible to run inside one transactional unit.

The orchestration components will be smart enough to discover the other Micro Services without any configuration: if a Micro Service’s JMX Components exist in the same JVM, they will discover them automatically and use those (this happens via Spring and Dependency Injection). If it can’t detect those and it needs that Micro Service’s functionality, it will discover an instance over Netflix Eureka and communicate over REST; of course this means serialization costs and no transactional orchestration, but you have the full ability to configure your application via deployment. If you want your application to have top performance and transactions, you have to deploy the components necessary for the orchestration in the same JVM; if not, you can deploy them to different JVMs and let the orchestration layer find them via Netflix Eureka. Of course, if something goes wrong and some of the components can’t initialize correctly, the orchestration layer will again use the REST Facades, provided transactions are not required.

The following sequence diagram represents how the system should work if it is deployed to one single JVM….

localdeployment
picture 2

and this one represents the case where the Micro Services are not deployed to the same JVM.

remotedeploymentorjmxnotavailable
picture 3

Now you could ask: doesn’t this break the Micro Services principles? Here two important libraries from Netflix come to the rescue: Eureka and Hystrix. By definition Micro Services must have service discovery/fail-over/load balancing features; by registering the REST Facades with Eureka, we let Eureka know about every instance of the services running in the environment. I also implemented functionality so that the JMX Components report their health status to Eureka. So if a JMX Component goes down, Eureka will mark this and will not send any further requests to the REST Facade instance of the Micro Service running in this JVM.

If a failure occurs during processing at the orchestration layer, because I used the Hystrix annotations, the Hystrix fallback mechanism will redirect the failing call to Eureka/Ribbon to find a healthy instance of the Micro Service in the cloud. This way the Micro Service is not accessed via the binary protocol, but at least the user does not see an error; his request is completed, and this fulfills the high availability requirement of the Micro Service.

Another important concept in Micro Services is the avoidance of tight coupling; in the end that is what constitutes the Monoliths. So if you look at my solution, you might ask whether installing so many JMX Components under one JVM to fulfill use cases is not tight coupling. Actually not: first, no JMX Component knows anything other than the API of the other JMX Components; secondly, everything is plug and playable. If you install all the JMX Components necessary for your business case in one single JVM, they will auto discover each other, and if not all JMX Components are installed in one single JVM, the solution will find an instance over Netflix Eureka (but then of course the communication will be over REST and not over the binary protocol). You have complete freedom in how to design your system: you need transactional support/performance, then install the JMX Components in one single JVM; you need more fail-over/balancing, then deploy them to different JVMs with REST Facades and let them be discovered via Netflix Eureka.

Another big advantage this solution has over Monoliths is the ability to scale where it is needed. In a monolith, you might have a functionality that is, let’s say, called once per month and another service that is called 1000 times per second; when you want to scale your Monolith you will deploy new instances of the whole thing, and congratulations, your service that is called once a month now lives and eats resources on two deployments. With the above solution and with Micro Services you can scale only what is needed, not everything that happens to live in the Monolith.

Solution Walkthrough:
The first thing I have to say is that this will be a JBoss inclined solution. There are two features that JBoss possesses that make the implementation of this solution quite easy. Can these functionalities be replicated in other application servers like WebLogic, WebSphere, etc.? Most probably, but again this is a feasibility study and I chose the path of minimum resistance.

Ok, the first big advantage is how JBoss functions: it has a Micro Kernel architecture, which provides the possibility of additional services plugging into this kernel via JMX, services like the Transaction Service, Java Messaging Service, Security, Data Sources, etc.… sounds familiar? That means the way JBoss operates inherently supports the Micro Service ideas. To take advantage of this, we will deploy the functionality we need from our Micro Services as JMX Components to the JBoss AS (WildFly, AS7, can be any version); this option will help us a lot in the next steps of our solution.

The second big advantage is the classloading mechanism of the JBoss Application Server from version AS7 on. Many of you have probably heard of OSGi containers; now don’t panic if you burned your fingers with OSGi containers, and stay with me until I finish explaining. If we are going to deploy our Micro Services as JMX Components, and if these components should communicate for orchestration, then they have to know each other’s interfaces. Now, we can’t include this information in each other’s containers; this would cause ClassCastExceptions and classloading problems. The standard procedure to solve this problem in application servers is to place these libraries in the root library directory, so they are visible to the root classloader and can be loaded by every component. So you say now, everything nice and dandy, why the mention of OSGi?

As many of you already know, placing the libraries in the root library directory this way can also cause all sorts of problems. OSGi saw this and built a revolutionary classloading mechanism which allows components to load exactly the libraries they need into their classloaders and nothing more. The problem with OSGi was that it was too restrictive and the Java world was not ready for it (OSGi libraries need special descriptors, and as of today still not all libraries on the internet have these descriptors); it caused too many problems and was nearly abandoned in Java enterprise development. But I guess the JBoss developers saw the power of it, took all the good ideas into their JBoss AS7 classloading mechanism and left the problematic things out. This gives the components in the JBoss Application Server the ability to load, with the help of a descriptor, exactly the libraries they need and nothing more. This is especially important for us when several Micro Services orchestrate: they should share only the interface information about each other and nothing more. The Java world also thinks this is a good idea and will implement it in Java 9 with JSR-376 (JSR-376 Java Module System). This will help us a lot when we start discussing the versioning of our Micro Services.

The next point I would like to explain is the integration of the JBoss Micro Kernel structure with Spring. Spring contains a really useful library for accessing and publishing JMX functionality. This is extremely important for us because we want plug and play functionality for our JMX Components; what I mean by that is, when I deploy those JMX Components to JBoss, they should publish themselves to the JMX server and also discover other JMX Components via Spring’s autowiring functionality, so that we don’t have to configure anything explicitly.

The next step is to configure those JMX Components so they can report their health status to Netflix Eureka. This will give us failover and load balancing abilities. Netflix provides interfaces that, if implemented by the services, are registered with Netflix Eureka during the discovery phase and pinged periodically. With these pings Netflix Eureka asks for the status of the service, and if it does not get an OK response, it marks the service DOWN and will not send any further requests to these instances. How does this work internally in my project? Every Rest Facade must have a corresponding JMX Component injected; if not, the health check component will send a DOWN signal (this can be for two reasons: either there is a problem with the JMX Component so it can’t be injected, or the JMX Component is not installed in the local JVM, in which case a service that wants to communicate with it should find an instance over Netflix Eureka and communicate via REST). Alternatively, the JMX Component is injected but answered the inquiries from the Rest Facade with ‘NOT OK’, and for this reason it will be set to DOWN in Netflix Eureka.
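To give an idea, here is a hypothetical sketch of such a handler, based on Eureka’s ‘com.netflix.appinfo.HealthCheckHandler’ interface (the class name and the ‘isHealthy()’ method on the injected JMX component are made up for illustration; the real code lives in the ‘health-check’ project):

import com.netflix.appinfo.HealthCheckHandler;
import com.netflix.appinfo.InstanceInfo.InstanceStatus;

import org.salgar.product.api.v1.ProductService;

public class JmxAwareHealthCheckHandler implements HealthCheckHandler {
    // the JMX proxy of the Micro Service; null when the SAR is not deployed in this JVM
    private final ProductService productService;

    public JmxAwareHealthCheckHandler(ProductService productService) {
        this.productService = productService;
    }

    @Override
    public InstanceStatus getStatus(InstanceStatus currentStatus) {
        // report DOWN if the JMX Component is absent or answers 'NOT OK'
        if (productService == null || !productService.isHealthy()) {
            return InstanceStatus.DOWN;
        }
        return InstanceStatus.UP;
    }
}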

There is another interesting detail here. Remember the orchestration component we spoke about a while ago, the one responsible for the transactional behavior; this health check is really important for that feature. I developed a special annotation (you can see here how exactly it is configured and how it works internally) which, when placed on a method in the orchestration service, signals our transactional intentions. This annotation also contains the names of the services this transaction depends on and automatically registers the health checks of these services; if any of these services signals a DOWN state, the orchestration service will mark itself DOWN in Netflix Eureka and should not receive any further traffic for this orchestration service instance. We need this because if a transaction must run over three services, every one of these services must be available and installed in the same JVM; if this is not the case, this service must not accept any further traffic.
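A hypothetical sketch of how such an annotation could be declared (the attribute name ‘services’ is made up; the real definition, including the wiring of the health checks, is in the ‘annotation’ project):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface TransactionalFanout {
    // names of the services whose JMX Beans must be present in this JVM;
    // their health checks are registered automatically
    String[] services();
}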

The following activity diagram represents the topics we have discussed.

fanoutactivity
picture 4

I guess this ends the overview of the system; we can look at things in more detail with code samples in the next section.

Project Structure Overview and Code Samples
Let’s first look at the project structure…

project_structure_scenario_1
picture 5

So let’s go over the projects…

‘api’ projects are the interface contracts of our Micro Services, which will be placed under the module classloading mechanism of the JBoss Application Server so our JMX Beans can communicate with each other. We have 6 ‘api’ projects: product_api, customer_api, order_api and versions 1 and 2 of each (we will also observe how this concept works with multiple versions of Micro Services, which happens all the time in real projects).

‘sar’ projects are the real implementations of our Micro Services. ‘sar’ stands for Service Archive, a special packaging form of the JBoss AS for JMX Beans (mainly for the JBoss Micro Kernel). We again have 6 ‘sar’ projects: product_sar, customer_sar, order_sar and versions 1 and 2 of each.

‘services’ projects are the Rest Facades for our JMX Beans; outside clients will communicate with our JMX Beans over these Rest Facades, and these will be registered with Netflix Eureka for service discovery.

‘processes’ projects are the orchestration level Rest Facades. In this layer, the JMX Beans that are necessary for the orchestration scenarios will be injected, if they are present in the same JVM. If not, Netflix Eureka will be queried for a Rest Facade instance of the JMX Bean and the business logic will be executed via these facades.

Transactional behavior will also be implemented in the orchestration services: if a ‘ProcessService’ class is annotated with the annotation ‘@TransactionalFanout’, all the services cited in the annotation must be available as JMX Beans for this ‘ProcessService’ instance to process requests; otherwise it must mark itself DOWN in Netflix Eureka, so no additional requests are routed here.

One additional piece of functionality that we use in ‘ProcessService’ is the Hystrix library from Netflix. Every communication with other Micro Services is annotated with Hystrix annotations: if a call to another Micro Service as a JMX Bean fails, Hystrix guarantees that, as a fallback, a Rest Facade instance is discovered from Netflix Eureka and called (a failed call also guarantees that the JMX Bean instance is marked DOWN in Netflix Eureka, so the same instance will not be hit again).
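To illustrate, a hypothetical sketch of this pattern with the hystrix-javanica annotations (the field names ‘productServiceJmx’ and ‘restTemplate’ and the ‘product’ service id are made up; the real orchestration code is in the ‘processes’ projects):

@HystrixCommand(fallbackMethod = "giveProductViaRest")
public Product giveProduct(Integer productId) {
    // fast path: in-JVM binary call over the injected JMX Bean
    return productServiceJmx.giveProduct(productId);
}

public Product giveProductViaRest(Integer productId) {
    // fallback: a Ribbon/Eureka backed RestTemplate resolves the 'product'
    // service id to a healthy Rest Facade instance in the cloud
    return restTemplate.getForObject("http://product/product/{id}", Product.class, productId);
}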

‘sar-utility’ project contains the utility classes necessary to introduce Spring capabilities to JBoss JMX Service Archives, which they don’t possess out of the box. There are other libraries on the internet that can provide this functionality, but one limitation with them is that they start up a JBoss AS wide application context, which we don’t want for our Micro Service concept. We want every Micro Service to start its own application context, isolated from the other Micro Services.

‘annotation’ and its underlying projects are responsible for the implementation of the @TransactionalFanout and @TransactionalOrchestration annotations. The goal we want to achieve with the first one is that any class with this annotation has access to all the JMX Beans defined in the annotation, and if not, this ‘Orchestration Service’ must mark itself DOWN in Netflix Eureka; the latter changes the failure behavior of Hystrix.

‘eureka_patch’ – @TransactionalFanout depends on the health check feature of Netflix Eureka, but unfortunately the ‘@ConditionalOnClass’ annotation in the class ‘EurekaDiscoveryClientConfiguration’ was causing problems, because @TransactionalFanout tries to configure the health check on the fly and this was too late for the Spring application context configuration lifecycle; for this reason I had to patch this class and remove the annotation from the static class declaration ‘EurekaHealthCheckHandlerConfiguration’.

‘hystrix_patch’ – we need this project for the modifications we have to make to the Hystrix framework for transactional orchestration.

‘health-check’ – this project contains the basic interfaces and components necessary for the Netflix Eureka health check mechanisms.

‘support/eureka’ project is a Spring Boot application to initialize and run the Netflix Eureka framework.

‘support/zuul’ project is also a Spring Boot application, to initialize and run the Netflix Zuul framework, which is an edge server (by definition our Micro Services should not be open to the outside world without security measures; it is also best practice not to open single Micro Services directly but only the services doing the orchestration, so we will only allow external users to access the ProcessService over the Zuul server).

‘assembler’ – this project assembles our Micro Service artifacts via the Maven Assembly plugin so as not to create a nightmare for our operations people during delivery (depending on the number of Micro Services you can have 100s of Web Archives (war’s) or jar’s to deploy, and because we are not using Enterprise Archives (ear’s), to prevent Monolith behavior, we need such constructs).

Detailed Project Analysis
The source code for the project is available at Github under the following link….
Github
but I would like to give some hints here about the magic happening, so people have pointers when searching the GitHub repository.

– API Projects – Modular Classloading
The basis of the solution is the JBoss modular classloading system; for this we need the API projects, because those are the only thing that we will share between Micro Services.

For example, the Product class, which defines one of our model objects, looks like the following in the ‘product_api_v1’ project.

package org.salgar.product.api.v1.model;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "PRODUCT_TA")
public class Product {
	@Id
	@GeneratedValue(strategy = GenerationType.IDENTITY)
	private Integer productId;
	private String name;

	public Integer getProductId() {
		return productId;
	}

	public void setProductId(Integer productId) {
		this.productId = productId;
	}

	public String getName() {
		return name;
	}

	public void setName(String name) {
		this.name = name;
	}
}

snippet 1

It is a standard POJO and the only interesting thing about it is the ‘javax.persistence’ annotations; because our final solution will have database persistence and transactional capabilities, these POJOs must be decorated this way. Purists could say: define these POJOs as interfaces and annotate the classes in another layer, but the JBoss modular classloading mechanism can deal with this without any problem.

We also have to define our service interfaces, because we are going to communicate with the other Micro Services via JMX Beans.

package org.salgar.product.api.v1;

import org.salgar.healthcheck.HealthCheck;
import org.salgar.product.api.v1.model.Product;

public interface ProductService extends HealthCheck {
	Product giveProduct(Integer productId);
	Product saveProduct(Product product);
}

snippet 2

This is a simple service; the only interesting thing here is the extended HealthCheck interface, which contains the standard methods used in the Netflix Eureka health check mechanisms.

The Maven artifact that is produced for ‘product_api_v1’ must be placed in a special directory with a special descriptor in the JBoss AS. The descriptor (called module.xml) looks like the following.

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.3" name="com.salgar.product_api.v1">
    <resources>
        <resource-root path="product_api_v1-1.0-SNAPSHOT.jar" />
    </resources>
    <dependencies>
        <module name="org.salgar.healthcheck.api" export="true"/>
        <module name="javax.persistence.api" export="true" />
        <module name="org.hibernate" />
        <module name="org.javassist" />
    </dependencies>
</module>

snippet 3

This will be placed in the following directory structure in JBoss.

JBoss Module 1
picture 6

and directory content will look like the following.

JBoss Module 2
picture 7

The path structure where we place the artifacts reflects the ‘name’ attribute of module.xml, ‘com.salgar.product_api.v1’, and the ‘resource-root’ element indicates which jars this module provides.
The really interesting part is the ‘dependencies’ element: the JBoss modular classloading mechanism gives us the possibility to reference other modules that are already configured in JBoss. As you can see, because we used javax.persistence annotations in our POJOs, the modules ‘javax.persistence.api’, ‘org.hibernate’ and ‘org.javassist’ are referenced. As I mentioned before, we will also use the health check functionality for Netflix Eureka, so we reference that module as well.

– Service Archives (SAR)
As we discussed previously, we will implement our business logic via JMX Beans, to be able to use the plug and play functionality the JBoss Micro Kernel provides and to design our Micro Services independent from each other, in a loosely coupled fashion.

There are two sticking points here. First, JMX definition/configuration files are really verbose and we don’t want to deal with them, because they would make the daily development effort quite tedious. Second, to achieve loose coupling, we want to use dependency injection: if a JMX Bean depends on another JMX Bean in the same JVM, it should locate the bean instance and inject it via autowiring, and if a bean instance is not there, it should continue without causing any error. This is the auto discovery feature that I promised.

Fortunately the Spring Framework can fulfill these two requirements for us; Spring has a special library for registering and discovering JMX Beans (which can be used as normal Spring beans after discovery, so they can be autowired).

Unfortunately, out of the box there is no Spring support in JBoss AS or in the SAR concept (in later versions of JBoss there is some support, but it starts an application-wide Spring Application Context, which we don’t want for our Micro Services; we want every Micro Service to be self contained).

To be able to provide Spring support, I had to write some code in the project ‘sar-utility’.

First of all, there are some conventions in the JMX Bean lifecycle, like ‘start’ and ‘stop’. If we follow these conventions, create a class implementing start and stop methods and place it in a SAR, then JBoss will automatically discover this JMX Bean, register it, execute the ‘start’ method when the Bean is initialized and call ‘stop’ when the container is shut down. One special point here: the JMX Bean must implement an interface containing ‘MBean’ in its name. For us this is ‘SpringInitializerMBean’, and the JMX Bean implementation that will configure the Spring Application Context looks like the following.

package org.salgar.mbean;

import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class SpringInitializer implements SpringInitializerMBean {
	private static final Log LOG = LogFactory.getLog(SpringInitializer.class);
	private static final String SPRING_CTX = "classpath:/META-INF/jboss-spring.xml";
	private Object ctx;

	public SpringInitializer() {
		LOG.info("Starting....");
	}

	// called by JBoss when the SAR is deployed
	public void start() throws Exception {
		LOG.info("starting");
		installApplicationContext();
	}

	// called by JBoss when the container shuts down
	public void stop() throws Exception {
		closeApplicationContext();
	}

	@SuppressWarnings("rawtypes")
	private void installApplicationContext() {
		try {
			// load the context class reflectively, so this utility has no hard compile-time Spring dependency
			Class contextClass = Class.forName("org.springframework.context.support.ClassPathXmlApplicationContext");

			@SuppressWarnings("unchecked")
			Constructor constructor = contextClass.getConstructor(String.class);
			this.ctx = constructor.newInstance(SPRING_CTX);
		} catch (Throwable e) {
			LOG.error("Unable to load Application Context '" + SPRING_CTX + "'. Keeping existing context. Reason: " + e.getMessage(), e);
		}
	}

	private void closeApplicationContext() {
		if (ctx != null) {
			try {
				Method close = ctx.getClass().getMethod("close");
				close.invoke(ctx);
				LOG.info("applicationContext closed.");
			} catch (Throwable t) {
				LOG.error("Unable to close applicationContext '" + SPRING_CTX + "'. Reason: " + t.getMessage()
						+ ". Restart JBoss if possible.", t);
			}
		}
	}

	@Override
	public void test() {
		LOG.info("Starting ....");
	}
}

snippet 4

If you look closely at the code, in the ‘start’ method we are looking for a Spring Application Context configuration file called ‘classpath:/META-INF/jboss-spring.xml’ (which should exist in every SAR project that we implement) to initialize the Spring Application Context.

We will now look at the implementation logic of the Order Micro Service in ‘order_sar_v1’.

First of all, we have to tell Maven to package everything as a Service Archive (SAR); fortunately JBoss provides such a Maven Plugin. The following is an excerpt of the important parts of the pom.xml of ‘order_sar_v1’.

 
 ......
 <packaging>jboss-sar</packaging>
 ......
  <plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>jboss-packaging-maven-plugin</artifactId>
    <extensions>true</extensions>
    <version>2.2</version>
  </plugin>
  .......

snippet 5

Important points here are the ‘packaging’ type ‘jboss-sar’ and the ‘jboss-packaging-maven-plugin’; these will create an ‘order_sar_v1.sar’.

Then we can look at the famous ‘/META-INF/jboss-spring.xml’,

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">
        
        <context:component-scan base-package="org.salgar"/>
        
        <import resource="classpath:/META-INF/context-export/context-applicationContext.xml" />
        <import resource="classpath:/META-INF/order-service/applicationContext-orderService.xml" />
        <import resource="classpath:/META-INF/order-service/applicationContext-dao.xml" />
</beans>

snippet 6

which will activate ‘component-scan’ for autowiring, then initialize the Spring JMX components that register the beans to the JVM’s JMX Server (‘context-applicationContext.xml’), initialize the beans for our business logic (‘applicationContext-orderService.xml’) and finally ‘applicationContext-dao.xml’ for the ‘javax.persistence’ layer.

‘context-applicationContext.xml’

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">

    <context:mbean-export registration="failOnExisting" server="mbeanServer" />

    <bean id="mbeanServer" class="org.springframework.jmx.support.MBeanServerFactoryBean">
        <property name="locateExistingServerIfPossible" value="true" />
    </bean>

</beans>

snippet 7

This locates the JMX Server and registers the MBean export mechanism.

applicationContext-orderService.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

	<!-- MBEAN EXPORTER -->
	<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter" lazy-init="false">
		<property name="beans">
			<map>
				<entry key="salgar:name=salgar-order-service-v1,type=org.salgar.order.v1.imp.OrderServiceImp,artifactId=salgar-order-service-v1" value-ref="order-service" />
			</map>
		</property>
		<property name="registrationBehaviorName" value="REGISTRATION_REPLACE_EXISTING" />
		<property name="assembler" ref="assembler" />
	</bean>

	<!-- will create management interface using annotation metadata -->
	<bean id="assembler" class="org.springframework.jmx.export.assembler.MetadataMBeanInfoAssembler">
		<property name="attributeSource" ref="jmxAttributeSource" />
	</bean>

	<bean id="jmxAttributeSource" class="org.springframework.jmx.export.annotation.AnnotationJmxAttributeSource" />

	<bean id="order-service" class="org.salgar.order.v1.imp.OrderServiceJmx" />
</beans>

snippet 8

This creates the Order Service and registers it with

<!-- MBEAN EXPORTER -->
<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter" lazy-init="false">
	<property name="beans">
		<map>
			<entry key="salgar:name=salgar-order-service-v1,type=org.salgar.order.v1.imp.OrderServiceImp,artifactId=salgar-order-service-v1" value-ref="order-service" />
		</map>
	</property>
	<property name="registrationBehaviorName" value="REGISTRATION_REPLACE_EXISTING" />
	<property name="assembler" ref="assembler" />
</bean>

snippet 9

At this point we have to look at the ‘org.salgar.order.v1.imp.OrderServiceJmx’ class.

package org.salgar.order.v1.imp;

import java.util.List;

import org.salgar.order.api.v1.OrderService;
import org.salgar.order.api.v1.model.Order;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jmx.export.annotation.ManagedOperation;
import org.springframework.jmx.export.annotation.ManagedOperationParameter;
import org.springframework.jmx.export.annotation.ManagedOperationParameters;
import org.springframework.jmx.export.annotation.ManagedResource;

@ManagedResource(objectName = "salgar:name=salgar-order-service-v1,type=org.salgar.order.v1.imp.OrderServiceImp,artifactId=salgar-order-service-v1", description = "Order Service V1", log = true, logFile = "jmx.log")
public class OrderServiceJmx implements OrderService {
	@Autowired
	private OrderService orderService;

	@Override
	@ManagedOperation(description = "Gets a parameter as String and delivers an Order")
    @ManagedOperationParameters({
    	@ManagedOperationParameter(name="orderId", description="Id of the order that we want to load.")
    })
	public Order giveOrder(Integer id) {
		return orderService.giveOrder(id);
	}

	@Override
	@ManagedOperation(description = "Saves an order object")
    @ManagedOperationParameters({
    	@ManagedOperationParameter(name="order", description="Order that we want to save.")
    })
	public Order saveOrder(Order order) {
		return orderService.saveOrder(order);
	}

	@Override
	@ManagedOperation(description = "Keep alive test")
    @ManagedOperationParameters()
	public String giveAlive() {
		return orderService.giveAlive();
	}

	@Override
	@ManagedOperation(description = "Gives the orders of the customer")
    @ManagedOperationParameters({
    	@ManagedOperationParameter(name="customerId", description="Id of the customer who owns the orders")
    })
	public List<Order> giveCustomerOrders(Integer customerId) {
		return orderService.giveCustomerOrders(customerId);
	}
}

snippet 10

As you might see, there are special Spring annotations for JMX; the Spring MBean Exporter uses this information to register the MBean with the JMX Server. For example, ‘@ManagedResource(objectName = “salgar:name=salgar-order-service-v1,type=org.salgar.order.v1.imp.OrderServiceImp,artifactId=salgar-order-service-v1”, description = “Order Service V1”, log = true, logFile = “jmx.log”)’ registers the OrderServiceJmx class as the JMX Bean ‘salgar:name=salgar-order-service-v1,type=org.salgar.order.v1.imp.OrderServiceImp,artifactId=salgar-order-service-v1’, while ‘@ManagedOperation(description = “Gives the orders of the customer”)’ registers the operation ‘public List<Order> giveCustomerOrders’ with the input parameter ‘customerId’, described by ‘@ManagedOperationParameter(name=”customerId”, description=”Id of the customer who owns the orders”)’.
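The ‘OrderService’ API interface is not shown here, but it presumably mirrors the ‘ProductService’ pattern; a hedged sketch based on the operations above:

package org.salgar.order.api.v1;

import java.util.List;

import org.salgar.healthcheck.HealthCheck;
import org.salgar.order.api.v1.model.Order;

// Hedged sketch of the OrderService API, inferred from the operations of OrderServiceJmx.
public interface OrderService extends HealthCheck {
	Order giveOrder(Integer id);

	Order saveOrder(Order order);

	List<Order> giveCustomerOrders(Integer customerId);
}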

OrderServiceJmx is just a facade for the JMX functionality registration; the autowired ‘OrderServiceImpl’ class is responsible for the implementation of the Business Logic.

package org.salgar.order.v1.imp;

import java.util.List;

import org.salgar.order.api.v1.OrderService;
import org.salgar.order.api.v1.model.Order;
import org.salgar.order.v1.dao.OrderRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Component
@Transactional
public class OrderServiceImpl implements OrderService {
	// the alive signal returned to health checks; this field declaration is missing
	// in the original excerpt, the concrete value here is an illustrative assumption
	private static final String alive_signal = "ALIVE";

	@Autowired
	private OrderRepository orderRepository;
	
	@Override
	@Transactional(readOnly = true, propagation = Propagation.NEVER)
	public String giveAlive() {
		return alive_signal;
	}

	@Override
	@Transactional(readOnly = true)
	public Order giveOrder(Integer id) {
		return orderRepository.findById(id);
	}

	@Override
	@Transactional(readOnly = false, propagation = Propagation.REQUIRED)
	public Order saveOrder(Order order) {
		return orderRepository.saveOrder(order);
	}
	
	@Override
	@Transactional(readOnly = true)
	public List<Order> giveCustomerOrders(Integer customerId) {
		List<Order> results = orderRepository.giveCustomerOrders(customerId);
		
		return results;
	}
}

snippet 12

As you might notice, this class accesses the API classes that we defined before. Now it is time to show how to configure this SAR to use JBoss Modular Classloading. We do this with a special JBoss configuration file, ‘jboss-deployment-structure.xml’, which is placed in the META-INF directory; the content of the file looks like the following….

<?xml version="1.0"?>
<jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.1">
    <deployment>
        <dependencies>
        	<module name="org.salgar.order.api.1_0"/>
        	<module name="org.salgar.product.api.1_0"/>
        	<module name="org.salgar.customer.api.1_0" />
        	<module name="org.hibernate" />
       </dependencies>
        <resources>
        	<resource-root path="lib/sar-utility-1.0-SNAPSHOT.jar" />
        	<resource-root path="lib/spring-context-4.2.6.RELEASE.jar" />
        	<resource-root path="lib/spring-beans-4.2.6.RELEASE.jar" />
        	<resource-root path="lib/spring-core-4.2.6.RELEASE.jar" />
        	<resource-root path="lib/spring-aop-4.2.6.RELEASE.jar" />
        	<resource-root path="lib/spring-expression-4.2.6.RELEASE.jar" />
        </resources>
        <exclusions>
        </exclusions>
    </deployment>
</jboss-deployment-structure>

snippet 13

The modules that we defined before, ‘org.salgar.order.api.1_0’, ‘org.salgar.product.api.1_0’ and ‘org.salgar.customer.api.1_0’, are configured as dependencies here; with these instructions our SAR classloader will load these libraries via JBoss Modular Classloading. Also, because we are using ‘javax.persistence’, ‘org.hibernate’ is loaded from JBoss Modular Classloading as well, since JBoss already provides this library (this also prevents class cast exceptions when we later inject these JMX Beans into ‘ProcessService’ and try to orchestrate the transactions).

Finally we tell the classloader to load the following libraries from the ‘lib’ directory of the SAR: first ‘sar-utility-1.0-SNAPSHOT.jar’, which is responsible for initializing Spring for our SAR, and then the necessary Spring libraries. (This is also really critical; it gives us the possibility of deploying Micro Services with different Spring versions if required, instead of a Monolith’s constant version number of libraries for the whole application.)

Another interesting detail in the ‘OrderServiceImpl’ class is the autowiring of ‘OrderRepository’, which is the ‘javax.persistence’ repository implementation for Order beans.
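Based on the methods that ‘OrderServiceImpl’ calls, a hedged sketch of the ‘OrderRepository’ interface (living in ‘org.salgar.order.v1.dao’, per the import in ‘OrderServiceImpl’) would be:

package org.salgar.order.v1.dao;

import java.util.List;

import org.salgar.order.api.v1.model.Order;

// Hedged sketch of the repository interface, inferred from the JPA implementation below.
public interface OrderRepository {
	Order saveOrder(Order order);

	Order findById(Integer id);

	List<Order> giveCustomerOrders(Integer customerId);
}

The concrete JPA implementation looks like the following….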

package org.salgar.order.v1.dao; // package assumed from the OrderRepository import in OrderServiceImpl

import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.Query;

import org.salgar.order.api.v1.model.Order;
import org.springframework.stereotype.Repository;

@Repository
public class JpaOrderRepository implements OrderRepository  {
	@PersistenceContext
	private EntityManager entityManager;
	
	@Override
	public Order saveOrder(Order order) {
		return entityManager.merge(order);
	}

	@Override
	public Order findById(Integer id) {
		return entityManager.find(Order.class, id);
	}

	@Override
	public List<Order> giveCustomerOrders(Integer customerId) {
		Query query = entityManager.createQuery("SELECT o FROM Order o WHERE o.customer.id= :id ");
		@SuppressWarnings("unchecked")
		List<Order> results = (List<Order>) query.setParameter("id", customerId).getResultList();
		return results;
	}
}

snippet 14

which requires the autowiring of the EntityManager with the @PersistenceContext annotation. To be able to do that we have to configure the persistence context, which happens in ‘META-INF/applicationContext-dao.xml’ and ‘persistence.xml’.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jee="http://www.springframework.org/schema/jee" xmlns:context="http://www.springframework.org/schema/context" xmlns:mvc="http://www.springframework.org/schema/mvc" xmlns:tx="http://www.springframework.org/schema/tx" xmlns:aop="http://www.springframework.org/schema/aop" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd">
        
        <jee:jndi-lookup id="datasource" jndi-name="java:jboss/datasources/microMySqlDS" />
        
        <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
        	<property name="jtaDataSource" ref="datasource" />
        </bean>
        
        <!-- bean id="txManager" class="org.springframework.orm.jpa.JpaTransactionManager">
        	<property name="entityManagerFactory" ref="entityManagerFactory" />
        </bean-->
        <bean id="txManager" class="org.springframework.transaction.jta.JtaTransactionManager"/>
        
        <tx:annotation-driven transaction-manager="txManager" />
</beans>

snippet 15

There are a few points here that are interesting for us. First is the choice of the TransactionManager: the way I designed the solution, Micro Services are deployed in a Plug and Play fashion and have to discover each other, which is especially critical for joining transactions; this can only be realized with Container Managed Transactions defined with the Java Transaction API (JTA). For this reason we have to use ‘org.springframework.transaction.jta.JtaTransactionManager’ as transaction manager, which automatically discovers the underlying Container Transaction Manager of JBoss. Second, we have to define a datasource; our datasource is deployed to JBoss and located over the JNDI tree. (Here we are connecting to a MySQL database, but I will modify the sample to work with the Hypersonic database to be able to run this project with less configuration; I use MySQL here to be able to simulate the cluster feature.)

And the ‘persistence.xml’

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="2.0" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
	<persistence-unit name="ORDER_V1_SAR">
		<provider>org.hibernate.ejb.HibernatePersistence</provider>
		<class>org.salgar.order.api.v1.model.Order</class>
		<class>org.salgar.product.api.v1.model.Product</class>
		<class>org.salgar.customer.api.v1.model.Customer</class>

		<properties>
			<property name="hibernate.bytecode.use_reflection_optimizer" value="false" />
			<property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver" />
			<property name="hibernate.connection.password" value="${org.salgar.ds.pass}" />
			<!-- property name="hibernate.connection.url" value="jdbc:h2:data/micro" /-->
			<property name="hibernate.connection.username" value="root" />
			<property name="hibernate.show_sql" value="true" />
			<property name="hibernate.dialect" value="org.hibernate.dialect.MySQLDialect" />
			<property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.JBossAppServerJtaPlatform"/>
		</properties>
	</persistence-unit>
</persistence>

snippet 16

Again the interesting points: first, we have to declare the classes that we want to use with ‘javax.persistence’, in this case ‘Order’, ‘Product’ and ‘Customer’; second, we set ‘hibernate.transaction.jta.platform’ to ‘org.hibernate.service.jta.platform.internal.JBossAppServerJtaPlatform’ for Container Managed JTA Transactions from JBoss.

One final point: do you remember the HealthCheck interface that OrderService extends? We implemented the functionality for it in OrderServiceImpl.

@Override
@Transactional(readOnly = true, propagation = Propagation.NEVER)
public String giveAlive() {
	return alive_signal;
}

snippet 17

It is quite primitive and you can surely think of something better for a production environment, but the general idea is that Netflix Eureka will periodically call this method and expects to get the alive signal as the correct response, to be able to direct traffic to this JMX Bean.

Other than these details, OrderServiceImpl is not too complex; it is just doing some basic use cases that you would see in real life.

– Rest Facades
Now we have configured our SARs to implement the Business Cases of our Micro Services. Unfortunately, the libraries in the market providing support for Micro Service infrastructure (mainly Netflix Eureka) do not know anything about JMX; they are built to interact with REST. So remember what I said previously: for orchestration purposes, when several Micro Services communicate, they will do this over JMX as long as they are deployed to the same JVM. For outside clients (outside JVMs), they will provide their services via these Rest Facades.

For an easy integration with Netflix Eureka, we will implement the Rest Facades as Spring Boot applications (mainly because of the Spring Cloud libraries), but instead of deploying them as standalone Spring Boot applications, we will deploy them as Web Archives (WAR) into JBoss AS.

For example, in the ‘order_v1_rest’ project, to implement a Spring Boot application we first have to prepare a class that performs the initialization duties….

package org.salgar.service;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.context.web.SpringBootServletInitializer;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.context.annotation.ImportResource;

@SpringBootApplication
@EnableDiscoveryClient
@ImportResource(locations = {"classpath:/META-INF/spring/org/salgar/orderservice/v1/applicationContext.xml"})
public class OrderServiceRestApplication extends SpringBootServletInitializer {
	
	@Override
	protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
		return builder.sources(OrderServiceRestApplication.class);
	}
	
	public static void main(String[] args) {
		SpringApplication.run(OrderServiceRestApplication.class, args);
	}
}

snippet 18

The interesting parts here are the @SpringBootApplication annotation, which configures the Spring Boot application, the @EnableDiscoveryClient annotation, which configures the Netflix Eureka libraries, and the @ImportResource annotation, which will load the application context that is necessary for the discovery of the JMX Services.

Since we are deploying the Spring Boot application in a WAR, we have to extend ‘OrderServiceRestApplication’ from ‘SpringBootServletInitializer’; this ensures that the application will run in a Servlet Container.

Spring Boot also needs some configuration files, first ‘bootstrap.yml’..

spring:
  application:
    name: ${project.artifactId}
  jmx:
    default-domain: ${project.artifactId}
  cloud:
    config:
      uri: ${vcap.services.${PREFIX:}configserver.credentials.uri:http://user:password@localhost:8888}

snippet 19

Here some entries are obvious, like the application name that will pop up in Netflix Eureka, but the jmx entry is a little bit tricky. Spring Boot also registers some beans into the JMX Server, and since we will deploy several Micro Services as Spring Boot applications in WARs, these would collide; to prevent this we have to define a namespace with the ‘default-domain’ entry.

And now application.yml,

eureka:
  appinfo:
    replicate:
      interval: 10
  instance:
    leaseRenewalIntervalInSeconds: 10
    metadataMap:
      instanceId: ${vcap.application.instance_id:${spring.application.name}:${spring.application.instance_id:${random.value}}}
  client:
    registryFetchIntervalSeconds: 5
    healthcheck:
      enabled: true
    instanceInfoReplicationIntervalSeconds: 10

snippet 20

This file mainly contains the configuration parameters that are necessary for Netflix Eureka. The most interesting one for us is the property ‘healthcheck->enabled’; this activates the mechanism by which Netflix Eureka periodically asks the Rest Facades for their health status.

Before we look at what is happening in the Java classes, let’s look at the Spring Application Context.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jee="http://www.springframework.org/schema/jee" xmlns:context="http://www.springframework.org/schema/context" xmlns:mvc="http://www.springframework.org/schema/mvc" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc.xsd">
	
	<context:component-scan base-package="org.salgar"/>
	
	<mvc:annotation-driven />
	
	<!-- jee:jndi-lookup id="mbeanServerConnection" jndi-name="jmx/rmi/RMIAdaptor" expected-type="javax.management.MBeanServerConnection" /-->

	<bean id="proxyOrderService" class="org.springframework.jmx.access.MBeanProxyFactoryBean">
		<property name="objectName" value="salgar:name=salgar-order-service-v1,type=org.salgar.order.v1.imp.OrderServiceImp,artifactId=salgar-order-service-v1" />
		<property name="proxyInterface" value="org.salgar.order.api.v1.OrderService" />
		<!-- property name="server" ref="mbeanServerConnection" /-->
	</bean>
</beans>

snippet 21

Once again we see how the Spring JMX libraries can help us: the ‘org.springframework.jmx.access.MBeanProxyFactoryBean’ class automatically discovers the JMX Server, locates the service ‘salgar:name=salgar-order-service-v1,type=org.salgar.order.v1.imp.OrderServiceImp,artifactId=salgar-order-service-v1’ and casts it to the interface ‘org.salgar.order.api.v1.OrderService’, so we can inject it into the Rest Facades.
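To make clearer what this XML does behind the scenes, here is a hedged sketch of the equivalent programmatic use of ‘MBeanProxyFactoryBean’ (for illustration only; the project itself uses the XML form, and the class name here is hypothetical):

import org.salgar.order.api.v1.OrderService;
import org.springframework.jmx.access.MBeanProxyFactoryBean;

// Hedged sketch: programmatic equivalent of the 'proxyOrderService' XML bean definition.
public class ProxyOrderServiceSketch {
	public static OrderService createProxy() throws Exception {
		MBeanProxyFactoryBean factory = new MBeanProxyFactoryBean();
		factory.setObjectName("salgar:name=salgar-order-service-v1,type=org.salgar.order.v1.imp.OrderServiceImp,artifactId=salgar-order-service-v1");
		factory.setProxyInterface(OrderService.class);
		factory.afterPropertiesSet(); // locates the MBeanServer and builds the proxy
		return (OrderService) factory.getObject();
	}
}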

Now let’s look at how our Rest Facades look.

package org.salgar.service; // package assumed, matching OrderServiceRestApplication

import java.util.List;

import javax.inject.Named;

import org.salgar.order.api.v1.OrderService;
import org.salgar.order.api.v1.model.Order;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class OrderServiceRest {
	@Autowired
	@Named("proxyOrderService")
	private OrderService orderService;
	
	@RequestMapping("/order/{orderId}")
	public Order giveOrder(@PathVariable("orderId") Integer id) {
		Order order =  orderService.giveOrder(id);
		
		return order;
	}
	
	@RequestMapping(path = "/save_order", method = RequestMethod.POST)
	public Order saveOrder(@RequestBody Order order) {
		return orderService.saveOrder(order);
	}
	
	@RequestMapping("/customerOrders/{customerId}")
	public List<Order> giveCustomerOrders(@PathVariable("customerId") Integer customerId) {
		List<Order> result =  orderService.giveCustomerOrders(customerId);
		
		return result;
	}
}

snippet 22

For demonstration purposes this is an extremely simple service. ‘OrderServiceRest’ is annotated with ‘@RestController’ so that Spring configures it as a Rest Service. We also inject our JMX Bean, which will do the real work for us; as you remember, the Rest Facade does nothing but delegate calls to the JMX Bean. Now you may ask: what happens if for some reason the JMX Bean is not available, what will prevent the Rest Facade from crashing? This brings us to our next point. As a principle in Micro Services, the client layer should never communicate with a specific instance of a Rest Facade; we should have composite services which orchestrate the Micro Services, and these orchestration services should get an instance from Netflix Eureka, more precisely from Netflix Ribbon. As I previously mentioned, our JMX Beans have Health Check functionality; the moment a Health Check reports problems, Netflix marks the Rest Facades these JMX Beans are injected into with the ‘DOWN’ state. So when an orchestration service asks Netflix Eureka/Ribbon for an instance, it will not deliver any instance that is in the ‘DOWN’ state.

Java code that implements the Health Checks looks like the following.

package org.salgar.service.healthchecker;

import javax.inject.Named;

import org.salgar.healthcheck.RestHealthIndicator;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HealthCheckerV1Factory {
	@Autowired
	@Named("proxyOrderService")
	private org.salgar.order.api.v1.OrderService orderService;
	
	@Bean
	public RestHealthIndicator<org.salgar.order.api.v1.OrderService> getHealthIndicator() {
		return new RestHealthIndicator<org.salgar.order.api.v1.OrderService>(orderService);
	}
}

snippet 23

‘HealthCheckerV1Factory’ autowires the ‘OrderService’ JMX Bean, because we will decide whether this Rest Facade is in ‘UP’ or ‘DOWN’ status depending on the health of that JMX Bean. Then we create another bean with return type ‘RestHealthIndicator’, and now comes the critical part: ‘RestHealthIndicator’ implements the ‘org.springframework.boot.actuate.health.HealthIndicator’ interface….

package org.salgar.healthcheck;

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;

public class RestHealthIndicator<T extends HealthCheck> implements HealthIndicator {
	private T service;
	
	public RestHealthIndicator(T service) {
		this.service = service;
	}
	
	@Override
	public Health health() {
		try {
			String result = service.giveAlive();
			if(result == null || "".equals(result)) {
				return Health.down().withDetail("result", result).build();
			}
		} catch (Throwable t) {
			return Health.down().withDetail("" + t.getMessage(), t).build();
		}
		return Health.up().build();
	}
}

snippet 24

This is the class that Netflix Eureka searches for in the classpath and registers to do the Health Check, if the option ‘healthcheck->enabled’ is turned on in ‘application.yml’, which in our case it is. Now Netflix Eureka will periodically check whether this JMX Bean instance is healthy or not; if not, the facade will be marked ‘DOWN’ and won’t be used by composite services.

The last point we will look at for ‘order_v1_rest’ is its interaction with JBoss Modular Classloading; we again need access to ‘order_api’, so we load the dependency from the module ‘org.salgar.order.api.1_0’.

<?xml version="1.0"?>
<jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.1">
    <deployment>
        <dependencies>
            <module name="org.salgar.order.api.1_0" />
        </dependencies>
        <exclusions>
        </exclusions>
    </deployment>
</jboss-deployment-structure>

snippet 25

Process/Composite Services
During this whole article, you have gotten a good idea about my vision of Process/Composite Services. Micro Services by definition deal with concrete and isolated business cases, like finding/editing/creating products, orders, customers, etc.; if they start accumulating functionality, that is the way leading us back to Monoliths. So we still need a layer to orchestrate things, like finding the product and the customer and creating an order with them. This layer is our Composite/Process Services; other than the business requirements I cited, it must also fulfill technical requirements, like scalability, availability, etc. The orchestration services should help us fulfill those as well.

This is how we reach these goals…

package org.salgar.process.service;

.........

@Order(Ordered.HIGHEST_PRECEDENCE)
@RestController
@Transactional
@TransactionalFanout( services = {"proxyProductServiceV1" , "proxyOrderServiceV1", 
		"proxyCustomerServiceV1"})
public class ProcessService {
	private final static Log LOG = LogFactory.getLog(ProcessService.class);
	private boolean routeRestProductV1 = false;
	private boolean routeRestProductV2 = false;
	private boolean routeRestOrderV1 = false;
	private boolean routeRestOrderV2 = false;
	private boolean routeRestCustomerV1 = false;
	private boolean routeRestCustomerV2 = false;
	
	@Autowired(required = false)
	@Named("proxyProductServiceV1")
	private org.salgar.product.api.v1.ProductService productServiceV1;

	@Autowired(required = false)
	@Named("proxyProductServiceV2")
	private org.salgar.product.api.v2.ProductService productServiceV2;
	
	@Autowired(required = false)
	@Named("proxyOrderServiceV1")
	private org.salgar.order.api.v1.OrderService orderServiceV1;
	
	@Autowired(required = false)
	@Named("proxyOrderServiceV2")
	private org.salgar.order.api.v2.OrderService orderServiceV2;
	
	@Autowired(required = false)
	@Named("proxyCustomerServiceV1")
	private org.salgar.customer.api.v1.CustomerService customerServiceV1;
	
	@Autowired(required = false)
	@Named("proxyCustomerServiceV2")
	private org.salgar.customer.api.v2.CustomerService customerServiceV2;
	
	@Autowired
	private ProcessFacade processFacade;
	
	@PostConstruct
	private void defineRoutes() {
		if(productServiceV1 == null) {
			routeRestProductV1 = true;
		} else {
			try {
				String healthCheck = productServiceV1.giveAlive();
				if(healthCheck == null || "".equals(healthCheck)) {
					routeRestProductV1 = true;
				}
			} catch (Throwable t) {
				LOG.error(t.getMessage(), t);
				routeRestProductV1 = true;
			}
		}
		
		if(productServiceV2 == null) {
			routeRestProductV2 = true;
		} else {
			try {
				String healthCheck = productServiceV2.giveAlive();
				if(healthCheck == null || "".equals(healthCheck)) {
					routeRestProductV2 = true;
				}
			} catch (Throwable t) {
				LOG.error(t.getMessage(), t);
				routeRestProductV2 = true;
			}
		}
		
		if(orderServiceV1 == null) {
			routeRestOrderV1 = true;
		} else  {
			try {
				String healthCheck = orderServiceV1.giveAlive();
				if(healthCheck == null || "".equals(healthCheck)) {
					routeRestOrderV1 = true;
				}
			} catch (Throwable t) {
				LOG.error(t.getMessage(), t);
				routeRestOrderV1 = true;
			}
		}
		
		........
	}

	@RequestMapping("/product/v1/{productId}")
	@Transactional(readOnly = true)
	public org.salgar.product.api.v1.model.Product getProductV1(@PathVariable int productId)
			throws JsonParseException, JsonMappingException, IOException {
		if (routeRestProductV1) {
			return processFacade.executeFallBackProductV1(productId);
		}

		org.salgar.product.api.v1.model.Product result = processFacade.giveProductV1(productId);

		return result;
	}
	
	@RequestMapping(path = "/product/v1/saveProduct", method = RequestMethod.POST)
	@Transactional(readOnly = true)
	public org.salgar.product.api.v1.model.Product saveProductV1(@RequestBody org.salgar.product.api.v1.model.Product product)
			throws JsonParseException, JsonMappingException, IOException {
		if (routeRestProductV1) {
			return processFacade.executeFallBackSaveProductV1(product);
		}

		org.salgar.product.api.v1.model.Product result = processFacade.saveProductV1(product);

		return result;
	}

	@RequestMapping("/product/v2/{productId}")
	@Transactional(readOnly = true)
	public org.salgar.product.api.v2.model.Product getProductV2(@PathVariable int productId) throws JsonParseException, JsonMappingException, IOException {
		if (routeRestProductV2) {
			return processFacade.executeFallBackProductV2(productId);
		}
		
		org.salgar.product.api.v2.model.Product result = processFacade.giveProductV2(productId);

		return result;
	}
	
	@RequestMapping(path = "/product/v2/saveProduct", method = RequestMethod.POST)
	@Transactional(readOnly = true)
	public org.salgar.product.api.v2.model.Product saveProductV2(@RequestBody org.salgar.product.api.v2.model.Product product) throws JsonParseException, JsonMappingException, IOException {
		if (routeRestProductV2) {
			return processFacade.executeFallBackSaveProductV2(product);
		}
		
		org.salgar.product.api.v2.model.Product result = processFacade.saveProductV2(product);

		return result;
	}
	
	@RequestMapping("/order/v1/{orderId}")
	@Transactional(readOnly = true)
	public org.salgar.order.api.v1.model.Order giveOrderV1(@PathVariable int orderId) throws JsonParseException, JsonMappingException, IOException {
		if (routeRestOrderV1) {
			return processFacade.executeFallBackGiveOrderV1(orderId);
		}
		
		org.salgar.order.api.v1.model.Order result = processFacade.giveOrderV1(orderId);

		return result;
	}
	
	.......
	
	@RequestMapping(path = "/saveOrderWProductWCustomer/v2", method = RequestMethod.POST)
	@Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
	public void saveOrderV2WithProductWithCustomer(@RequestBody org.salgar.process.context.v2.OrderContext orderContext) throws JsonParseException, JsonMappingException, IOException {
		org.salgar.customer.api.v2.model.Customer customerInternal = null;
		
		if (routeRestCustomerV2) {
			customerInternal = processFacade.executeFallBackGiveCustomerV2(orderContext.getCustomer().getId());
		} else {
			customerInternal = processFacade.giveCustomerV2(orderContext.getCustomer().getId());
		}
		org.salgar.product.api.v2.model.Product productInternal;
		if (routeRestProductV2) {
			productInternal = processFacade.executeFallBackProductV2(orderContext.getProduct().getProductId());
		} else {
			productInternal = processFacade.giveProductV2(orderContext.getProduct().getProductId());
		}
		
		List<org.salgar.product.api.v2.model.Product> products = new ArrayList<org.salgar.product.api.v2.model.Product>();
		products.add(productInternal);
		orderContext.getOrder().setProducts(products);
		orderContext.getOrder().setCustomer(customerInternal);
		
		if (routeRestOrderV2) {
			processFacade.executeFallBackSaveOrderV2(orderContext.getOrder());
		} else {
			processFacade.saveOrderV2(orderContext.getOrder());
		}
	}
	
	@RequestMapping(path = "/saveOrderWProductWCustomerTransactionProof/v1", method = RequestMethod.POST)
	@Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
	public void saveOrderWithProductWithCustomerTransactionProof(@RequestBody org.salgar.process.context.v1.OrderContext orderContext) throws JsonParseException, JsonMappingException, IOException {
		org.salgar.customer.api.v1.model.Customer customerInternal = null;
		
		if (routeRestCustomerV1) {
			customerInternal = processFacade.executeFallBackSaveCustomerV1(orderContext.getCustomer());
		} else {
			customerInternal = processFacade.saveCustomerV1(orderContext.getCustomer());
		}
		org.salgar.product.api.v1.model.Product productInternal;
		if (routeRestProductV1) {
			productInternal = processFacade.executeFallBackSaveProductV1(orderContext.getProduct());
		} else {
			productInternal = processFacade.saveProductV1(orderContext.getProduct());
		}
		
		List<org.salgar.product.api.v1.model.Product> products = new ArrayList<org.salgar.product.api.v1.model.Product>();
		products.add(productInternal);
		orderContext.getOrder().setProducts(products);
		orderContext.getOrder().setCustomer(customerInternal);
		
		throw new RuntimeException("Fake exception to prove transaction feature!");
	
		
//		if (routeRestOrderV1) {
//			processFacade.executeFallBackSaveOrderV1(orderContext.getOrder());
//		} else {
//			processFacade.saveOrderV1(orderContext.getOrder());
//		}
	}
}

snippet 26

Above you see the ‘ProcessService’, our orchestration layer, which will orchestrate the Micro Services Customer, Product and Order (there is also a notion of versioning, but I will explain it more explicitly in a later chapter).

The first thing you have to pay attention to here: we are injecting our JMX Beans and not the Rest Facades, for transaction and performance reasons; we want the orchestration of Micro Services to happen over binary protocols, and we also don’t want the costs of serialization to REST.

Please pay close attention to the annotation ‘@Autowired(required = false)’: we are telling Spring to inject the JMX Bean of our Micro Service if it can find it. If not, it will silently continue; this provides the Plug and Playability of our system. If we decide that for this Process/Composite Service performance and transactions are not important and don’t install the JMX Bean to this JVM, it will just communicate over REST.

Now let’s assume these JMX Beans are installed in the JVM; then we have to decide whether these JMX Beans are healthy or not. We do this in the ‘defineRoutes()’ method, which is annotated with ‘@PostConstruct’, guaranteeing that it runs after Spring initializes this Process/Composite Service. That means if the JMX Beans are injected, we can do their health checks. This is achieved by calling the ‘giveAlive()’ method of the ‘HealthCheck’ interface that all of our JMX Beans implement. If the response is positive, all calls will be routed to the JMX Beans, otherwise to a Rest Facade that we find over the Eureka Discovery Client. Please note that we are not setting the state of our Process/Composite Service to DOWN or UP at Eureka (yes, as you will shortly see, the Process/Composite Service will also be registered with Eureka); we assume there is a temporary problem with our JMX Beans and they can recover, so we don’t need to take the Process/Composite Service DOWN. It will internally route the calls to the healthy Rest Facade instances that exist somewhere in the Eureka cluster.

Until now we looked at the part of the problem that concentrates on availability and performance; what about transactions? If we don’t have the JMX Beans, we can’t guarantee that our Micro Service will be able to join a transaction. Here my custom annotation ‘@TransactionalFanout( services = {“proxyProductServiceV1” , “proxyOrderServiceV1”, “proxyCustomerServiceV1”})’ comes into play. If we place this annotation on a Process/Composite Service, the Eureka health check behaves differently (I had to customize the Eureka/Spring Boot code to make this work, which I will explain in a later chapter): it now executes a health check against all three JMX Beans defined in the annotation, and if any of them does not deliver a healthy message, it takes this Process/Composite Service to DOWN status in Eureka until the bean recovers and delivers a healthy message again. Does this mean that other functionality not depending on these JMX Beans becomes unavailable as well? Yes, but if this is a problem for you, you have to design your Process/Composite Services so that they do not contain functionality unrelated to the transactional functionality; for us, protecting the transactional integrity has a higher priority.
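‘@TransactionalFanout’ is a custom annotation; here is a hedged sketch of its declaration, inferred from its usage on ‘ProcessService’ (the package is an assumption, following ‘@TransactionalOrchestration’, which we will meet later):

package org.salgar.micro.annotation;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hedged sketch: lists the JMX proxy bean names whose combined health decides
// the Eureka status of the annotated Process/Composite Service.
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface TransactionalFanout {
	String[] services();
}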

Ok, now that we discussed the health checks, let’s look at how we are dealing with the Business Cases; let’s observe the method that delivers us a Product.

        @RequestMapping("/product/v1/{productId}")
	@Transactional(readOnly = true)
	public org.salgar.product.api.v1.model.Product getProductV1(@PathVariable int productId)
			throws JsonParseException, JsonMappingException, IOException {
		if (routeRestProductV1) {
			return processFacade.executeFallBackProductV1(productId);
		}

		org.salgar.product.api.v1.model.Product result = processFacade.giveProductV1(productId);

		return result;
	}

snippet 27

The first thing that draws our attention is how we are routing the requests: if our JMX Beans are not healthy, we are routing requests to the fallback/Rest Facade methods. You can also observe that this RestController delegates all the business logic to a ProcessFacade; this of course keeps the design clean from the perspective of Rest and Transaction annotations and does not pollute the Business Case implementation, but there is another reason, called Hystrix.

Hystrix is a Circuit Breaker framework: if your system is unhealthy and causing errors and you keep calling the failing functionality, you will at some point deplete the resources of the system, memory and thread wise. If Hystrix detects such a condition, it will break the circuit, so calls will not queue up on the failed interface, and keep it open until the system recovers (it will periodically send requests to test whether the system has recovered or not). In the meantime, Hystrix has another mechanism called ‘fallbackMethod’: we can define in the Hystrix annotation an alternative method to call if the one we are calling is continuously failing. We will use this feature so that, instead of calling our failing JMX Bean instance, we find a healthy instance via a Rest Facade from Netflix Eureka. This way we again fulfill the High Availability principle of Micro Services; instead of being stuck, the Process/Composite Service will find a working instance in our Cluster/Cloud.

There are two points we have to be careful about. First, we can’t let the Hystrix mechanism engage for business errors, like a user not having permission to call the Product Service; Hystrix should only react to technical errors, like connection timeouts, etc. Luckily there is a way to configure this in Hystrix: if we wrap all of our business errors in a special exception type (HystrixBadRequestException), Hystrix will know that it should not count these exceptions towards the circuit breaker and not call the ‘fallbackMethod’ for them. (Another way to guarantee that Hystrix does not react to business exceptions is the ‘ignoreExceptions’ property of the ‘@HystrixCommand’ annotation; all exceptions declared in this property are treated as business exceptions and Hystrix will not call the fallback method for them.)
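As a hedged illustration of the wrapping approach (the class and the business exception type here are hypothetical):

import com.netflix.hystrix.exception.HystrixBadRequestException;

import org.salgar.product.api.v1.ProductService;

// Hedged sketch: a business error is wrapped in HystrixBadRequestException so that
// Hystrix neither counts it towards the circuit breaker nor invokes the fallbackMethod.
public class BusinessExceptionWrappingExample {
	private final ProductService productServiceV1;

	public BusinessExceptionWrappingExample(ProductService productServiceV1) {
		this.productServiceV1 = productServiceV1;
	}

	public org.salgar.product.api.v1.model.Product giveProductV1(int productId) {
		try {
			return productServiceV1.giveProduct(productId);
		} catch (SecurityException businessError) { // hypothetical business exception type
			throw new HystrixBadRequestException("permission denied", businessError);
		}
	}
}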

The second sticking point: Hystrix must not call the ‘fallbackMethod’ if the method is calling several Micro Services under one transaction context, because then we can’t guarantee transactional integrity; the JTA container can’t manage the transactional integrity if the services are called over the Rest Facades.

Now we can observe the giveProductV1 method on the ProcessFacade.

@Component
public class ProcessFacadeImpl implements ProcessFacade {
	private static final Logger LOG = LoggerFactory.getLogger(ProcessFacadeImpl.class);
	
	@Autowired(required = false)
	@Named("proxyProductServiceV1")
	private org.salgar.product.api.v1.ProductService productServiceV1;

	.........

	@Autowired
	private LoadBalancerClient loadBalancerClient;

	private RestTemplate restTemplate = new RestTemplate();

	@Override
	@HystrixCommand(fallbackMethod = "executeFallBackProductV1", commandProperties = {
			@HystrixProperty(name = "execution.isolation.strategy", value = "SEMAPHORE"),
			@HystrixProperty(name = "circuitBreaker.requestVolumeThreshold", value = "1"),
			@HystrixProperty(name = "circuitBreaker.sleepWindowInMilliseconds", value = "10000") })
	public org.salgar.product.api.v1.model.Product giveProductV1(int productId)
			throws JsonParseException, JsonMappingException, IOException {
		org.salgar.product.api.v1.model.Product result = productServiceV1.giveProduct(productId);

		return result;
	}

	@Override
	public org.salgar.product.api.v1.model.Product executeFallBackProductV1(int productId)
			throws JsonParseException, JsonMappingException, IOException {
		ServiceInstance instance = loadBalancerClient.choose("product_v1_rest");

		URI uri = instance.getUri();
		String url = uri.toString() + "/product_v1_rest/product/" + productId;

		ResponseEntity<String> result = restTemplate.getForEntity(url, String.class);

		ObjectMapper mapper = new ObjectMapper();
		org.salgar.product.api.v1.model.Product product = mapper.readValue(result.getBody(),
				org.salgar.product.api.v1.model.Product.class);

		return product;
	}
.......
}

snippet 28

In its basics, this method does nothing other than calling the JMX Bean, where the whole business logic lies. What is interesting here is the configuration of the @HystrixCommand annotation and the implementation of the ‘fallbackMethod’ ‘executeFallBackProductV1’.

@HystrixCommand gets four configuration values from us. First, the already discussed ‘fallbackMethod’. Second, the @HystrixProperty ‘execution.isolation.strategy’, which is very critical for us: Hystrix internally uses a thread pool to realize the circuit-breaking functionality, which means that when our execution thread calls a method annotated with @HystrixCommand, it stops there and the execution continues on a Hystrix thread; as a consequence, any ThreadLocal value bound to our execution thread will not be there. And that is the case here: we hold a very important value in our execution thread because we used the @Transactional annotation in our Process/Composite Service. If we follow the normal processing rules of Hystrix, transactions will not work, but if we set the value “SEMAPHORE” for “execution.isolation.strategy”, the command executes on the calling thread, so the ThreadLocal values of the execution thread stay visible. This costs some performance, but we are developing Micro Services, so we can always scale.

The third Hystrix property is ‘circuitBreaker.requestVolumeThreshold’, the number of errors before Hystrix breaks the circuit. In other use cases of @HystrixCommand one would wait for a number of failures before breaking the circuit, but in our transactional system we want to break the circuit directly and redirect the traffic immediately when we see errors on an interface.

The fourth Hystrix property, ‘circuitBreaker.sleepWindowInMilliseconds’, controls after how many milliseconds Hystrix checks whether the interface has healed itself; in our case we set it to 10s, but it can be much lower.

One final point to stress here: as I mentioned previously, we have to treat our Business Exceptions differently than Technical Exceptions; we have to wrap our Business Exceptions in ‘HystrixBadRequestException’ if we don’t want Hystrix to call the ‘fallbackMethod’ for a business exception.

Now let’s look at the implementation of the ‘fallbackMethod’s. As I mentioned previously, if for any reason the calls to the JMX Beans fail, we want Hystrix to detect this and redirect the calls to the Rest Facades. So the question is: how are we going to find healthy Rest Facades? There is another Netflix component that can help us, Netflix Ribbon. Netflix Ribbon is a load balancer that is in continuous contact with Netflix Eureka, and it keeps track of the healthy Rest Facades.

So to be able to use Netflix Ribbon, we inject the ‘LoadBalancerClient’ into ‘ProcessFacadeImpl’ via autowiring. When Hystrix calls the ‘fallbackMethod’, it contacts the ‘LoadBalancerClient’ and asks for a healthy instance by service name.

ServiceInstance instance = loadBalancerClient.choose("product_v1_rest");

snippet 29

The name of the Rest Facade, here ‘product_v1_rest’, is the same name the facade registers with in Eureka: its ‘spring.application.name’, which we set in the ‘bootstrap.yml’ of the ‘product_v1_rest’ Rest Facade.

Now that we explained the basics of the ‘ProcessService’ orchestration layer, let’s look at some problematic areas, like transaction behavior over multiple Micro Services.

Please look at the code snippet below,

        @RequestMapping(path = "/saveOrderWProductWCustomer/v2", method = RequestMethod.POST)
        @Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
	public void saveOrderV2WithProductWithCustomer(@RequestBody org.salgar.process.context.v2.OrderContext orderContext) throws JsonParseException, JsonMappingException, IOException {
		org.salgar.customer.api.v2.model.Customer customerInternal = null;
		
		if (routeRestCustomerV2) {
			customerInternal = processFacade.executeFallBackGiveCustomerV2(orderContext.getCustomer().getId());
		} else {
			customerInternal = processFacade.giveCustomerV2(orderContext.getCustomer().getId());
		}
		org.salgar.product.api.v2.model.Product productInternal;
		if (routeRestProductV2) {
			productInternal = processFacade.executeFallBackProductV2(orderContext.getProduct().getProductId());
		} else {
			productInternal = processFacade.giveProductV2(orderContext.getProduct().getProductId());
		}
		
		List<org.salgar.product.api.v2.model.Product> products = new ArrayList<org.salgar.product.api.v2.model.Product>();
		products.add(productInternal);
		orderContext.getOrder().setProducts(products);
		orderContext.getOrder().setCustomer(customerInternal);
		
		if (routeRestOrderV2) {
			processFacade.executeFallBackSaveOrderV2(orderContext.getOrder());
		} else {
			processFacade.saveOrderV2(orderContext.getOrder());
		}
	}

snippet 30

As it stands, even if things go wrong with one of the Micro Services that we orchestrate, this method will call the fallback methods/Rest Facades to accomplish the orchestration. If you observe closely, you will discern that ‘processFacade.giveCustomerV2’ and ‘processFacade.giveProductV2’ are read methods, so the information read from the SAR or the Rest Facade does not have any negative effect on the transaction; the only thing that has an effect is ‘processFacade.saveOrderV2’, but even here, if this fails, the ‘Order’ object will be rolled back, whether in the SAR or in the Rest Facade.

Ok, it is easy if we orchestrate several reads with one write, but what if we have several save operations, like in the code below?

@RequestMapping(path = "/saveOrderAndProductAndCustomer/v2", method = RequestMethod.POST)
@Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
public void saveOrderV2AndProductAndCustomer(@RequestBody org.salgar.process.context.v2.OrderContext orderContext) throws JsonParseException, JsonMappingException, IOException {
		org.salgar.customer.api.v2.model.Customer customerInternal = null;
		
		customerInternal = processFacade.saveCustomerV2(orderContext.getCustomer());
		
		org.salgar.product.api.v2.model.Product productInternal;
		productInternal = processFacade.saveProductV2(orderContext.getProduct());
		
		
		List<org.salgar.product.api.v2.model.Product> products = new ArrayList<org.salgar.product.api.v2.model.Product>();
		products.add(productInternal);
		orderContext.getOrder().setProducts(products);
		orderContext.getOrder().setCustomer(customerInternal);
		
		processFacade.saveOrderV2(orderContext.getOrder());
	}

snippet 31

Now, in this method we can’t delegate the call to a Rest Facade and we can’t let Hystrix call the fallback methods, because we can’t guarantee the transactional integrity. An alternative would be to remove the @HystrixCommand annotation from the ‘saveCustomerV2’, ‘saveProductV2’ and ‘saveOrderV2’ methods, but we don’t want that either, because when these methods are not participating in an orchestration, it is totally legitimate for them to call the fallback method.

What we need is a different behavior from the ‘@HystrixCommand’ annotation: if we are in the middle of an orchestration, it should not call the fallback methods. To achieve this I created two new annotations and also patched the Hystrix libraries.

The first of the annotations is called ‘@TransactionalOrchestration’; it is used on the methods of the ProcessService to signal that we are orchestrating several Micro Services under these methods, so every Hystrix annotation running under this context should behave differently.
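A hedged sketch of the ‘@TransactionalOrchestration’ declaration (the package comes from the pointcut you will see in snippet 34):

package org.salgar.micro.annotation;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hedged sketch: marker annotation; an Aspect around it flips a ThreadLocal flag,
// so that @TransactionalHystrixCommand knows an orchestration is in progress.
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface TransactionalOrchestration {
}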

This brings us to the second annotation, ‘@TransactionalHystrixCommand’, which is a modified version of ‘@HystrixCommand’, for which I had to patch the Hystrix libraries.

Here is the problematic place in the Hystrix code, in the GenericCommand class…

    Object process(AbstractHystrixCommand.Action action) throws Exception {
        try {
            Object result = action.execute();
            this.flushCache();
            return result;
        } catch (CommandActionExecutionException var5) {
            Throwable cause = var5.getCause();
            if (this.isIgnorable(cause)) {
                throw new HystrixBadRequestException(cause.getMessage(), cause);
            } else if (cause instanceof Exception) {
                throw (Exception) cause;
            } else {
                throw Throwables.propagate(cause);
            }
        }
    }

snippet 32

As you can see, in the case of an error, Hystrix checks whether the exception is one of the exceptions defined in the ‘ignoreExceptions’ attribute; if so, Hystrix treats it as a Business Exception and does not call the fallback method, by wrapping it in a ‘HystrixBadRequestException’. For our transactional orchestrations we don’t want a fallback to be called at all, so we catch and wrap the exception the same way if we are running under a ‘@TransactionalOrchestration’ context.

So how will the Hystrix annotation know this, you ask? The standard @HystrixCommand can’t know it; for this we have to implement ‘@TransactionalHystrixCommand’. There are a bunch of classes we have to modify, which you can see in the ‘hystrix_patch’ project. Basically we have to create a new Aspect that pointcuts ‘@TransactionalHystrixCommand’ and implement a ‘TransactionalGenericCommand’, the main difference being…

        } catch (CommandActionExecutionException var5) {
            Throwable cause = var5.getCause();
            if (this.isIgnorable(cause)) {
                throw new HystrixBadRequestException(cause.getMessage(), cause);
            }
            Boolean isTransactionalOrchestration = TransactionalOrchestrationThreadLocal.getTransactionalOrchestrationThreadLocal();
            if (isTransactionalOrchestration != null && isTransactionalOrchestration.booleanValue()) {
                throw new HystrixBadRequestException(cause.getMessage(), cause);
            } else if (cause instanceof Exception) {
                throw (Exception) cause;
            } else {
                throw Throwables.propagate(cause);
            }
        }

snippet 33

‘TransactionalOrchestrationThreadLocal.getTransactionalOrchestrationThreadLocal()’ here is an access to a ThreadLocal variable, which we set via the ‘@TransactionalOrchestration’ annotation through an Aspect; you can see it in the ‘annotation-api’ project.

@Aspect
public class TransactionOrchestrationAspect {
     public TransactionOrchestrationAspect() {
     }

     @Pointcut("@annotation(org.salgar.micro.annotation.TransactionalOrchestration)")
     public void transactionOrchestrationPointcut() {
     }

     @Around("transactionOrchestrationPointcut()")
     public Object methodsAnnotatedWithTransactionalOrchestration(ProceedingJoinPoint joinPoint) throws Throwable {
          Object result = null;
          try {
              TransactionalOrchestrationThreadLocal.setTransactionalOrchestrationThreadLocal(Boolean.TRUE);
              result = joinPoint.proceed();
          } finally {
              TransactionalOrchestrationThreadLocal.setTransactionalOrchestrationThreadLocal(Boolean.FALSE);
          }
          return result;
     }
}

snippet 34
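The ‘TransactionalOrchestrationThreadLocal’ holder itself is not shown in this blog; a minimal sketch of it, under the assumption that it is a plain static ThreadLocal wrapper, could look like the following:

public class TransactionalOrchestrationThreadLocal {
	// one flag per thread; true while a transactional orchestration is running
	private static final ThreadLocal<Boolean> TRANSACTIONAL_ORCHESTRATION = new ThreadLocal<Boolean>();

	public static Boolean getTransactionalOrchestrationThreadLocal() {
		return TRANSACTIONAL_ORCHESTRATION.get();
	}

	public static void setTransactionalOrchestrationThreadLocal(Boolean value) {
		TRANSACTIONAL_ORCHESTRATION.set(value);
	}
}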

So at the end, the method ‘saveOrderV2AndProductAndCustomer’ on ProcessService will look like…

@RequestMapping(path = "/saveOrderAndProductAndCustomer/v2", method = RequestMethod.POST)
@Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
@TransactionalOrchestration
public void saveOrderV2AndProductAndCustomer(@RequestBody org.salgar.process.context.v2.OrderContext orderContext) throws JsonParseException, JsonMappingException, IOException {
    .....
}

snippet 35

and, for example, the method ‘saveOrderV2’ on ProcessFacadeImpl….

@Override
@TransactionalHystrixCommand(fallbackMethod = "executeFallBackSaveOrderV2", commandProperties = {
		@HystrixProperty(name = "execution.isolation.strategy", value = "SEMAPHORE"),
		@HystrixProperty(name = "circuitBreaker.requestVolumeThreshold", value = "1"),
		@HystrixProperty(name = "circuitBreaker.sleepWindowInMilliseconds", value = "10000") })
public org.salgar.order.api.v2.model.Order saveOrderV2(org.salgar.order.api.v2.model.Order order) throws JsonParseException, JsonMappingException, IOException {
	.........
}

snippet 36

– sar_utility
This project contains the necessary utility class to provide Spring initialization mechanisms to the JBoss Service Archives.

Personally, I didn’t want to pull JMX-specific classes into the project and wanted to use ‘convention over configuration’. For this purpose JBoss needs some Java interfaces created with a certain naming convention, like the following…

package org.salgar.mbean;

public interface SpringInitializerMBean {
	void test();
}

snippet 37

The ending ‘MBean’ is a must for JBoss to recognize the interface as a JMX Bean; the implementation then looks like the following…

public class SpringInitializer implements SpringInitializerMBean {
	private static final Log LOG = LogFactory.getLog(SpringInitializer.class);
	private static final String SPRING_CTX = "classpath:/META-INF/jboss-spring.xml";
	private Object /*ConfigurableApplicationContext*/ ctx;
	
	public SpringInitializer() {
		System.out.println("starting");
	}
	
	public void start() throws Exception {
		System.out.println("starting");
		installApplicationContext();
	}
	
	public void stop() throws Exception {
		closeApplicationContext();
	}
	
	@SuppressWarnings("rawtypes")
	private void installApplicationContext() {
        try {
            Class contextClass = Class.forName("org.springframework.context.support.ClassPathXmlApplicationContext");

            @SuppressWarnings("unchecked")
			Constructor constructor = contextClass.getConstructor(new Class[] {String.class});
            Object tmpCtx = constructor.newInstance(new Object[] { SPRING_CTX });
            
            //ConfigurableApplicationContext tmpCtx = new ClassPathXmlApplicationContext(SPRING_CTX);
            if (tmpCtx != null) {
                //log(this.serviceName+" activate new applicationContext");
                ctx = tmpCtx;
            }
        } catch (Throwable e) {
            LOG.error(" Unable to load applicationContext '" + SPRING_CTX + "'. keeping existing context. Reason: " + e.getMessage(), e);
        }
    }

    private void closeApplicationContext() {
        if (ctx != null) {
            try {
                Method close = ctx.getClass().getMethod("close", (Class<?>[]) null);
                close.invoke(ctx, (Object[]) null); //ctx.close();
                //log("applicationContext closed.");
            } catch (Throwable e) {
                LOG.error("Unable to close applicationContext '" + SPRING_CTX + "'. Reason: " + e
                        + ". Restart jboss if possible.", e);
            }
        }
    }

	@Override
	public void test() {
		System.out.println("Starting...");
		
	}
}

snippet 38

Again as a convention, JBoss looks for ‘start’ and ‘stop’ methods. The ‘start’ method searches for a specific file, ‘classpath:/META-INF/jboss-spring.xml’, in the classpath, which indicates how the Spring Context should be started. This file must be in the classpath of every JBoss Service Archive that needs access to Spring functionality.
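The content of ‘jboss-spring.xml’ itself is a plain Spring XML context; a minimal sketch (the bean name and class are illustrative, not from the repository) could be:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

	<!-- the Spring beans of this SAR, for example the JMX facade of the micro service -->
	<bean id="productService" class="org.salgar.product.impl.ProductServiceImpl" />
</beans>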

-Annotation
This is the project in which we define the annotations that support our transactional requirements.

First of all there is the ‘@TransactionalFanout’ annotation for the Process/Orchestration layer, containing the information about which JBoss SAR’s are necessary to declare a Process Service healthy so it can join a transaction. If any of these SAR’s is not in a healthy state, the instance of the Process/Orchestration service is taken down in Eureka.

The annotation looks like the following…

@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE})
@Documented
public @interface TransactionalFanout {
	String[] services();
}

snippet 39

Now let’s look at the annotation processor that configures the health checker for the Process/Orchestration layer…

@Component
@Order(Ordered.HIGHEST_PRECEDENCE)
public class TransactionalFanoutProcessor implements BeanPostProcessor {
	private ConfigurableListableBeanFactory configurableListableBeanFactory;
	
	@Autowired
	public TransactionalFanoutProcessor(ConfigurableListableBeanFactory configurableListableBeanFactory) {
		this.configurableListableBeanFactory = configurableListableBeanFactory;
	}

	@Override
	public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
		boolean result = bean.getClass().isAnnotationPresent(TransactionalFanout.class);
		
		if(result) {
			TransactionalFanout transactionalFanout = bean.getClass().getAnnotation(TransactionalFanout.class);
			List<HealthCheck> services = new ArrayList<>();
			
			for (String  serviceName : transactionalFanout.services()) {
				HealthCheck healthCheck = (HealthCheck) configurableListableBeanFactory.getBean(serviceName);
				services.add(healthCheck);
			}
			
			ProcessHealthIndicatorImpl processHealthIndicator = new ProcessHealthIndicatorImpl();
			processHealthIndicator.setServices(services);
			configurableListableBeanFactory.registerSingleton(bean.getClass().getSimpleName() + "ProcessHealthIndicator", processHealthIndicator);
		}
		return bean;
	}

	@Override
	public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
		return bean;
	}
}

snippet 40

As you can see, this class implements the ‘BeanPostProcessor’ interface, which signals Spring that this class is going to configure the health checks in the places where ‘@TransactionalFanout’ is present. After this discovery, it locates the ‘HealthCheck’ beans for the service names indicated in the annotation (the HealthCheck beans are already created by the JBoss SAR’s, we are only reusing those), creates a ‘ProcessHealthIndicatorImpl’ bean, populates it with the HealthCheck beans of the services and registers it in the Spring context as a singleton, so that it can report the health status of the Process/Orchestration layer to Eureka.
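Usage of the annotation is then straightforward; a sketch with illustrative bean names (the real service names come from the beans the JBoss SAR’s register) could look like this:

@TransactionalFanout(services = { "productServiceV1", "customerServiceV1", "orderServiceV1" })
public class ProcessService {
	.........
}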

One additional annotation that we define in this project is ‘@TransactionalOrchestration’, which marks for the Hystrix framework that the transaction occurring in this context extends over several micro services, so Hystrix can’t decide in its normal operation mode whether to call fallback methods or not. For this, in the places where this annotation is set, we also set a ThreadLocal signaling that a transactional orchestration is occurring.

This happens with the following AspectJ aspect….

@Aspect
public class TransactionalOrchestrationAspect {
   @Pointcut("@annotation(org.salgar.annotation.TransactionalOrchestration)")
   public void transactionalOrchestrationAnnotationPointcut() {
   }

   @Around("transactionalOrchestrationAnnotationPointcut()")
   public Object methodsAnnotatedWithTransactionalOrchestration(ProceedingJoinPoint joinPoint) throws Throwable {
      Object result = null;
      try {
          TransactionalOrchestrationThreadLocal.setTransactionalOrchestrationThreadLocal(Boolean.TRUE);
          result = joinPoint.proceed();
      } finally {
          TransactionalOrchestrationThreadLocal.setTransactionalOrchestrationThreadLocal(Boolean.FALSE);
      }
      return result;
   }
}

snippet 41

As you can see, this aspect only ensures that the ThreadLocal is set to true before the method execution and back to false after it.

-eureka_patch
In the annotation project you saw that we use the ‘@TransactionalFanout’ annotation to signal that a Process/Orchestration bean should have health checks and register itself to Eureka, and that we then register the ‘ProcessHealthIndicatorImpl’ class for the health checks. Normally Spring Boot has a configuration class that turns on the health check functionality when there is a bean implementing ‘HealthIndicator’ in the Spring context. Unfortunately we can only insert this bean when the annotation processor executes, and that is too late for the Spring Boot configuration class.

The original ‘EurekaDiscoverClientConfiguration’ class contains the inner class ‘EurekaHealthCheckHandlerConfiguration’, which is marked with the conditional annotation shown below. This causes the health check not to activate, because our bean is not yet in the Spring context when ‘EurekaDiscoverClientConfiguration’ initializes (that happens earlier than the execution of the ‘BeanPostProcessor’), so I had to modify the class, create a new ‘CompositeEurekaDiscoverClientConfiguration’ and remove this annotation.

@Configuration
@ConditionalOnProperty(value = "eureka.client.healthcheck.enabled", matchIfMissing = false)
protected static class EurekaHealthCheckHandlerConfiguration {

	@Autowired(required = false)
	private HealthAggregator healthAggregator = new OrderedHealthAggregator();

	@Bean
	@ConditionalOnMissingBean(HealthCheckHandler.class)
	public EurekaHealthCheckHandler eurekaHealthCheckHandler() {
		return new EurekaHealthCheckHandler(this.healthAggregator);
	}
}

snippet 42

As you can see, the only relevant thing is whether the ‘eureka.client.healthcheck.enabled’ property is set to true in ‘application.yml’ or not.
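So a client that wants health-check-based status reporting simply sets this property in its ‘application.yml’:

eureka:
  client:
    healthcheck:
      enabled: true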

-hystrix-patch
I already mentioned the functionality of this project a little bit in the detailed explanation of the ‘process-service’ project. For our transactional orchestrations we need the Hystrix ‘@HystrixCommand’ to behave differently. Normally, in the case of a non-business exception it calls the fallback methods, which in our case would cause a call to the Rest Facade, but then we can’t guarantee transactional integrity, so we have to change the behavior of ‘@HystrixCommand’.

For this purpose I implemented ‘@TransactionalHystrixCommand’, which uses another aspect and another command (both also defined in this project) that prevent Hystrix from calling the fallback methods.

Most of the classes in this project are only necessary to be able to modify the Aspect defining the behavior of ‘@HystrixCommand’. You will notice that I had to place ‘TransactionalGenericCommand’ in the same package as the Netflix classes, because one of the constructors it needs has ‘package’ visibility (it would be nice of the Netflix developers to remove this obstacle); otherwise all the classes are slight modifications of the Netflix classes, only to be able to use ‘TransactionalGenericCommand’ for the following purpose.

        } catch (CommandActionExecutionException var5) {
            Throwable cause = var5.getCause();
            if (this.isIgnorable(cause)) {
                throw new HystrixBadRequestException(cause.getMessage(), cause);
            }
            Boolean isTransactionalOrchestration = TransactionalOrchestrationThreadLocal.getTransactionalOrchestrationThreadLocal();
            if (isTransactionalOrchestration != null && isTransactionalOrchestration.booleanValue()) {
                throw new HystrixBadRequestException(cause.getMessage(), cause);
            } else if (cause instanceof Exception) {
                throw (Exception) cause;
            } else {
                throw Throwables.propagate(cause);
            }
        }

snippet 43

the block…

            Boolean isTransactionalOrchestration = TransactionalOrchestrationThreadLocal.getTransactionalOrchestrationThreadLocal();
            if (isTransactionalOrchestration != null && isTransactionalOrchestration.booleanValue()) {
                throw new HystrixBadRequestException(cause.getMessage(), cause);
            }

snippet 44

now ensures that if HystrixCommand runs under ‘Transactional Orchestration’ it will not trigger Fallback Methods.

‘TransactionalHystrixAspect’ now has a ‘pointcut’ to intercept the calls marked with the ‘@TransactionalHystrixCommand’ annotation.

    @Pointcut("@annotation(org.salgar.hystrix.transaction.annotation.TransactionalHystrixCommand)")
    public void hystrixCommandAnnotationPointcut() {
    }

snippet 45

and the only other major modification in the class is the following part in the ‘methodsAnnotatedWithHystrixCommand’ method…

HystrixInvokable invokable = TransactionalHystrixCommandFactory.getInstance().create(metaHolder);

snippet 46

which uses the ‘TransactionalHystrixCommandFactory’ instead of the ‘HystrixCommandFactory’.

The implementation of ‘TransactionalHystrixCommandFactory’ in turn only changes….

   executable = new TransactionalGenericCommand(HystrixCommandBuilderFactory.getInstance().create(metaHolder));

snippet 47

the command returned by the factory to a ‘TransactionalGenericCommand’.

-health-check
This project contains the classes that are necessary for the health check feature of Netflix Eureka. When Spring Boot detects classes that implement the ‘org.springframework.boot.actuate.health.HealthIndicator’ interface, it automatically uses them to perform health checks. If the result of such a check is negative, the service is set to the ‘DOWN’ state in Netflix Eureka.

In this project you will see two implementations of this interface: ‘RestHealthIndicator’, which checks the health of a Rest Facade (one single Micro Service), and ‘ProcessHealthIndicatorImpl’ for the Process/Orchestration services (which checks the health of all the services that the Process/Orchestration service depends on).

For ex, the implementation of ‘RestHealthIndicator’ looks like the following….

package org.salgar.healthcheck;

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;

public class RestHealthIndicator<T extends HealthCheck> implements HealthIndicator {
	private T service;
	
	public RestHealthIndicator(T service) {
		this.service = service;
	}
	
	@Override
	public Health health() {
		try {
			String result = service.giveAlive();
			if(result == null || "".equals(result)) {
				return Health.down().withDetail("result", result).build();
			}
		} catch (Throwable t) {
			return Health.down().withDetail("" + t.getMessage(), t).build();
		}
		return Health.up().build();
	}
}

snippet 48

In this feasibility study the health check is extremely simple and just checks whether the JMX bean returns a certain string or not. In a real-life application it can be made as complex as necessary, like checking whether a database connection is available, whether a service returns a reference data set successfully, etc… This health check response is available from every JMX Bean because they all have to implement the ‘HealthCheck’ interface in our system.

package org.salgar.healthcheck;

public interface HealthCheck {
	final static String alive_signal = "We are alive!";
	
	public String giveAlive();
}

snippet 49

This method must return a value signaling the health of our application.
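A SAR implementation can be as trivial as returning the constant; a more realistic one could, as mentioned above, validate a database connection. Here is a minimal sketch of the latter; the ‘DataSource’ wiring is an assumption for illustration and not part of the feasibility study:

package org.salgar.healthcheck;

import java.sql.Connection;
import java.sql.SQLException;

import javax.sql.DataSource;

public class DatabaseHealthCheck implements HealthCheck {
	private final DataSource dataSource;

	public DatabaseHealthCheck(DataSource dataSource) {
		this.dataSource = dataSource;
	}

	@Override
	public String giveAlive() {
		// report alive only if a valid connection can be obtained within one second
		try (Connection connection = dataSource.getConnection()) {
			return connection.isValid(1) ? alive_signal : null;
		} catch (SQLException e) {
			// the health indicators interpret null/empty results as DOWN
			return null;
		}
	}
}

The next snippet shows ‘ProcessHealthIndicatorImpl’, which aggregates several such health checks.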

package org.salgar.healthcheck;

import java.util.List;

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;

public class ProcessHealthIndicatorImpl implements HealthIndicator {
	List<? extends HealthCheck> services;
	
	public ProcessHealthIndicatorImpl() {
	}
	
	public ProcessHealthIndicatorImpl(List<? extends HealthCheck> services) {
		this.services = services;
	}

	@Override
	public Health health() {
		for (HealthCheck service : services) {
			if(service == null) {
				return Health.down().withDetail("Service is null!", null).build();
			}
			try {
				String result = service.giveAlive();
				if(result == null || "".equals(result)) {
					return Health.down().withDetail("result", result).build();
				}
			} catch (Throwable t) {
				return Health.down().withDetail("" + t.getMessage(), t).build();
			}
		}
		
		return Health.up().build();
	}

	public void setServices(List<? extends HealthCheck> services) {
		this.services = services;
	}
}

snippet 50

‘ProcessHealthIndicatorImpl’ fulfills the same requirement for the Process/Orchestration services: it iterates over all the services cited in ‘@TransactionalFanout’ and checks their health; if any one of them fails, this class sets the Process\Orchestration service marked with the ‘@TransactionalFanout’ annotation to the DOWN state in Eureka.

-support/eureka
This project contains the necessary configuration to start the Spring Boot Eureka Server. There are three things to pay attention to: the startup/configuration class for Spring Boot and two YAML configuration files, ‘bootstrap.yml’ and ‘application.yml’.

package org.salgar.eureka;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
@EnableDiscoveryClient
public class EurekaApplication {
	public static void main(String[] args) {
		SpringApplication.run(EurekaApplication.class, args);
	}
}

snippet 51

As I said, it is extremely easy, but naturally Spring Boot does lots of magic for us behind the scenes. The ‘@SpringBootApplication’ annotation signals that this class should initialize the Spring Boot components, the ‘@EnableEurekaServer’ annotation enables the configuration of the Eureka Server via ‘EurekaServerConfiguration’ and the ‘@EnableDiscoveryClient’ annotation configures the Eureka Discovery Client via the ‘EurekaDiscoverClientConfiguration’ class.

‘bootstrap.yml’ ….

spring:
  application:
    name: eureka
  cloud:
    config:
      uri: ${vcap.services.${PREFIX:}configserver.credentials.uri:http://user:password@localhost:8888}

snippet 52

defines the Spring Boot application name and the configuration server URI.

‘application.yml’…

server:
  port: 8761
security:
  user:
    password: ${eureka.password} # Don't use a default password in a real app

eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false
  server:
    waitTimeInMsWhenSyncEmpty: 0
  password: ${SECURITY_USER_PASSWORD:password}

snippet 53

defines the port the Eureka Server will listen on and the Eureka client registry values.

One final trick here: to start the Eureka Server via Spring Boot, I didn’t want to fight with managing all the dependencies for the classpath, so I used the ‘spring-boot-maven-plugin’ to create a single executable jar via the following configuration in the ‘pom’ file.

<plugin>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-maven-plugin</artifactId>
	<executions>
		<execution>
			<goals>
				<goal>repackage</goal>
			</goals>
		</execution>
	</executions>
</plugin>

snippet 54

The ‘jar’ that is created can then be run with a simple command like ‘java -jar eureka-1.0-SNAPSHOT.jar’.

-support/zuul
Netflix Zuul acts as the Gateway/Edge Server for our solution. Normally we don’t want our Micro Services to be accessed uncontrolled over intranet/internet; we want the communication to occur only over the Process\Orchestration layer. For this purpose we start Netflix Zuul as a Spring Boot application and do some configuration for the gateway functionality.

@SpringBootApplication
@EnableZuulProxy
public class ZuulApplication {
	public static void main(String[] args) {
		SpringApplication.run(ZuulApplication.class, args);
	}
}

snippet 55

For the configuration we first need two annotations: ‘@SpringBootApplication’ signals that this application will be started as a Spring Boot application, and ‘@EnableZuulProxy’ signals Spring Boot to trigger the ‘ZuulProxyConfiguration’ class to configure the Zuul proxy. The configuration values are provided by two YAML files, ‘bootstrap.yml’ and ‘application.yml’.

spring:
  application:
    name: zuul
  cloud:
    config:
      uri: ${vcap.services.${PREFIX:}configserver.credentials.uri:http://user:password@localhost:8888}

snippet 56

‘bootstrap.yml’ is mainly for Spring Boot and does not contain any configuration values other than the Spring Boot application name and the config server URI.

info:
  component: Zuul Server

endpoints:
  restart:
    enabled: true
  shutdown:
    enabled: true
  health:
    sensitive: false

zuul:
  ignoredServices: "*"
  routes:
    product_process-1_0:
      path: /product-process-1.0-SNAPSHOT/**
      serviceId: product-process_1.0-SNAPSHOT
      stripPrefix: false

server:
  port: 8765

logging:
  level:
    ROOT: INFO
    org.springframework.web: INFO
    com.netflix: DEBUG 

snippet 57

‘application.yml’ is much more interesting: it contains configuration information for the endpoint lifecycles, the server port and, most importantly, the Zuul gateway configuration, mainly ‘ignoredServices: “*”‘, which prevents any direct access to our Micro Services\Rest Facades, and then the routes, which open access only to the services we choose, in this case the Process\Orchestration services.

  routes:
    product_process-1_0:
      path: /product-process-1.0-SNAPSHOT/**
      serviceId: product-process_1.0-SNAPSHOT
      stripPrefix: false

snippet 58

The route we define here accepts requests on the path ‘/product-process-1.0-SNAPSHOT/**’ and forwards them to the service ‘product-process_1.0-SNAPSHOT’ as it is defined in Netflix Eureka (via the Spring Boot ‘application.yml’ in the ‘product-process’ project).
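A client request through the gateway then looks something like the following; the endpoint after the route prefix is illustrative, the concrete paths are defined in the ‘product-process’ project:

curl --noproxy localhost http://localhost:8765/product-process-1.0-SNAPSHOT/giveProduct/v1/9999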

With this configuration, the JBoss instances should not be open to the internet/intranet; all traffic from these sources should come to the Zuul Server, which will direct it to the Process/Orchestration services. This is important for two reasons. First, security: nobody should access our interfaces uncontrolled. Secondly, if you open your interfaces uncontrolled you will have lots of integration problems; let’s say you change something in your Micro Services, whom are you going to notify about these changes so they can adapt their applications? Trust me, having the Zuul gateway will help you a lot with change management, because you can track from a central place who is using your services.

As in ‘support\eureka’, I also used the ‘spring-boot-maven-plugin’ in this project to create a single executable jar file, so Zuul can be started with the command ‘java -jar zuul-1.0-SNAPSHOT.jar’.

-assembler
This project is really relevant for our deployment. You certainly noticed that we are not using an Enterprise Application Archive (EAR), because we want to be able to decide how many instances of each Micro Service we deploy and where; EAR deployments would push us again in the direction of the monolith. We also don’t want to deploy every artifact manually, and at this point the Maven Assembly plugin is a really useful tool.

With the Maven Assembly plugin we can use a description file like the following to put our application together…

<assembly>
	<id>package</id>
	<formats>
		<format>dir</format>
	</formats>
	<includeBaseDirectory>false</includeBaseDirectory>
	<baseDirectory>${project.artifactId}</baseDirectory>
	<dependencySets>
		<dependencySet>
			<includes>
				<include>org.salgar:product_api</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>modules/system/layers/base/org/salgar/product/api/1_0/main</outputDirectory>
			<unpack>false</unpack>
		</dependencySet>
		<dependencySet>
			<includes>
				<include>org.salgar:order_api</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>modules/system/layers/base/org/salgar/order/api/1_0/main</outputDirectory>
			<unpack>false</unpack>
		</dependencySet>
		<dependencySet>
			<includes>
				<include>org.salgar:customer_api</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>modules/system/layers/base/org/salgar/customer/api/1_0/main</outputDirectory>
			<unpack>false</unpack>
		</dependencySet>
		<dependencySet>
			<includes>
				<include>org.salgar:product_sar</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>standalone/deployments</outputDirectory>
			<unpack>false</unpack>
		</dependencySet>
		<dependencySet>
			<includes>
				<include>org.salgar:order_sar</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>standalone/deployments</outputDirectory>
			<unpack>false</unpack>
		</dependencySet>
		<dependencySet>
			<includes>
				<include>org.salgar:customer_sar</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>standalone/deployments</outputDirectory>
			<unpack>false</unpack>
		</dependencySet>
		<dependencySet>
			<includes>
				<include>org.salgar:product_rest</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>standalone/deployments</outputDirectory>
			<!-- outputFileNameMapping>${artifact.artifactId}.${artifact.extension}</outputFileNameMapping-->
			<unpack>false</unpack>
		</dependencySet>
		<dependencySet>
			<includes>
				<include>org.salgar:order_rest</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>standalone/deployments</outputDirectory>
			<!-- outputFileNameMapping>${artifact.artifactId}.${artifact.extension}</outputFileNameMapping-->
			<unpack>false</unpack>
		</dependencySet>
		<dependencySet>
			<includes>
				<include>org.salgar:customer_rest</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>standalone/deployments</outputDirectory>
			<!-- outputFileNameMapping>${artifact.artifactId}.${artifact.extension}</outputFileNameMapping-->
			<unpack>false</unpack>
		</dependencySet>
		<dependencySet>
			<includes>
				<include>org.salgar:product-process</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>standalone/deployments</outputDirectory>
			<!-- outputFileNameMapping>${artifact.artifactId}.${artifact.extension}</outputFileNameMapping-->
			<unpack>false</unpack>
		</dependencySet>
		<dependencySet>
			<includes>
				<include>org.salgar:eureka</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>utility</outputDirectory>
			<unpack>false</unpack>
		</dependencySet>
		<dependencySet>
			<includes>
				<include>org.salgar:zuul</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>utility</outputDirectory>
			<unpack>false</unpack>
		</dependencySet>
	</dependencySets>
</assembly>

snippet 59

This is the assembly description file for the Maven Assembly plugin. Basically what it does is place the Maven artifacts that we defined as dependencies in ‘pom.xml’ into certain directories, like…

<dependencySet>
	<includes>
		<include>org.salgar:product-process</include>
	</includes>
	<useTransitiveDependencies>false</useTransitiveDependencies>
	<outputDirectory>standalone/deployments</outputDirectory>
	<unpack>false</unpack>
</dependencySet>

snippet 60

in this case the dependency ‘org.salgar:product-process’ is copied to the directory ‘standalone/deployments’, which is the deployment directory for JBoss. This way all the dependencies are copied to the necessary locations: the ‘api’ classes go to ‘modules/system/layers/base’ for the modular classloading of JBoss, support projects like ‘Eureka’ and ‘Zuul’ go to the ‘utility’ directory, and nearly all the rest to the ‘deployments’ directory.

Final structure can be copied as a package to destination.

General Topics
Now that we have discussed in a detailed fashion the project structure and the functionality every project provides, I want to say some things about general topics, mainly Performance\Transaction Optimization and the Versioning Concept.

Performance/Transaction Optimizations and Design
Until now I tried to define a template for you to build a transactional Micro Services application, but now we are coming to a very important part: your experience. You cannot blindly apply the concepts of this feasibility study; it doesn’t make sense to make every Micro Service transaction-capable or communicate over JMX. Your business case must decide this: if you have a Micro Service that will only read values from a database, it does not make much sense to implement it as transaction-capable.

Or if failover security is more important for your business case than performance and transaction capability, then implement those services as plain REST instead of JMX, and so on. As I said, you have to design your system depending on your business case and your experience, but this template will support you whatever direction you want to go.

The plug-and-play nature of this template ensures that we can decide independently, per part of our application, where performance/transactions are important and where scalability/failover is important. For example, if you want failover security, don’t install the JMX beans of your Micro Service in the JVM where your Process\Orchestration service lies; this way the Process\Orchestration services will always get their instances of the other Micro Services from Netflix Eureka. If you need performance and failover is secondary, install the JMX bean in the same JVM as the Process\Orchestration Micro Service. The key point is that you don’t have to configure anything, you just have to install it.

When you are designing Micro Services that might need transaction functionality, try to group the Process\Orchestration Micro Services to do only read operations or only write operations; this way you can achieve more optimization (in the case of a failure, read operations can find a healthy instance in Netflix Eureka and continue to function, but for save operations, i.e. transactional orchestrations, it is not a good idea\not possible to fail over via Netflix Eureka).

Many of the projects I am involved in that try to go from a monolith to Micro Services struggle in the design part, which has two aspects: how to divide and conquer the monolith, and the necessary design choices for performance\fallback and transactions.

I strongly encourage you to divide your monolith so that functional columns are constructed at the Process\Orchestration level and individual columns for the Micro Services that are used by several of these Process\Orchestration layers, like we did here: functional columns like ProcessService or AccountingService or AuthorizationService dealing with Micro Services like Customer, Product, Account, Order, Authorization, etc….

Versioning Concept
There is one more topic that I want to point out here: versioning, which is an inseparable part of Micro Services and which you will undoubtedly encounter in your projects. Normally people solve this problem in artificial ways, like creating Java packages for every new version of the Micro Service or with URL conventions, mostly dictated by the classloader mechanisms of the Java application server.

I can think of four possible ways of organizing versioning for Micro Services, which I will point out here. One of them you already saw in picture 1.

Scenario 1:
In this scenario we handle the versioning with the naming convention of the API, SAR and RestFacade projects; the Process\Orchestration layer then exposes these versions to the outside world. This solution prototype is built in the ‘scenario_1’ branch on GitHub.

scenario_1
picture 7

Scenario 2:
This scenario is nearly the same as ‘scenario 1’; the only major difference is that instead of having separate Rest Facade projects for the different versions, it uses one single Rest Facade project and exposes the different versions with naming conventions like ‘giveProductV1’, ‘giveProductV2’, ‘saveProductV1’, ‘saveProductV2’, etc… The implementation change from ‘scenario 1’ is trivial, so I didn’t place it on GitHub; if you are interested, consider it an exercise for the reader.

scenario_2
picture 8

Scenario 3:
This is my favorite option. I personally dislike artificial ways of versioning Micro Services: versioning of Java package names, versioning of class names or versioning of project names are all things imposed on us because of the limitations of Java, or partially out of laziness. It is a limitation of Java because of the classloading problems: think about having a 1.0-SNAPSHOT and a 2.0-SNAPSHOT of a project which contain exactly the same Java package and class structures; under normal conditions you can’t deploy both, because Java can’t decide which is the correct version. But we already discussed above that this is no problem for us, thanks to JBoss’s modular classloading or the incoming JSR-376 with Java 9: we can define exactly in JBoss what will be loaded by the classloader and for which SAR’s and WAR’s it will be available. The other argument you hear in projects, that having two versions of the project on GitHub and managing them is too difficult, well, come on.

scenario_3
picture 9

You will see two branches on GitHub: ‘scenario3_1’ is ‘scenario 3’ with version ‘1.0-SNAPSHOT’ of the project and ‘scenario3_2’ is version ‘2.0-SNAPSHOT’. When the ‘assembly’ project is executed, it produces versions ‘1.0-SNAPSHOT’ and ‘2.0-SNAPSHOT’ of the project to deploy side-by-side in JBoss. If you look at GitHub, the Java projects contain exactly the same Java package names, classes and so on; the classloader and the configuration in ‘jboss-deployment-structure.xml’ decide which class is available to which project.
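A ‘jboss-deployment-structure.xml’ entry that binds a deployment to one specific version of an API module could look roughly like this (the module name is taken from the ‘module.xml’ snippets below; the exact files in the repository may differ):

<?xml version="1.0" encoding="UTF-8"?>
<jboss-deployment-structure>
	<deployment>
		<dependencies>
			<!-- bind this WAR/SAR to version 2_0 of the order api module -->
			<module name="org.salgar.order.api.2_0" />
		</dependencies>
	</deployment>
</jboss-deployment-structure>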

In the next screenshot you can also see that the same project names are used with different versions in the JBoss deployment directory.

scenario_3_deployment
picture 10

This versioning scheme also affects the API projects and how they are configured. Before we start configuring those, I advise you to copy the existing JBoss to another directory and make the changes there if you plan to further test scenario 1. We have to adapt ‘module.xml’ to reflect the versioning changes.

For example, the ‘module.xml’ for the Order API 2.0-SNAPSHOT:

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.3" name="org.salgar.order.api.2_0">
    <resources>
        <resource-root path="order_api-2.0-SNAPSHOT.jar"/>
    </resources>

    <dependencies>
        <module name="org.salgar.healthcheck.api" export="true"/>
        <module name="javax.persistence.api" export="true"/>
        <module name="org.salgar.product.api.2_0" />
        <module name="org.salgar.customer.api.2_0" />
        <module name="org.hibernate" />
        <module name="org.javassist" />
    </dependencies>
</module>

snippet 61

Since the artifact name is now ‘order_api-2.0-SNAPSHOT’, we have to change it in ‘module.xml’ accordingly.

One consequence of this change is that we can no longer have one product-process project serving several versions of the project; we have to have a separate one for every version. This also means some changes in the Zuul route configuration, which you can see on GitHub.

Scenario 4:
This scenario, from the versioning point of view, is quite similar to scenarios 2 and 3; the major difference is that we transfer the Process\Orchestration layer to another application server (mainly to another JVM), in case it is a problem for some people to have the whole JBoss technology stack. In this solution the application server can be anything: WebLogic, WebSphere, JBoss, even non-application servers like Tomcat or Spring Boot. Of course, you have to keep in mind that when the Process\Orchestration layer communicates with the JMX beans remotely, the cost of serialization will be there. I am only mentioning this scenario for completeness; the serialization costs make this solution a no-go. Again, you can treat the implementation as an exercise for the reader.

scenario_4
picture 11

Project Environment Preparation
Now let’s talk a little bit about what you need to run the feasibility application: how to get the code from GitHub, how to build the application with Maven, what to prepare in JBoss and how to deploy the application.

Source Code
To be able to get the source code of the feasibility study, first of all you have to install the ‘Git’ version control application, which can be downloaded from the following URL: Git Download.

You can download the source code from GitHub: first switch to the directory where you want to place the source code and then execute the command

git clone https://github.com/mehmetsalgar/micro_tran.git

snippet 62

Building the Project
Now that we have downloaded the project, we can build it via Maven; if you need it, you can download Maven from Maven Download.

Execute the command ‘mvn clean install’ from the root directory of the project, ‘micro_tran’; a successful execution will produce log output like the following.

maven_build
picture 12

JBoss Installation
You can download JBoss from the following link: JBoss Download. After you extract the files from the ZIP file you should have the following directory structure.

jboss
picture 13

Mostly JBoss does not need any additional configuration, but you can set some parameters in the file ‘$JBOSS_HOME\bin\standalone.conf’ (on Windows, ‘standalone.conf.bat’), like…

set "JAVA_HOME=C:\Java\1.8.0_51"

snippet 63

or memory parameters

rem # JVM memory allocation pool parameters - modify as appropriate.
set "JAVA_OPTS=-Xms1024M -Xmx2048M -XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=512M"

snippet 64

or enabling remote debugging….

rem # Sample JPDA settings for remote socket debugging
set "JAVA_OPTS=%JAVA_OPTS% -agentlib:jdwp=transport=dt_socket,address=8787,server=y,suspend=n"

snippet 65

Modular Classloading
As we discussed before, we are going to use the modular classloading of JBoss, so we have to prepare something. I actually wrote a Maven plugin to do this for us and I will publish it as well, but at the moment we have to do it manually. The feasibility study needs the declaration of four JBoss modules: ‘health-check-api’, ‘customer-api’, ‘product-api’ and ‘order-api’. For these we have to write the module descriptors and place them in the module directories (for example ‘modules/system/layers/base/org/salgar/product/api/1_0/main’, as seen in the assembly descriptor).

Module definition for ‘health-check-api’ looks like the following….

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.3" name="org.salgar.healthcheck.api">
    <resources>
        <resource-root path="health-check-api-1.0-SNAPSHOT.jar"/>
    </resources>
</module>

snippet 66

for ‘customer-api’ for versions ‘v1’ and ‘v2’….. (I am only showing ‘v1’ here, you can modify the parameters yourself for ‘v2’)

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.3" name="org.salgar.customer.api.1_0">
    <resources>
        <resource-root path="customer_api_v1-1.0-SNAPSHOT.jar"/>
    </resources>

    <dependencies>
        <module name="org.salgar.healthcheck.api" export="true"/>
        <module name="javax.persistence.api" export="true"/>
        <module name="org.hibernate" />
        <module name="org.javassist" />
    </dependencies>
</module>

snippet 67

for ‘product-api’ for versions ‘v1’ and ‘v2’…

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.3" name="org.salgar.product.api.1_0">
    <resources>
        <resource-root path="product_api_v1-1.0-SNAPSHOT.jar"/>
    </resources>

    <dependencies>
        <module name="org.salgar.healthcheck.api" export="true"/>
        <module name="javax.persistence.api" export="true"/>
        <module name="org.hibernate" />
        <module name="org.javassist" />
    </dependencies>
</module>

snippet 68

and for ‘order-api’ for versions ‘v1’ and ‘v2’….

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.3" name="org.salgar.order.api.1_0">
    <resources>
        <resource-root path="order_api_v1-1.0-SNAPSHOT.jar"/>
    </resources>

    <dependencies>
        <module name="org.salgar.healthcheck.api" export="true"/>
        <module name="javax.persistence.api" export="true"/>
        <module name="org.salgar.product.api.1_0" />
        <module name="org.salgar.customer.api.1_0" />
        <module name="org.hibernate" />
        <module name="org.javassist" />
    </dependencies>
</module>

snippet 69

As you can see, all the modules have ‘health-check-api’ as a dependency, since we use it in all our micro services to report the health status to Netflix Eureka, and ‘javax.persistence.api’, ‘org.hibernate’ and ‘org.javassist’, since we use Hibernate\JPA annotations for persistence (I placed these on the API classes because I didn’t want to complicate the feasibility project too much; if you don’t want to pollute your API classes with Hibernate\JPA, you can define only interfaces here and the concrete implementations in another layer). ‘order-api’ additionally depends on ‘product-api’ and ‘customer-api’, since we have JPA relation definitions from ‘Order’ to ‘Customer’ and ‘Product’.

Datasources
Since we are discussing a feasibility study about Micro Services and transactions, we need a database to work with. At this point I will explain how to set up a Hypersonic database for JBoss, as this is the easiest to set up; but since I configure Hypersonic as an in-memory database, it will not be possible to test real-life scenarios like multiple JBoss instances accessing the database. There is another branch on GitHub, ‘scenario_1_mysql’, which contains the project configured for a MySQL database; the setup details for it are explained in the appendix.

For a Hypersonic database we have to configure a ‘Datasource’ in JBoss, which happens with the following file.

<?xml version="1.0" encoding="UTF-8"?>

<datasources xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://www.jboss.org/ironjacamar/schema/datasources_1_0.xsd">
  <datasource jndi-name="java:jboss/datasources/microDS" pool-name="microDSPool" enabled="true" use-java-context="true">
    <connection-url>jdbc:h2:${jboss.server.data.dir}/micro/micro</connection-url>
    <driver>h2</driver>
    <security>
      <user-name>sa</user-name>
      <password>${org.salgar.ds.pass}</password>
    </security>
  </datasource>
  <drivers>
	<driver name="h2" module="com.h2database.h2"> 
	       <xa-datasource-class>org.h2.jdbc.JdbcDataSource</xa-datasource-class>
	</driver>
  </drivers>
</datasources>

snippet 70

This is a standard configuration for a Hypersonic database. There are only two interesting parts: first, the password for the connection is taken from a JVM parameter, so you should start JBoss with the parameter ‘-Dorg.salgar.ds.pass=YourPassword’; second, for the location where the Hypersonic data is saved, we use the JBoss internal property ‘jboss.server.data.dir’, which points to where JBoss also saves its local files.
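For example, on Windows the start command with the password parameter would look like this:

standalone.bat -Dorg.salgar.ds.pass=YourPassword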

Now we have one task left to complete the database configuration: we have to create the database objects necessary to run the feasibility study. The SQL file, called ‘create.sql’, can be found under the directory ‘micro_tran’. You can see the SQL it contains below.

CREATE TABLE ORDER_TA (ID INT PRIMARY KEY, CUSTOMERID INT, COMMITDATE NUMBER(15), STATUS INT);
CREATE TABLE PRODUCT_TA (PRODUCTID INT PRIMARY KEY,  NAME VARCHAR(255));
CREATE TABLE ORDER_PRODUCT_TA (PRODUCTID INT, ORDERID INT);
CREATE TABLE CUSTOMER_TA (ID INT PRIMARY KEY,   NAME VARCHAR(255), FIRSTNAME VARCHAR(255), BIRTHDATE NUMBER(12));
CREATE SEQUENCE HIBERNATE_SEQUENCE;
INSERT INTO ORDER_TA (ID,   COMMITDATE , STATUS) VALUES (1, 345345345, 5);
INSERT INTO PRODUCT_TA(PRODUCTID, NAME  ) VALUES(9999, 'Test');

snippet 71

Running this script in JBoss is a little bit tricky: we have to install a web application in JBoss to access the Hypersonic database. The application can be downloaded from the following URL: Hypersonic Console.

You can reach the application at ‘http://localhost:8080/h2console’; you will be greeted with a login screen, where you enter the identification information from the datasource, which should look like the following.

hypersonic_login
picture 14

After the login, you can execute the content of ‘create.sql’ in the form you can see below.


picture 15

Netflix Eureka
As we discussed a lot in the previous chapters, our application heavily depends on the services provided by Netflix Eureka, so we have to start that as well. Luckily, thanks to Spring Boot, it is a trivial task: calling ‘java -jar eureka-1.0-SNAPSHOT.jar’ in the directory where the jar file lies will be enough.

I also made a little trick to make life easier for you, so that you don’t have to fight with the necessary dependencies for Eureka: the following snippet, when placed in the pom.xml, packages all the necessary dependencies for Eureka as well.

<plugin>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-maven-plugin</artifactId>
	<executions>
		<execution>
			<goals>
				<goal>repackage</goal>
			</goals>
		</execution>
	</executions>
</plugin>

snippet 72

When you bring Eureka up along with the JBoss Application Server, you should see the following picture in the Eureka console.

eureka
picture 16

Here you see that all of our services are in the UP state (one thing I didn’t mention: we will use Zuul as gateway and Turbine as monitoring tool, and we need the messaging tool RabbitMQ in our project for the communication between these components; if you don’t configure and start RabbitMQ, you will see ‘Product_Process’ in the DOWN state, so don’t be alarmed).

This brings us to a point that I especially want to mention, the ‘red\green’ pattern. Think about your conventional applications: when you want to deploy a new version, you always have downtime, and depending on the size and complexity of your system it can take 5-6 hours or maybe more. This can be acceptable if you deploy your application every 3 months or so, but that is against the agile agenda, isn’t it; we want to go to our customers more often, and that is a very important point for micro services. It is also not acceptable to have 5-6 hours of downtime every month, so what should we do?

Netflix Eureka and Micro Services help us in this area with the ‘red\green’ pattern. ‘Green’ in this case is the currently working copy of your Micro Services; when you deploy the new version of your Micro Services, they are deployed to the ‘red’ zone, marked as ‘OUT_OF_SERVICE’, until you check\test and are sure that everything operates correctly. For this behavior we need the following configuration parameters in ‘application.yml’, for example in the ‘product_process’ Micro Service.

eureka:
  instance:
    initialStatus: OUT_OF_SERVICE
    instanceEnabledOnit: false

snippet 73

These parameters signal Eureka that the service is initially not available for traffic and is marked as ‘OUT_OF_SERVICE’ in Eureka, which can be seen in the following screenshot.

eureka_out_of_service
picture 17

Now you have tested your Micro Services and believe it is time to make them accessible, so you have to switch from the ‘green’ zone to the ‘red’ zone; how do we do that? Luckily, Eureka provides a REST interface to manipulate the state of the Micro Services it controls, which you can see in the following API definition: Eureka REST API.

With this API we can manipulate the state of every single Micro Service in Eureka, for example with the following command (the command must be executed as an HTTP PUT, which is the reason we use ‘curl’).

curl --noproxy localhost -X PUT http://localhost:8761/eureka/apps/PRODUCT_PROCESS/micro.salgar.org:product_process:8080/status?value=UP

snippet 74

Voila, you updated your Micro Service to a new version with zero downtime….
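The same endpoint also lets you take an instance out of traffic again, for example before deploying the next version:

curl --noproxy localhost -X PUT http://localhost:8761/eureka/apps/PRODUCT_PROCESS/micro.salgar.org:product_process:8080/status?value=OUT_OF_SERVICE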

Netflix Zuul
Until now we allowed direct access to our Micro Services, and actually this is definitely not a good idea. The first reason is security: it is quite risky to give completely open access to our Micro Services, and we should also keep track of who is using our APIs; when the number of clients increases, things get chaotic quite fast (for example, you make a change that breaks the API contract; how are you going to notify your clients if you don’t know who they are?). If we provide a single point of access, we have better control over these topics (I will not explore the subject in this blog, but we can also use OAuth2 for security purposes if we implement this single point of access). The second point is again the topic we discussed in the previous chapter: if we want to implement the red\green pattern, it is better that our clients don’t have direct access to our Micro Services.

Fortunately, Netflix provides with Zuul library again a solution to the problems we discussed above.

Again, the maven plugin ‘spring-boot-maven-plugin’ packs all our dependencies into one single executable jar; we only have to go to the directory ‘micro_tran\support\target’ and execute the command ‘java -jar zuul-1.0-SNAPSHOT.jar’, and now our Micro Services are only accessible over the routes we defined in Zuul, via the ‘localhost:8765’ address.

JBoss Deployment and Startup
Now that all the prerequisites are done, we are ready to deploy and start JBoss.

The artifacts must be copied from the ‘micro_tran\assembler\target\assembler-1.0-SNAPSHOT-package’ directory to our JBoss installation directory, ‘$JBOSS_HOME’.

Now that we have configured JBoss, the modular classloading, the datasources, Netflix Eureka and Zuul, we can start the JBoss Application Server. Calling ‘$JBOSS_HOME\bin\standalone.sh’ (‘standalone.bat’ on Windows) will start it; after a successful start you should see something similar to the following output in the logs.

2016-12-13 13:31:42,162 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "product_v2_rest.war" (runtime-name : "product_v2_rest.war")
2016-12-13 13:31:42,162 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "product_v1_rest.war" (runtime-name : "product_v1_rest.war")
2016-12-13 13:31:42,163 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "product_sar_v2-1.0-SNAPSHOT.sar" (runtime-name : "product_sar_v2-1.0-SNAPSHOT.sar")
2016-12-13 13:31:42,163 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "product_sar_v1-1.0-SNAPSHOT.sar" (runtime-name : "product_sar_v1-1.0-SNAPSHOT.sar")
2016-12-13 13:31:42,164 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "product-process.war" (runtime-name : "product-process.war")
2016-12-13 13:31:42,165 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "order_v2_rest.war" (runtime-name : "order_v2_rest.war")
2016-12-13 13:31:42,167 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "order_v1_rest.war" (runtime-name : "order_v1_rest.war")
2016-12-13 13:31:42,168 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "order_sar_v2-1.0-SNAPSHOT.sar" (runtime-name : "order_sar_v2-1.0-SNAPSHOT.sar")
2016-12-13 13:31:42,169 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "order_sar_v1-1.0-SNAPSHOT.sar" (runtime-name : "order_sar_v1-1.0-SNAPSHOT.sar")
2016-12-13 13:31:42,169 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "micro-ds.xml" (runtime-name : "micro-ds.xml")
2016-12-13 13:31:42,171 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "customer_v2_rest.war" (runtime-name : "customer_v2_rest.war")
2016-12-13 13:31:42,172 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "customer_v1_rest.war" (runtime-name : "customer_v1_rest.war")
2016-12-13 13:31:42,173 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "customer_sar_v2-1.0-SNAPSHOT.sar" (runtime-name : "customer_sar_v2-1.0-SNAPSHOT.sar")
2016-12-13 13:31:42,174 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "customer_sar_v1-1.0-SNAPSHOT.sar" (runtime-name : "customer_sar_v1-1.0-SNAPSHOT.sar")
2016-12-13 13:31:42,329 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://127.0.0.1:9990/management
2016-12-13 13:31:42,329 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://127.0.0.1:9990
2016-12-13 13:31:42,330 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full 10.1.0.Final (WildFly Core 2.2.0.Final) started in 58131ms - Started 1905 of 2231 services (527 services are lazy, passive or on-demand)

snippet 75

Testing
Now that our application has started, we can test it. For shakedown tests, since we know the URLs of our Micro Services, we can access them directly and bypass Zuul (this makes the testing easier, but external clients of our Micro Services should only access them via Zuul). A simple test request might look like the following; I personally use the Firefox plugin HttpRequester for quick testing of REST services. The following request, for example, will create an Order object.

URL:

http://localhost:8080/product-process/saveOrder/v1

snippet 76

Payload:

{"id":null,"name":"Snow","firstName":"John","birthDate":986756767}

snippet 77

In the file ‘test_request.txt’ you will find additional test requests for the other Micro Services. I only show the ones that prove the transactional functionality of the application, but you are free to use the other ones as well.

If Firefox HttpRequester is not an option for you, you can also use ‘curl’; the following is a sample request to ‘saveOrder\v1’….

curl --noproxy localhost -i -X POST -H "Content-Type: application/json" http://localhost:8080/product-process/saveOrder/v1 -d '{"id":null,"name":"Snow","firstName":"John","birthDate":986756767}'

snippet 78

Conclusion:
In the end you will see that with the technologies and paradigms we use here, we can develop reliable transactional micro services without dealing with the complexities of a messaging/streaming system, and concentrate on our business cases instead of technology puzzles. Of course, when performance or other requirements dictate it, we can still fall back to messaging/streaming solutions, but why increase the complexity and deal with technological problems when we can solve our business problems instead?
Appendix
Appendix A
A Hypersonic database is a good solution for local development but not exactly ideal for a production-like environment. First of all we have to download and install the MySQL database, which you can do from this URL.

The installation is quite simple with the help of the wizards; then we have to start the MySQL Server. We have to start the server with administrator rights, otherwise we will see some errors during the startup.

In the installation directory of MySQL, you will find a directory called “bin” and in this directory the file “mysqld.exe”.

We should execute the following command in this directory; this will start the server.

mysqld --console

snippet 79

The installation also includes ‘MySQL Workbench’, with which we can install the database structure for MySQL contained in ‘create_mysql.sql’ (make sure the ‘micro_tran’ schema exists first, e.g. via ‘CREATE SCHEMA micro_tran;’). The script contains the following.

CREATE TABLE micro_tran.ORDER_TA (ID INT PRIMARY KEY AUTO_INCREMENT,  CUSTOMERID INT, COMMITDATE INT(15), STATUS INT);
CREATE TABLE micro_tran.PRODUCT_TA (PRODUCTID INT PRIMARY KEY AUTO_INCREMENT,  NAME VARCHAR(255), QUALITY VARCHAR(255));
CREATE TABLE micro_tran.ORDER_PRODUCT_TA (PRODUCTID INT, ORDERID INT, PRIORITY INT, VOLUME VARCHAR(255));
CREATE TABLE micro_tran.CUSTOMER_TA (ID INT PRIMARY KEY AUTO_INCREMENT,   NAME VARCHAR(255), FIRSTNAME VARCHAR(255), BIRTHDATE INT(12), SEGMENT varchar(255), STATUS INT);

INSERT INTO micro_tran.ORDER_TA (COMMITDATE , STATUS) VALUES (345345345, 5);
INSERT INTO micro_tran.PRODUCT_TA(NAME  ) VALUES('Test');

snippet 80
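
Once the schema is in place, a quick JDBC check can confirm that the database is reachable before we wire it into WildFly. This is my own sketch, not part of the project; it only needs the MySQL Connector/J jar on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;

public class MySqlConnectivityCheck {
    public static void main(String[] args) throws Exception {
        // same URL as the WildFly datasource definition below; replace the password placeholder
        String url = "jdbc:mysql://localhost:3306/micro_tran?useUnicode=true&characterEncoding=utf8";
        try (Connection connection = DriverManager.getConnection(url, "root", "<your-password>")) {
            System.out.println("Connected to: " + connection.getMetaData().getDatabaseProductVersion());
        }
    }
}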

Finally, we have to modify the datasource definition. Note that the password is not hard-coded: ‘${org.salgar.ds.pass}’ is an expression that WildFly resolves at deployment time, typically from a system property passed when the server starts.

<?xml version="1.0" encoding="UTF-8"?>
 
<datasources xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://www.jboss.org/ironjacamar/schema/datasources_1_0.xsd">
  <datasource jndi-name="java:jboss/datasources/microMySqlDS" pool-name="microMySqlDSPool" enabled="true" use-java-context="true">
    <connection-url>jdbc:mysql://localhost:3306/micro_tran?useUnicode=true&amp;characterEncoding=utf8</connection-url>
    <driver>mysql</driver>
    <security>
      <user-name>root</user-name>
      <password>${org.salgar.ds.pass}</password>
    </security>
  </datasource>
  <drivers>
    <driver name="mysql" module="com.mysql.driver"> 
           <driver-class>com.mysql.jdbc.Driver</driver-class>
    </driver>
  </drivers>
</datasources>

snippet 81

and the “persistence.xml” example for MySQL.

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="2.0"
	xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
	<persistence-unit name="PRODUCT_V1_SAR">
		<provider>org.hibernate.ejb.HibernatePersistence</provider>
		<class>org.salgar.product.api.v1.model.Product</class>

		<properties>
			<property name="hibernate.bytecode.use_reflection_optimizer"
				value="false" />
			<property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver" />
			<property name="hibernate.connection.password" value="${org.salgar.ds.pass}" />
			<!-- property name="hibernate.connection.url"
				value="jdbc:h2:data/micro" /-->
			<property name="hibernate.connection.username" value="root" />
			<property name="hibernate.show_sql" value="true" />
			<property name="hibernate.dialect" value="org.hibernate.dialect.MySQLDialect" />
			<property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.JBossAppServerJtaPlatform"/>
		</properties>
	</persistence-unit>
</persistence>

snippet 82

and “applicationContext-dao.xml” to get the datasource for MySQL.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jee="http://www.springframework.org/schema/jee"
	xmlns:context="http://www.springframework.org/schema/context"
	xmlns:tx="http://www.springframework.org/schema/tx"
	xsi:schemaLocation="
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
        http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd">
        
        <jee:jndi-lookup id="datasource" jndi-name="java:jboss/datasources/microMySqlDS" />
        
        <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
        	<property name="jtaDataSource" ref="datasource" />
        </bean>
        
        <!-- bean id="txManager" class="org.springframework.orm.jpa.JpaTransactionManager">
        	<property name="entityManagerFactory" ref="entityManagerFactory" />
        </bean-->
        <bean id="txManager" class="org.springframework.transaction.jta.JtaTransactionManager"/>
        
        <tx:annotation-driven transaction-manager="txManager" />
</beans>

snippet 83

Appendix B
Netflix Eureka, Zuul, and Hystrix use the RabbitMQ message broker to communicate with each other, to prevent the excessive load that synchronous communication would cause. The ‘product-process’ project uses these libraries extensively for the orchestration layer and for this reason also checks the health of RabbitMQ. If RabbitMQ is not installed or not running, ‘product-process’ will mark itself down in Netflix Eureka because of this health check, so we have to install RabbitMQ.
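
Incidentally, Spring Boot’s actuator ships such a RabbitMQ health check out of the box; conceptually it behaves like the simplified sketch below. The probe abstraction and all names are mine, not the actual Spring Boot implementation.

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;

public class RabbitMqHealthIndicatorSketch implements HealthIndicator {

    private final ConnectionProbe probe; // hypothetical helper that tries to open a broker connection

    public RabbitMqHealthIndicatorSketch(ConnectionProbe probe) {
        this.probe = probe;
    }

    @Override
    public Health health() {
        try {
            probe.checkBrokerConnection(); // throws if RabbitMQ is down or not installed
            return Health.up().build();
        } catch (Exception e) {
            // an aggregated DOWN health is what makes Eureka mark 'product-process' as down
            return Health.down(e).build();
        }
    }

    /** Hypothetical probe abstraction, only here to keep the sketch self-contained. */
    public interface ConnectionProbe {
        void checkBrokerConnection() throws Exception;
    }
}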

You can download RabbitMQ from the following URL.

After the installation, you can start RabbitMQ from the ‘sbin’ directory of the installation location with the following command.

rabbitmq-server.bat

snippet 84

Posted in AspectJ, Eureka, Hystrix, JBoss, JSR376, Maven, Netflix, Ribbon, Software Development, Spring, Zuul | 5 Comments

Micro Services and JBoss Fuse/Apache Servicemix

A while ago, I wrote a blog about how we used OSGi for Micro Services with multiple versions in a B2B project.

At the time, I was not so aware of the buzzwords and did not pay much attention to what sort of extra audience it could reach. I have not seen any other article on the internet that expresses how powerful OSGi and JBoss Fuse/Apache Servicemix can be for Micro Service concepts. With this blog I will reference the old blog, but present it from another point of view: Micro Services.

Posted in Software Development | Leave a comment

XText, Domain Specific Language and Spring State Machine

A while ago, after I discovered the existence of the Spring State Machine, I wrote a blog on how to convert a UML State Machine model to a runnable Spring State Machine. After I finished that, I started thinking about creating a Domain Specific Language (DSL) for Spring State Machine, to see whether it would be viable to use instead of the UML model. If you want to develop a Domain Specific Language yourself without doing all the hard work, your starting point should be the Eclipse XText project, an excellent framework for developing your own DSL, even with syntax highlighting for popular IDEs like Eclipse IDE or IntelliJ IDEA. After we have created our DSL, we should be able to generate Java code from this DSL/model, which we will do with another Eclipse framework, Xtend. You can also see this blog as a good example implementation for Spring State Machine.

Content

Introduction:
The basis for this blog is the feasibility study I developed in my previous blog. We will try to model the State Machine that I previously designed via UML, now with our own DSL.

You can see in the below picture the State Machine UML Diagram we used there.

CustomerSearchSM
Picture 1

and the model we create with our Domain Specific Language will look like the following when it is finished.

package org.salgar.swf_statemachine.ssm {
    statemachine CustomerSearchSM initialState WAITING_CUSTOMERSEARCH_START
        control-object {
            attribute customerNumberInternal type=java.lang.String
            attribute flowId type=java.lang.String
            attribute sessionId type=java.lang.String
            attribute customerSearchInputRenderPanels type=java.lang.String
            attribute customerAuthenticatedInternal type=java.lang.Boolean
            attribute customerJoinedInternal type=java.lang.Boolean
            attribute customerInternal type="org.salgar.swf_statemachine.techdemo.domain.Customer"
            attribute customerOrdersInternal type=java.util.List
            attribute actualGuiState type="org.salgar.swf_statemachine.ssm.customersearchsm.guistate.CustomerSearchGuiState"
            attribute findCustomerSlaveSM type="org.springframework.statemachine.StateMachine"
            attribute findOrdersSlaveSM type="org.springframework.statemachine.StateMachine"
        }
        events {
            event onStartSearch
            event onCustomerFound
            event onCustomerAuthenticatedClicked
            event onCustomerJoinedClicked
            event onOrdersLoaded
        }
        states {
            state WAITING_CUSTOMERSEARCH_START {
                transition SearchRunning => CUSTOMERSEARCH_RUNNING {
                    trigger {
                        onStartSearch
                    }
                    action {
                        ProcessSearchStart
                    }
                }
            }
            state CUSTOMERSEARCH_RUNNING {
                transition CustomerFound => CUSTOMER_FOUND {
                    trigger {
                        onCustomerFound
                    }
                    action {
                        ProcessCustomerFound
                    }
                }
            }
            state CUSTOMER_FOUND {
                transition CustomerFound => CUSTOMER_AUTHENTICATED {
                    trigger {
                        onCustomerAuthenticatedClicked
                    }
                    action {
                        ProcessCustomerFound
                    }
                }
            }
            state CUSTOMER_AUTHENTICATED {
                transition CustomerJoined => CUSTOMER_JOINED {
                    trigger {
                        onCustomerJoinedClicked
                    }
                    guard {
                        isOrdersFound
                    }
                    action {
                        ProcessOrdersFoundCustomerJoined
                    }
                }
                transition OrdersLoading => ORDERS_LOADING {
                    trigger {
                        onCustomerJoinedClicked
                    }
                    guard {
                        isOrderSearchRunning
                    }
                    action {
                        ProcessLoadingOrders
                    }
                }
                transition CustomerAuthenticationRemoved => CUSTOMER_FOUND {
                    trigger {
                        onCustomerAuthenticatedClicked
                    }
                    action {
                        ProcessCustomerAuthenticationRemoved
                    }
                }
            }
            state CUSTOMER_JOINED {
                transition CustomerAuthenticationRemovedFromJoined => CUSTOMER_FOUND {
                    trigger {
                        onCustomerAuthenticatedClicked
                    }
                    action {
                        ProcessCustomerAuthenticationRemoved
                    }
                }
                transition CustomerJoinedClicked => CUSTOMER_AUTHENTICATED {
                    trigger {
                        onCustomerJoinedClicked
                    }
                    action {
                        ProcessCustomerJoinRemoved
                    }
                }
            }
            state ORDERS_LOADING {
                transition OrdersLoaded => CUSTOMER_JOINED {
                    trigger {
                        onOrdersLoaded
                    }
                    action {
                        ProcessOrdersLoaded
                    }
                }
                transition CustomerAuthenticationFromOrdersLoadingRemoved => CUSTOMER_FOUND {
                    trigger {
                        onCustomerAuthenticatedClicked
                    }
                    action {
                        ProcessCustomerJoinRemoved
                    }
                }
                transition CustomerJoinRemoved => CUSTOMER_AUTHENTICATED {
                    trigger {
                        onCustomerJoinedClicked
                    }
                    action {
                        ProcessCustomerJoinRemoved
                    }
                }

            }
        }
    statemachine FindCustomerSM initialState NOT_RUNNING
        control-object {
            attribute customerNumber type=java.lang.String
            attribute masterStateMachine type="org.springframework.statemachine.StateMachine"
        }
        events {
            event onStartSearch
            event onCustomerFound
        }
        states {
            state NOT_RUNNING {
                transition SearchStarting => SEARCH_RUNNING {
                    trigger {
                        onStartSearch
                    }
                    action {
                        ProcessSearchStart
                    }
                }
            }
            state SEARCH_RUNNING {
                transition CustomerFound => CUSTOMER_FOUND {
                    trigger {
                        onCustomerFound
                    }
                    action {
                        ProcessCustomerFound
                    }
                }
            }
            state CUSTOMER_FOUND {

            }
        }
    statemachine FindOrdersSM initialState NOT_RUNNING
        control-object {
            attribute customerNumber type=java.lang.String
            attribute masterStateMachine type="org.springframework.statemachine.StateMachine"
            attribute orders type=java.util.List
        }
        events {
            event onOrderSearchRunning
            event onOrdersFound
        }
        states {
            state NOT_RUNNING {
                transition OrderSearchRunning => ORDER_SEARCH_RUNNING {
                    trigger {
                        onOrderSearchRunning
                    }
                    action {
                        ProcessOrdersSearchStart
                    }
                }
            }
            state ORDER_SEARCH_RUNNING {
                transition OrdersFound => ORDERS_FOUND {
                    trigger {
                        onOrdersFound
                    }
                    action {
                        ProcessOrdersFound
                    }
                }
            }
            state ORDERS_FOUND {

            }
        }
}

Snippet 1

I will explain the above snippet in detail in this blog, but it looks simple and elegant, doesn’t it, compared to creating the whole Spring State Machine configuration in Java code?

XText:
XText is a so-called language development framework, which we will use for the creation of our textual domain-specific language (DSL).

Well, to create a Domain Specific Language with XText, we first have to define our grammar file; in our case this file is called ‘StateMachineDsl.xtext’ and looks like the following.

grammar org.salgar.swf_statemachine.xtext.StateMachineDsl with org.eclipse.xtext.common.Terminals

generate stateMachineDsl "http://www.salgar.org/swf_statemachine/xtext/StateMachineDsl"

Model:
	(elements+=Base)*;

PackageDeclaration:
	'package' name = QualifiedName '{'
		(elements += Base)*
	'}';

StateMachine:
	'statemachine' name=ID  'initialState' initialState = [State]
	'control-object' controlObject = ControlObject
	('events' ('{'
		(events += Event)*
	'}'))
	('states' ('{'
		(states += State)*
	'}'))
;

Base: PackageDeclaration | StateMachine;

QualifiedName:
	ID ('.' ID)*;

State:
	'state' name=ID
	'{'
		(transitions += Transition)*
	'}';

Transition:
	'transition' name=ID
	'=>' target = [State]
	'{'
		('trigger' '{' trigger = [Event] '}')
		('guard' '{' guard = Guard '}')?
		('action' '{' action = Action '}')?
	'}';

Event:
	'event' name=ID;

Guard: {Guard}
	name=ID;

Action: {Action}
	name=ID;

ControlObject:
	'{'
		(attributes+=ControlObjectAttribute)*
	'}'
;

ControlObjectAttribute:
	'attribute' name=ID
	('type' '=' (
			(type=AttributeBase)
		)
	)
;

AttributeBase: ObjectType | SimpleType;

SimpleType: type=InternalType;

ObjectType: type=STRING;

enum InternalType:
	NONE = 'NONE' | BOOLEAN = 'java.lang.Boolean' | INTEGER = 'java.lang.Integer' | LONG = 'java.lang.Long' | FLOAT = 'java.lang.Float' | DECIMAL = 'java.lang.Decimal' |
	STRING = 'java.lang.String' | DATE = 'java.lang.Date' | DATETIME = 'java.lang.DateTime' | TIME = 'java.lang.Time' | LIST = 'java.util.List'
;

Snippet 2

Not too much for creating our own programming language, is it? 🙂

Let’s dissect it into small parts.

grammar org.salgar.swf_statemachine.xtext.StateMachineDsl with org.eclipse.xtext.common.Terminals

Snippet 3

This is the definition of our grammar; all the artifacts that will be created via Maven with XText will be linked under this namespace and artifact name. Terminals is the base grammar defining basic structures that are valid for every DSL, like ID, STRING, etc…

generate stateMachineDsl "http://www.salgar.org/swf_statemachine/xtext/StateMachineDsl"

Snippet 4

Here, we couple the ‘stateMachineDsl’ Domain Specific Language with the namespace http://www.salgar.org/swf_statemachine/xtext/StateMachineDsl; all models that use this DSL should reference it.

Model:
	(elements+=Base)*;

Snippet 5

Here, we define the root element of our DSL (it doesn’t have to be called ‘Model’, it can be anything). The element ‘Model’ can have many child nodes of type Base, and all of them will be added to the ‘elements’ collection. ‘*’ states that there can be multiple occurrences of Base; ‘+=’ states that every discovered Base will be added to the ‘elements’ collection.

PackageDeclaration:
	'package' name = QualifiedName '{'
		(elements += Base)*
	'}';

Snippet 6

This adds packaging support to our DSL, so the artifacts that we create will not have unwanted name clashes.

QualifiedName:
	ID ('.' ID)*
;

Snippet 7

QualifiedName is a pattern defining qualified name structures.

StateMachine:
	'statemachine' name=ID  'initialState' initialState = [State]
	'control-object' controlObject = ControlObject
	('events' ('{'
		(events += Event)*
	'}'))
	('states' ('{'
		(states += State)*
	'}'))
;

Snippet 8

This is the core of our DSL, since we want to create Spring State Machines with it: the definition of ‘StateMachine’. It states that we will have a keyword ‘statemachine’ in our model, and the word next to it will be its name; then comes the ‘initialState’ keyword, which defines the initial state of our state machine. Here you see a fundamental concept of XText: we can introduce objects by declaration or by reference. If I use the notation ‘initialState = State’, a new instance of a State object is created at this point; with the notation ‘initialState = [State]’ I instead reference a State object that is declared somewhere else, as you will see quite soon.

Then we define our control object. If you have read my previous blogs, I defend the idea that a state machine needs a special memory area that is strictly controlled by it. Spring State Machine uses a HashMap, but this is a little loose for my taste; I prefer an object to which only the state machine has write access and everything else has read-only access (an object that can only be manipulated by the action objects on transitions or states). A small Java sketch of this idea follows Snippet 12 below. In the DSL grammar, in the vicinity of the ‘control-object’ keyword there will be a ‘{‘ and a ‘}‘, and all the attribute definitions inside belong to the control object. Attribute definitions look like the following.

ControlObjectAttribute:
	'attribute' name=ID
	('type' '=' (
			(type=AttributeBase)
		)
	)
;

Snippet 9

The ‘attribute’ keyword has a name property next to it, and a type attribute which defines its type in the Java code. The type can be an enumeration for base types

enum InternalType:
	NONE = 'NONE' | BOOLEAN = 'java.lang.Boolean' | INTEGER = 'java.lang.Integer' | LONG = 'java.lang.Long' | FLOAT = 'java.lang.Float' | DECIMAL = 'java.lang.Decimal' |
	STRING = 'java.lang.String' | DATE = 'java.lang.Date' | DATETIME = 'java.lang.DateTime' | TIME = 'java.lang.Time' | LIST = 'java.util.List'
;

Snippet 10

or a String field for complex types (like ‘org.salgar.swf_statemachine.customersearchsm.GuiState’).

ObjectType: type=STRING;

Snippet 11

Both are represented in AttributeBase; by the way, ‘|’ signifies that AttributeBase is either an ObjectType or a SimpleType.

AttributeBase: ObjectType | SimpleType;

SimpleType: type=InternalType;

ObjectType: type=STRING;

Snippet 12
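
Before returning to the grammar, here is the Java sketch of the control-object idea promised above. It is purely illustrative (the names are invented); in the generated code this role falls to the ‘Abstract…ControlObject’ classes.

// CustomerSearchView.java - read-only view handed to everybody except the state machine
public interface CustomerSearchView {
    String getCustomerNumber();
    boolean isCustomerAuthenticated();
}

// CustomerSearchControlObject.java - writable implementation; only the state machine's
// transition actions hold a reference to this concrete type
public class CustomerSearchControlObject implements CustomerSearchView {
    private String customerNumber;
    private boolean customerAuthenticated;

    @Override public String getCustomerNumber() { return customerNumber; }
    @Override public boolean isCustomerAuthenticated() { return customerAuthenticated; }

    // mutators are only reachable through the concrete type, i.e. from the actions
    public void setCustomerNumber(String customerNumber) { this.customerNumber = customerNumber; }
    public void setCustomerAuthenticated(boolean customerAuthenticated) { this.customerAuthenticated = customerAuthenticated; }
}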

So, looking back to where we left the definition of the state machines, we are at events. When the DSL parser encounters the ‘events’ keyword under a state machine, it looks at the area inside the braces ‘{‘, ‘}’ and adds every event object it finds to the events collection.

The definition of Event looks like this.

Event:
	'event' name=ID;

Snippet 13

So the keyword ‘event’ defines the event, and the word next to it its name.

Then we have the ‘states’: when the DSL parser encounters the ‘states’ keyword, it looks at the area inside the braces ‘{‘, ‘}’ and adds every State object to the states collection.

The definition of State looks like the following.

State:
	'state' name=ID
	'{'
		(transitions += Transition)*
	'}'
;

Snippet 14

So when the DSL parser encounters the ‘state’ keyword, the next word defines the State name, and the area inside the braces ‘{‘, ‘}’ defines the transitions, which are added to the transitions collection.

And the transitions look like the following.

Transition:
	'transition' name=ID
	'=>' target = [State]
	'{'
		('trigger' '{' trigger = [Event] '}')
		('guard' '{' guard = Guard '}')?
		('action' '{' action = Action '}')?
	'}'
;

Snippet 15

The ‘transition’ keyword defines the transition object; the next word is the name of the transition; then we have the ‘=>’ operator, which defines the target state of the transition. What we have to pay attention to here is the ‘[State]’: since we are using ‘[‘, ‘]’, we are not creating a new instance of a State object but referencing an existing one. When you model a state machine and reference a State here that is not defined in the state machine, you will get compile-time problems with Maven.

Then we have the definition of the triggers for the transition; please note that we are again using ‘[‘, ‘]’, which means we reference an Event object already declared in the events section of the state machine.

Finally, we have the Guard and Action objects.

Guard: {Guard}
	name=ID
;

Action: {Action}
	name=ID
;

Snippet 16

These complete the definition of our Domain Specific Language and, as you can see, it is enough to define the state machine I designed in my previous blog for the feasibility study.

Now XText must create the necessary artifacts for us via Maven; that happens inside an MWE2 workflow, ‘GenerateStateMachineDsl.mwe2’, in the project ‘swf_statemachine_domain_specific_language’, which looks like the following.

module org.salgar.swf_statemachine.xtext.GenerateStateMachineDsl

import org.eclipse.emf.mwe.utils.*
import org.eclipse.xtext.xtext.generator.*
import org.eclipse.xtext.xtext.generator.model.project.*

var rootPath = "../"

Workflow {


	bean =  StandardLanguage : languageSetup {
		name = "org.salgar.swf_statemachine.xtext.StateMachineDsl"
		fileExtensions = "ssm"

		serializer = {
			generateStub = false
		}
		validator = {
			// composedCheck = "org.eclipse.xtext.validation.NamesAreUniqueValidator"
		}
	}
	
	component = XtextGenerator {
		configuration = {
			project = StandardProjectConfig {
				baseName = "swf_statemachine_domain_specific_language"
				rootPath = rootPath
				ideaPlugin = {
					enabled = false
				}
				mavenLayout = true
			}
			code = {
				encoding = "UTF-8"
				fileHeader = "/*\n * generated by Xtext \${version}\n */"
			}
		}
		language = languageSetup
	}
}

Snippet 17

This is the standard configuration created via the IntelliJ or Eclipse plugin, which I will explain in the Tools section. I would like to draw your attention to the ‘fileExtensions = “ssm”‘ notation: this tells XText/Xtend that model files written in your Domain Specific Language must have the ‘.ssm’ extension, and it will search the whole classpath for files with this extension. With the ‘StandardLanguage’ element in the workflow, we configure XText via convention over configuration and it is able to find all the necessary artifacts.

Xtend:
Now that we have defined our DSL, you may ask: what now? We have to create the Spring State Machine configuration; for that we have to generate Java code, and for that we use the Eclipse Xtend framework.

Xtend is a statically-typed programming language that compiles to JVM bytecode and is extremely compatible with Java. I will not place all the Xtend code here, as a single block would be too complex; instead I will go through small snippets and explain the functionality.

But first, we have to give Xtend the ability to understand our Domain Specific Language, and that happens in an MWE2 workflow. I will explain this in more detail in the Project Structure section, but in short we will have two Maven projects: one responsible for converting our grammar into a Domain Specific Language via MWE2 and XText components, and another Maven project with an MWE2 workflow to create the Java code.

When the first project runs, it creates a ‘StateMachineDslStandaloneSetup’ artifact that configures Maven to understand our language, which looks like the following.

package org.salgar.swf_statemachine.xtext;

import com.google.inject.Guice;
import com.google.inject.Injector;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.xtext.ISetup;
import org.eclipse.xtext.common.TerminalsStandaloneSetup;
import org.eclipse.xtext.resource.IResourceFactory;
import org.eclipse.xtext.resource.IResourceServiceProvider;
import org.salgar.swf_statemachine.xtext.stateMachineDsl.StateMachineDslPackage;

@SuppressWarnings("all")
public class StateMachineDslStandaloneSetupGenerated implements ISetup {

	@Override
	public Injector createInjectorAndDoEMFRegistration() {
		TerminalsStandaloneSetup.doSetup();

		Injector injector = createInjector();
		register(injector);
		return injector;
	}
	
	public Injector createInjector() {
		return Guice.createInjector(new StateMachineDslRuntimeModule());
	}
	
	public void register(Injector injector) {
		if (!EPackage.Registry.INSTANCE.containsKey("http://www.salgar.org/swf_statemachine/xtext/StateMachineDsl")) {
			EPackage.Registry.INSTANCE.put("http://www.salgar.org/swf_statemachine/xtext/StateMachineDsl", StateMachineDslPackage.eINSTANCE);
		}
		IResourceFactory resourceFactory = injector.getInstance(IResourceFactory.class);
		IResourceServiceProvider serviceProvider = injector.getInstance(IResourceServiceProvider.class);
		
		Resource.Factory.Registry.INSTANCE.getExtensionToFactoryMap().put("ssm", resourceFactory);
		IResourceServiceProvider.Registry.INSTANCE.getExtensionToFactoryMap().put("ssm", serviceProvider);
	}
}

Snippet 18

This introduces our namespace http://www.salgar.org/swf_statemachine/xtext/StateMachineDsl to the EMF Ecore package registry and binds the ‘ssm’ file extension to our resource factory. This is critical for Xtend to understand our DSL.

The file ‘StateMachineDslGenerator.xtend’ is responsible for creating our Java code.

So let’s start looking at the small snippets….

class StateMachineDslGenerator extends AbstractGenerator {

	@Inject extension IQualifiedNameProvider

Snippet 19

Do you remember the QualifiedName we used in our language definition? This injection helps us obtain fully qualified names for our code generation (like package names and so on); for example, for the package ‘org.salgar.swf_statemachine.ssm’, ‘fullyQualifiedName.toString("/")’ yields ‘org/salgar/swf_statemachine/ssm’.

override void doGenerate(Resource resource, IFileSystemAccess2 fsa, IGeneratorContext context) {
		fsa.generateFile(resource.getAllContents.findFirst(object | object instanceof PackageDeclaration).fullyQualifiedName.toString("/") + "/enumeration/StateMachineEnumerationImpl.java",
			resource.allContents.toIterable.filter(StateMachine).complileStateMachineEnumeration(resource.getAllContents.findFirst(object | object instanceof PackageDeclaration) as PackageDeclaration))

		for(e : resource.allContents.toIterable.filter(StateMachine)) {
			fsa.generateFile(e.eContainer.fullyQualifiedName.toString("/") + "/" + e.name.toLowerCase + "/enumeration/state/" + e.name + "_StateEnumerationImpl.java",
			e.eAllContents.toIterable.filter(State).compileState(e))

			fsa.generateFile(e.eContainer.fullyQualifiedName.toString("/") + "/" + e.name.toLowerCase + "/enumeration/event/" + e.name + "_EventEnumerationImpl.java",
			e.eAllContents.toIterable.filter(Event).compileEvent(e))

			fsa.generateFile(e.eContainer.fullyQualifiedName.toString("/") + "/" + e.name.toLowerCase() + "/configuration/" + e.name + "ControlObjectLocator.java",
			e.compileControlObjectLocator)

			fsa.generateFile(e.eContainer.fullyQualifiedName.toString("/") + "/" + e.name.toLowerCase() + "/configuration/" + e.name + "GuardContainer.java",
			e.compileGuardContainer)

			for(state : e.states) {
				for(transition : state.transitions) {
					if(transition.guard != null) {
						fsa.generateFile(e.eContainer.fullyQualifiedName.toString("/") + "/" + e.name.toLowerCase() + "/configuration/" + state.name.toLowerCase() + "/guard/" + state.name + "___" + transition.target.name + "_" + transition.name + "_" + transition.guard.name + "_guard.java",
						e.compileGuard(state, transition))
					}
					if(transition.action != null) {
						fsa.generateFile(e.eContainer.fullyQualifiedName.toString("/") + "/" + e.name.toLowerCase() + "/configuration/" + state.name.toLowerCase() + "/action/" + state.name + "___" + transition.target.name + "_" + transition.name + "_" + transition.action.name + "_action.java",
						e.compileAction(state, transition))
					}
				}
			}

			fsa.generateFile(e.eContainer.fullyQualifiedName.toString("/") + "/" + e.name.toLowerCase() + "/configuration/" + e.name + "ActionContainer.java",
			e.compileActionContainer)

			fsa.generateFile(e.eContainer.fullyQualifiedName.toString("/") + "/" + e.name.toLowerCase() + "/controlobject/Abstract" + e.name + "ControlObject.java",
			e.compileControlObject)

			fsa.generateFile(e.eContainer.fullyQualifiedName.toString("/") + "/" + e.name.toLowerCase() + "/configuration/" + e.name + "Configuration.java",
			e.compileStateMachine)
		}
	}

Snippet 20

first part of this snippet

fsa.generateFile(resource.getAllContents.findFirst(object | object instanceof PackageDeclaration).fullyQualifiedName.toString("/") + "/enumeration/StateMachineEnumerationImpl.java",
	resource.allContents.toIterable.filter(StateMachine).complileStateMachineEnumeration(resource.getAllContents.findFirst(object | object instanceof PackageDeclaration) as PackageDeclaration))

Snippet 21

is a little bit specific to this feasibility study: you will notice that I only process the first package declaration for the package creation, but the grammar declaration via XText does not say anywhere that we will have only one single package declaration. We could have several; if you need to process more than one package, you can take this ‘as an exercise to the reader’ and try to improve it.

This snippet selects the first package in the model and creates an enumeration containing all the state machines in the model (careful eyes will notice that Xtend supports lambdas: ‘findFirst(object | object instanceof PackageDeclaration)’) by calling the ‘complileStateMachineEnumeration’ method.

The rest of the methods use the same pattern: we select state machines, states, events, transitions, guards, actions and control objects, and generate them via the methods ‘compileState’, ‘compileEvent’, ‘compileControlObjectLocator’, ‘compileGuardContainer’, ‘compileGuard’, ‘compileAction’, ‘compileActionContainer’, ‘compileControlObject’ and ‘compileStateMachine’.

I will explain here the core method ‘compileStateMachine’; all the others are repetitions of the same concept.

If you would like to understand why I configured some elements in Spring State Machine the way I did, you should read my previous blog; it explains the motivations behind it.

       def compileStateMachine(StateMachine e) '''
		package «e.eContainer.fullyQualifiedName».«e.name.toLowerCase()».configuration;

		import org.salgar.statemachine.domain.ControlObject;
		import «e.eContainer.fullyQualifiedName».«e.name.toLowerCase()».controlobject.Abstract«e.name»ControlObject;
		import «e.eContainer.fullyQualifiedName».«e.name.toLowerCase()».enumeration.event.«e.name»_EventEnumerationImpl;
		import «e.eContainer.fullyQualifiedName».«e.name.toLowerCase()».enumeration.state.«e.name»_StateEnumerationImpl;

		import org.springframework.beans.factory.annotation.Autowired;
		import org.springframework.context.annotation.Configuration;
		import org.springframework.context.annotation.Bean;
		import org.springframework.messaging.Message;
		import org.springframework.statemachine.config.EnableStateMachineFactory;
		import org.springframework.statemachine.config.EnumStateMachineConfigurerAdapter;
		import org.springframework.statemachine.config.builders.StateMachineConfigurationConfigurer;
		import org.springframework.statemachine.config.builders.StateMachineTransitionConfigurer;
		import org.springframework.statemachine.config.builders.StateMachineStateConfigurer;
		import org.springframework.statemachine.StateContext;
		import org.springframework.statemachine.action.Action;
		import org.springframework.statemachine.listener.StateMachineListener;
		import org.springframework.statemachine.listener.StateMachineListenerAdapter;
		import org.springframework.statemachine.state.State;

		import java.util.EnumSet;

		import org.apache.log4j.Logger;

		@Configuration
		@EnableStateMachineFactory(name = "«e.name»")
		public class «e.name»Configuration extends
			EnumStateMachineConfigurerAdapter<«e.name»_StateEnumerationImpl, «e.name»_EventEnumerationImpl> {
			private static final Logger LOG = Logger.getLogger(«e.name»Configuration.class);

			@Autowired
			private «e.name»ActionContainer «WordUtils.uncapitalize(e.name)»ActionContainer;

			@Autowired
			private «e.name»GuardContainer «WordUtils.uncapitalize(e.name)»GuardContainer;

			@Autowired
			private «e.name»ControlObjectLocator controlObjectLocator;

			@Override
			public void configure(
				StateMachineConfigurationConfigurer<«e.name»_StateEnumerationImpl, «e.name»_EventEnumerationImpl> config)
						throws Exception {
				config.withConfiguration().listener(listener());
			}

			@Override
			public void configure(
				StateMachineStateConfigurer<«e.name»_StateEnumerationImpl, «e.name»_EventEnumerationImpl> states)
						throws Exception {
				states.withStates()
					.initial(«e.name»_StateEnumerationImpl.«e.initialState.name»,
							initialState«e.name»Action())
					.states(EnumSet.allOf(«e.name»_StateEnumerationImpl.class));
		}

		@Override
		public void configure(
				StateMachineTransitionConfigurer<«e.name»_StateEnumerationImpl, «e.name»_EventEnumerationImpl> transitions)
						throws Exception {
			transitions
				«FOR state : e.states SEPARATOR '.and()'»
					//STATE - «state.name»
					«FOR transition : state.transitions SEPARATOR '.and()'»
						//TRANSITION - «transition.name»
						.withExternal().source(«e.name»_StateEnumerationImpl.«state.name»)
						.target(«e.name»_StateEnumerationImpl.«transition.target.name»)
						.event(«e.name»_EventEnumerationImpl.«transition.trigger.name»)
						«IF transition.guard != null»
							.guard(«WordUtils.uncapitalize(e.name)»GuardContainer
							.get«state.name»___«transition.target.name»_«transition.name»_«transition.guard.name»_guard()
							.«state.name»_«transition.target.name»_«transition.name»_«transition.guard.name»_guard())
						«ENDIF»
						«IF transition.action != null»
							.action(«WordUtils.uncapitalize(e.name)»ActionContainer
							.get«state.name»___«transition.target.name»_«transition.name»_«transition.action.name»_action()
							.«state.name»_«transition.target.name»_«transition.name»_«transition.action.name»_action())
						«ENDIF»
					«ENDFOR»
				«ENDFOR»;
		}


			public StateMachineListener<«e.name»_StateEnumerationImpl, «e.name»_EventEnumerationImpl> listener() {
				return new StateMachineListenerAdapter<«e.name»_StateEnumerationImpl, «e.name»_EventEnumerationImpl>() {
					@Override
					public void stateChanged(
							State<«e.name»_StateEnumerationImpl, «e.name»_EventEnumerationImpl> from,
							State<«e.name»_StateEnumerationImpl, «e.name»_EventEnumerationImpl> to) {
						LOG.info("State change to " + to.getId());
					}

					@Override
					public void eventNotAccepted(Message<«e.name»_EventEnumerationImpl> event) {
						LOG.warn("The event " + event.toString() + " is not accepted!");
					}
				};
			}

			@Bean
			public Action<«e.name»_StateEnumerationImpl, «e.name»_EventEnumerationImpl> initialState«e.name»Action() {
				return new Action<«e.name»_StateEnumerationImpl, «e.name»_EventEnumerationImpl>() {
					@Override
					public void execute(
							StateContext<«e.name»_StateEnumerationImpl, «e.name»_EventEnumerationImpl> context) {
						Abstract«e.name»ControlObject controlObject = controlObjectLocator.getControlObject();
						((ControlObject) controlObject).resetStateMachine();
						context.getExtendedState().getVariables().put("«e.name»ControlObject", controlObject);
					}
				};
			}
		}
	'''

Snippet 22

Actually, it is nothing more than taking the StateMachine that we defined in our Domain Specific Language, reading some properties and creating Java code from it: State Machine enumerations, State enumerations, Event enumerations, iterating over transitions, guards, actions.

The Java code generated with Xtend will look like this.

package org.salgar.swf_statemachine.ssm.customersearchsm.configuration;

import org.salgar.statemachine.domain.ControlObject;
import org.salgar.swf_statemachine.ssm.customersearchsm.controlobject.AbstractCustomerSearchSMControlObject;
import org.salgar.swf_statemachine.ssm.customersearchsm.enumeration.event.CustomerSearchSM_EventEnumerationImpl;
import org.salgar.swf_statemachine.ssm.customersearchsm.enumeration.state.CustomerSearchSM_StateEnumerationImpl;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Bean;
import org.springframework.messaging.Message;
import org.springframework.statemachine.config.EnableStateMachineFactory;
import org.springframework.statemachine.config.EnumStateMachineConfigurerAdapter;
import org.springframework.statemachine.config.builders.StateMachineConfigurationConfigurer;
import org.springframework.statemachine.config.builders.StateMachineTransitionConfigurer;
import org.springframework.statemachine.config.builders.StateMachineStateConfigurer;
import org.springframework.statemachine.StateContext;
import org.springframework.statemachine.action.Action;
import org.springframework.statemachine.listener.StateMachineListener;
import org.springframework.statemachine.listener.StateMachineListenerAdapter;
import org.springframework.statemachine.state.State;

import java.util.EnumSet;

import org.apache.log4j.Logger;

@Configuration
@EnableStateMachineFactory(name = "CustomerSearchSM")
public class CustomerSearchSMConfiguration extends
	EnumStateMachineConfigurerAdapter<CustomerSearchSM_StateEnumerationImpl, CustomerSearchSM_EventEnumerationImpl> {
	private static final Logger LOG = Logger.getLogger(CustomerSearchSMConfiguration.class);

	@Autowired
	private CustomerSearchSMActionContainer customerSearchSMActionContainer;

	@Autowired
	private CustomerSearchSMGuardContainer customerSearchSMGuardContainer;

	@Autowired
	private CustomerSearchSMControlObjectLocator controlObjectLocator;

	@Override
	public void configure(
		StateMachineConfigurationConfigurer<CustomerSearchSM_StateEnumerationImpl, CustomerSearchSM_EventEnumerationImpl> config)
				throws Exception {
		config.withConfiguration().listener(listener());
	}

	@Override
	public void configure(
		StateMachineStateConfigurer<CustomerSearchSM_StateEnumerationImpl, CustomerSearchSM_EventEnumerationImpl> states)
				throws Exception {
		states.withStates()
			.initial(CustomerSearchSM_StateEnumerationImpl.WAITING_CUSTOMERSEARCH_START,
					initialStateCustomerSearchSMAction())
			.states(EnumSet.allOf(CustomerSearchSM_StateEnumerationImpl.class));
}

@Override
public void configure(
		StateMachineTransitionConfigurer<CustomerSearchSM_StateEnumerationImpl, CustomerSearchSM_EventEnumerationImpl> transitions)
				throws Exception {
	transitions
		//STATE - WAITING_CUSTOMERSEARCH_START
		//TRANSITION - SearchRunning
		.withExternal().source(CustomerSearchSM_StateEnumerationImpl.WAITING_CUSTOMERSEARCH_START)
		.target(CustomerSearchSM_StateEnumerationImpl.CUSTOMERSEARCH_RUNNING)
		.event(CustomerSearchSM_EventEnumerationImpl.onStartSearch)
		.action(customerSearchSMActionContainer
		.getWAITING_CUSTOMERSEARCH_START___CUSTOMERSEARCH_RUNNING_SearchRunning_ProcessSearchStart_action()
		.WAITING_CUSTOMERSEARCH_START_CUSTOMERSEARCH_RUNNING_SearchRunning_ProcessSearchStart_action()).and()
						//STATE - CUSTOMERSEARCH_RUNNING
		//TRANSITION - CustomerFound
		.withExternal().source(CustomerSearchSM_StateEnumerationImpl.CUSTOMERSEARCH_RUNNING)
		.target(CustomerSearchSM_StateEnumerationImpl.CUSTOMER_FOUND)
		.event(CustomerSearchSM_EventEnumerationImpl.onCustomerFound)
		.action(customerSearchSMActionContainer
		.getCUSTOMERSEARCH_RUNNING___CUSTOMER_FOUND_CustomerFound_ProcessCustomerFound_action()
		.CUSTOMERSEARCH_RUNNING_CUSTOMER_FOUND_CustomerFound_ProcessCustomerFound_action()).and()
						//STATE - CUSTOMER_FOUND
		//TRANSITION - CustomerFound
		.withExternal().source(CustomerSearchSM_StateEnumerationImpl.CUSTOMER_FOUND)
		.target(CustomerSearchSM_StateEnumerationImpl.CUSTOMER_AUTHENTICATED)
		.event(CustomerSearchSM_EventEnumerationImpl.onCustomerAuthenticatedClicked)
		.action(customerSearchSMActionContainer
		.getCUSTOMER_FOUND___CUSTOMER_AUTHENTICATED_CustomerFound_ProcessCustomerFound_action()
		.CUSTOMER_FOUND_CUSTOMER_AUTHENTICATED_CustomerFound_ProcessCustomerFound_action()).and()
						//STATE - CUSTOMER_AUTHENTICATED
		//TRANSITION - CustomerJoined
		.withExternal().source(CustomerSearchSM_StateEnumerationImpl.CUSTOMER_AUTHENTICATED)
		.target(CustomerSearchSM_StateEnumerationImpl.CUSTOMER_JOINED)
		.event(CustomerSearchSM_EventEnumerationImpl.onCustomerJoinedClicked)
		.guard(customerSearchSMGuardContainer
		.getCUSTOMER_AUTHENTICATED___CUSTOMER_JOINED_CustomerJoined_isOrdersFound_guard()
		.CUSTOMER_AUTHENTICATED_CUSTOMER_JOINED_CustomerJoined_isOrdersFound_guard())
		.action(customerSearchSMActionContainer
		.getCUSTOMER_AUTHENTICATED___CUSTOMER_JOINED_CustomerJoined_ProcessOrdersFoundCustomerJoined_action()
		.CUSTOMER_AUTHENTICATED_CUSTOMER_JOINED_CustomerJoined_ProcessOrdersFoundCustomerJoined_action()).and()
		//TRANSITION - OrdersLoading
		.withExternal().source(CustomerSearchSM_StateEnumerationImpl.CUSTOMER_AUTHENTICATED)
		.target(CustomerSearchSM_StateEnumerationImpl.ORDERS_LOADING)
		.event(CustomerSearchSM_EventEnumerationImpl.onCustomerJoinedClicked)
		.guard(customerSearchSMGuardContainer
		.getCUSTOMER_AUTHENTICATED___ORDERS_LOADING_OrdersLoading_isOrderSearchRunning_guard()
		.CUSTOMER_AUTHENTICATED_ORDERS_LOADING_OrdersLoading_isOrderSearchRunning_guard())
		.action(customerSearchSMActionContainer
		.getCUSTOMER_AUTHENTICATED___ORDERS_LOADING_OrdersLoading_ProcessLoadingOrders_action()
		.CUSTOMER_AUTHENTICATED_ORDERS_LOADING_OrdersLoading_ProcessLoadingOrders_action()).and()
		//TRANSITION - CustomerAuthenticationRemoved
		.withExternal().source(CustomerSearchSM_StateEnumerationImpl.CUSTOMER_AUTHENTICATED)
		.target(CustomerSearchSM_StateEnumerationImpl.CUSTOMER_FOUND)
		.event(CustomerSearchSM_EventEnumerationImpl.onCustomerAuthenticatedClicked)
		.action(customerSearchSMActionContainer
		.getCUSTOMER_AUTHENTICATED___CUSTOMER_FOUND_CustomerAuthenticationRemoved_ProcessCustomerAuthenticationRemoved_action()
		.CUSTOMER_AUTHENTICATED_CUSTOMER_FOUND_CustomerAuthenticationRemoved_ProcessCustomerAuthenticationRemoved_action()).and()
						//STATE - CUSTOMER_JOINED
		//TRANSITION - CustomerAuthenticationRemovedFromJoined
		.withExternal().source(CustomerSearchSM_StateEnumerationImpl.CUSTOMER_JOINED)
		.target(CustomerSearchSM_StateEnumerationImpl.CUSTOMER_FOUND)
		.event(CustomerSearchSM_EventEnumerationImpl.onCustomerAuthenticatedClicked)
		.action(customerSearchSMActionContainer
		.getCUSTOMER_JOINED___CUSTOMER_FOUND_CustomerAuthenticationRemovedFromJoined_ProcessCustomerAuthenticationRemoved_action()
		.CUSTOMER_JOINED_CUSTOMER_FOUND_CustomerAuthenticationRemovedFromJoined_ProcessCustomerAuthenticationRemoved_action()).and()
		//TRANSITION - CustomerJoinedClicked
		.withExternal().source(CustomerSearchSM_StateEnumerationImpl.CUSTOMER_JOINED)
		.target(CustomerSearchSM_StateEnumerationImpl.CUSTOMER_AUTHENTICATED)
		.event(CustomerSearchSM_EventEnumerationImpl.onCustomerJoinedClicked)
		.action(customerSearchSMActionContainer
		.getCUSTOMER_JOINED___CUSTOMER_AUTHENTICATED_CustomerJoinedClicked_ProcessCustomerJoinRemoved_action()
		.CUSTOMER_JOINED_CUSTOMER_AUTHENTICATED_CustomerJoinedClicked_ProcessCustomerJoinRemoved_action()).and()
						//STATE - ORDERS_LOADING
		//TRANSITION - OrdersLoaded
		.withExternal().source(CustomerSearchSM_StateEnumerationImpl.ORDERS_LOADING)
		.target(CustomerSearchSM_StateEnumerationImpl.CUSTOMER_JOINED)
		.event(CustomerSearchSM_EventEnumerationImpl.onOrdersLoaded)
		.action(customerSearchSMActionContainer
		.getORDERS_LOADING___CUSTOMER_JOINED_OrdersLoaded_ProcessOrdersLoaded_action()
		.ORDERS_LOADING_CUSTOMER_JOINED_OrdersLoaded_ProcessOrdersLoaded_action()).and()
		//TRANSITION - CustomerAuthenticationFromOrdersLoadingRemoved
		.withExternal().source(CustomerSearchSM_StateEnumerationImpl.ORDERS_LOADING)
		.target(CustomerSearchSM_StateEnumerationImpl.CUSTOMER_FOUND)
		.event(CustomerSearchSM_EventEnumerationImpl.onCustomerAuthenticatedClicked)
		.action(customerSearchSMActionContainer
		.getORDERS_LOADING___CUSTOMER_FOUND_CustomerAuthenticationFromOrdersLoadingRemoved_ProcessCustomerJoinRemoved_action()
		.ORDERS_LOADING_CUSTOMER_FOUND_CustomerAuthenticationFromOrdersLoadingRemoved_ProcessCustomerJoinRemoved_action()).and()
		//TRANSITION - CustomerJoinRemoved
		.withExternal().source(CustomerSearchSM_StateEnumerationImpl.ORDERS_LOADING)
		.target(CustomerSearchSM_StateEnumerationImpl.CUSTOMER_AUTHENTICATED)
		.event(CustomerSearchSM_EventEnumerationImpl.onCustomerJoinedClicked)
		.action(customerSearchSMActionContainer
		.getORDERS_LOADING___CUSTOMER_AUTHENTICATED_CustomerJoinRemoved_ProcessCustomerJoinRemoved_action()
		.ORDERS_LOADING_CUSTOMER_AUTHENTICATED_CustomerJoinRemoved_ProcessCustomerJoinRemoved_action())
				;
}


	public StateMachineListener<CustomerSearchSM_StateEnumerationImpl, CustomerSearchSM_EventEnumerationImpl> listener() {
		return new StateMachineListenerAdapter<CustomerSearchSM_StateEnumerationImpl, CustomerSearchSM_EventEnumerationImpl>() {
			@Override
			public void stateChanged(
					State<CustomerSearchSM_StateEnumerationImpl, CustomerSearchSM_EventEnumerationImpl> from,
					State<CustomerSearchSM_StateEnumerationImpl, CustomerSearchSM_EventEnumerationImpl> to) {
				LOG.info("State change to " + to.getId());
			}

			@Override
			public void eventNotAccepted(Message<CustomerSearchSM_EventEnumerationImpl> event) {
				LOG.warn("The event " + event.toString() + " is not accepted!");
			}
		};
	}

	@Bean
	public Action<CustomerSearchSM_StateEnumerationImpl, CustomerSearchSM_EventEnumerationImpl> initialStateCustomerSearchSMAction() {
		return new Action<CustomerSearchSM_StateEnumerationImpl, CustomerSearchSM_EventEnumerationImpl>() {
			@Override
			public void execute(
					StateContext<CustomerSearchSM_StateEnumerationImpl, CustomerSearchSM_EventEnumerationImpl> context) {
				AbstractCustomerSearchSMControlObject controlObject = controlObjectLocator.getControlObject();
				((ControlObject) controlObject).resetStateMachine();
				context.getExtendedState().getVariables().put("CustomerSearchSMControlObject", controlObject);
			}
		};
	}
}
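
Before we look at the project structure, here is a short sketch of how such a generated configuration is typically consumed with the Spring State Machine API. The qualifier matches the @EnableStateMachineFactory name above; the surrounding component is mine, not part of the generated code.

import org.salgar.swf_statemachine.ssm.customersearchsm.enumeration.event.CustomerSearchSM_EventEnumerationImpl;
import org.salgar.swf_statemachine.ssm.customersearchsm.enumeration.state.CustomerSearchSM_StateEnumerationImpl;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.statemachine.StateMachine;
import org.springframework.statemachine.config.StateMachineFactory;
import org.springframework.stereotype.Component;

@Component
public class CustomerSearchStarter {

    @Autowired
    @Qualifier("CustomerSearchSM")
    private StateMachineFactory<CustomerSearchSM_StateEnumerationImpl, CustomerSearchSM_EventEnumerationImpl> factory;

    public void startSearch() {
        StateMachine<CustomerSearchSM_StateEnumerationImpl, CustomerSearchSM_EventEnumerationImpl> stateMachine =
                factory.getStateMachine();
        stateMachine.start();
        // fires the transition WAITING_CUSTOMERSEARCH_START -> CUSTOMERSEARCH_RUNNING
        stateMachine.sendEvent(CustomerSearchSM_EventEnumerationImpl.onStartSearch);
    }
}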

Project Structure:
If you look at my previous blogs, some Maven artifacts have disappeared and some new ones have appeared.

xtext_project_structure
Picture 2

The projects ‘swf_statemachine_model’ (which contained the UML model for the state machine) and ‘swf_statemachine_xpand’ (which contained the XPand template for Java code generation from UML models) were removed. New are: ‘swf_statemachine_domain_specific_language’, containing the XText DSL definitions; ‘swf_statemachine_domain_specific_language_generator’, containing the functionality MWE2 and Xtend need to create the Java code; ‘swf_statemachine_domain_specific_language_model’, containing the model for our DSL; and ‘swf_statemachine_domain_specific_language_creator’, containing the MWE2 workflow to create the Java code.

All these projects contain nearly standard functionality, so there is not much to explain other than one piece of functionality in ‘swf_statemachine_domain_specific_language_generator’: a class called ‘SwfReader’. The standard model reader class from XText is designed to work from Eclipse and has problems identifying Maven classpath entries; for this reason I had to modify it a little.

package org.salgar.swf_statemachine.xtext.reader.reader;

import org.apache.log4j.Logger;
import org.eclipse.xtext.mwe.Reader;

import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SwfReader extends Reader {
    private static final Logger LOGGER = Logger.getLogger(SwfReader.class);

    @Override
    public void setUseJavaClassPath(boolean isUse) {
        if (isUse) {
            Set<String> classPathEntries = new HashSet<>();
            retrieveClassPathEntries(Thread.currentThread().getContextClassLoader(), classPathEntries);
            List<String> tmp = new ArrayList<>(classPathEntries);

            for (String entry : tmp) {
                addPath(entry);
            }

        }
    }

    private Set<String> retrieveClassPathEntries(ClassLoader classLoader, Set<String> classPathEntries) {
        List<String> givenLoaderClassPathEntries = new ArrayList<String>();
        if (classLoader instanceof URLClassLoader) {
            URLClassLoader tmp = (URLClassLoader) classLoader;
            for(URL classPathURL : tmp.getURLs()) {
                String classPath = classPathURL.getPath().trim().toLowerCase();

                if (classPath.endsWith("/") || classPath.endsWith(".jar")) {
                    givenLoaderClassPathEntries.add(classPathURL.getFile());
                }
            }
            if(LOGGER.isDebugEnabled()) {
                LOGGER.debug("Classpath entries from thread context loader: {" + givenLoaderClassPathEntries.toString() + "}");
            }
            classPathEntries.addAll(givenLoaderClassPathEntries);
        }
        if (givenLoaderClassPathEntries.isEmpty() && (classLoader instanceof URLClassLoader || classLoader == null)) {
            givenLoaderClassPathEntries.addAll(retrieveSystemClassPathEntries());
            if(LOGGER.isDebugEnabled()) {
                LOGGER.debug("System classpath entries from thread context loader: {" + givenLoaderClassPathEntries.toString() + "}");
            }
            classPathEntries.addAll(givenLoaderClassPathEntries);
            return classPathEntries;
        }
        if( classLoader.getParent() != null) {
            retrieveClassPathEntries(classLoader.getParent(), classPathEntries);
        }
        return classPathEntries;
    }

    private static List<String> retrieveSystemClassPathEntries() {
        List<String> pathes = new ArrayList<>();
        String classPath = System.getProperty("java.class.path");
        String separator = System.getProperty("path.separator");
        String[] strings = classPath.split(separator);

        for(String path : strings) {
            pathes.add(path);
        }

        return pathes;
    }
}

Snippet 23

This version of the Reader searches all the entries of the given thread’s classloader and its parents, and finally also adds the entries from ‘java.class.path’ (split on ‘path.separator’); otherwise your MWE2 workflow will fail because it can’t resolve the entries coming from Maven dependency elements.

When we run the Maven build, it creates the necessary Java artifacts in the ‘swf_statemachine_domain_specific_language_creator’ project, under the directory ‘src/generated’. Nearly all the implementations are the same as in the previous blog, only some package structures changed, so after adapting some unit tests (which also proves my Test Rig concept) the unit tests completed successfully and everything ran correctly at the first start of the ‘swf_statemachine_techdemo’ web application.

When we run Maven, it executes the ‘GenerateStateMachine.mwe2’ from the project ‘swf_statemachine_domain_specific_language_creator’…

module org.salgar.swf_statemachine.xtext.GenerateStateMachine

import org.eclipse.emf.mwe.utils.*
import org.eclipse.xtext.xtext.generator.*
import org.eclipse.xtext.xtext.generator.model.project.*

var rootPath = ".."

Workflow {
		bean = org.salgar.swf_statemachine.xtext.StateMachineDslStandaloneSetup : languageStandaloneSetup {

		}

	    component =  org.salgar.swf_statemachine.xtext.reader.reader.SwfReader {
        		useJavaClassPath = true
        		uriFilter = org.eclipse.xtext.mwe.NameBasedFilter {
        			extension = 'ssm'
        		}

        		register = languageStandaloneSetup
        		loadResource = {
        			slot = "modelSlot"
        		}
        }

        component = org.eclipse.xtext.generator.GeneratorComponent {
            slot = "modelSlot"
            register = languageStandaloneSetup
            outlet = {
                path = "${rootPath}/src/generated/java"
            }
        }
}

Snippet 24

This configures our DSL for the workflow via the ‘org.salgar.swf_statemachine.xtext.StateMachineDslStandaloneSetup’ bean. The Reader, as mentioned before, loads our model file, and the GeneratorComponent creates, via Xtend, our Spring State Machine artifacts in the configured ‘/src/generated/java’ directory.
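
Behind the scenes, the Maven build hands this module name to the MWE2 launcher; invoked manually, that would look roughly like the following sketch, assuming the MWE2 launch runtime is on the classpath.

import org.eclipse.emf.mwe2.launch.runtime.Mwe2Launcher;

public class RunStateMachineWorkflow {
    public static void main(String[] args) {
        // the argument is the MWE2 module name, not a file path
        Mwe2Launcher.main(new String[] { "org.salgar.swf_statemachine.xtext.GenerateStateMachine" });
    }
}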

Tools:

  • XText
  • You can find the XText reference documentation here.

    XText is available as a plugin for IntelliJ IDEA and Eclipse IDE. You can read about and download them from these links.

    Plugins
    XText

    The advantage of using these plugins is that you get auto-complete help from both IDEs while developing your grammar. Furthermore, if you follow the default wizard in either IDE to create XText projects, the wizards will also create IntelliJ and Eclipse plugins that help you with syntax highlighting when you are creating a model file with your grammar.

    I didn’t include those in my project structure because they were not relevant for my feasibility study, but you can include them and consider it an exercise for the reader.

  • Xtend
  • You can find the Xtend reference documentation here.

    For all the other tooling, like Maven and git, you can find the necessary information in my previous blog; the only difference is that this version of the feasibility study lives on another git branch and can be downloaded via the following command.

  • Git
  • git clone -b XText git@github.com:mehmetsalgar/swf_statemachine.git

  • Maven
  • We can run the build with the ‘mvn clean install’ command. There is one thing to pay attention to: some projects still follow the model-driven approach and create artifacts from UML models, and as you might know from my previous blogs, code generation from UML depends on artifacts that can only be found in Eclipse p2 repositories. To fetch those, the Maven build should be started once with ‘mvn clean install -Pfull-build’; after that, these dependencies will be in the local Maven repository and the profile is not needed anymore.

    A successful build will look like the following.

    [INFO] ------------------------------------------------------------------------
    [INFO] Reactor Summary:
    [INFO]
    [INFO] Spring WebFlow - State Machine .................... SUCCESS [0.217s]
    [INFO] Spring WebFlow - State Machine - Comet ............ SUCCESS [1.271s]
    [INFO] Spring WebFlow - State Machine - Fornax Extensions  SUCCESS [0.461s]
    [INFO] Spring WebFlow - State Machine - Domain Model ..... SUCCESS [8.056s]
    [INFO] Spring WebFlow - State Machine - Domain Specific Language  SUCCESS [13.781s]
    [INFO] Spring WebFlow - State Machine - Domain Specific Language Generator  SUCCESS [0.308s]
    [INFO] Spring WebFlow - State Machine - Domain Specific Language Model  SUCCESS [0.032s]
    [INFO] Spring WebFlow - State Machine - Techdemo Domain Model  SUCCESS [3.060s]
    [INFO] Spring WebFlow - State Machine - Domain Specific Language Creator  SUCCESS [2.529s]
    [INFO] Spring WebFlow - State Machine - Statemachine Model Implementation  SUCCESS [0.366s]
    [INFO] Spring WebFlow - State Machine - TechDemo ......... SUCCESS [2.575s]
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
    

    Snippet 25

    The Maven configuration for these projects is pretty standard; there is only one special thing to mention. We are using two non-standard Maven directories, ‘xtext-gen’ and ‘xtend-gen’, which hold the generated code for XText and Xtend.

    The MWE2 Workflow uses the default ‘xtext-gen’ directory, but we have to configure the Xtend Maven plugin to use ‘xtend-gen’.

    <plugin>
       <groupId>org.eclipse.xtend</groupId>
       <artifactId>xtend-maven-plugin</artifactId>
       <version>${xtext-version}</version>
       <executions>
         <execution>
            <goals>
              <goal>compile</goal>
              <goal>testCompile</goal>
             </goals>
         </execution>
       </executions>
       <configuration>
         <outputDirectory>${project.basedir}/src/main/xtend-gen</outputDirectory>
         <testOutputDirectory>${project.basedir}/src/test/xtend-gen</testOutputDirectory>
        </configuration>
    </plugin>
    

    Snippet 26

    We also have to add these directories to the classpath.

    <plugin>
    	<groupId>org.codehaus.mojo</groupId>
    	<artifactId>build-helper-maven-plugin</artifactId>
    	<version>${build_helper.plugin.version}</version>
    	<executions>
    		<execution>
    			<id>add-source-xtend</id>
    			<phase>initialize</phase>
    				<goals>
    					<goal>add-source</goal>
    				</goals>
    				<configuration>
    					<sources>
    						<source>src/main/xtend-gen</source>
    					</sources>
    				</configuration>
    		</execution>
    		<execution>
    			<id>add-source</id>
    			<phase>initialize</phase>
    			<goals>
    				<goal>add-source</goal>
    				<goal>add-resource</goal>
    			</goals>
    			<configuration>
    				<sources>
    					<source>src/main/xtext-gen</source>
    				</sources>
    				<resources>
    					<resource>
    						<directory>src/main/xtext-gen</directory>
    						<excludes>
    							<exclude>**/*.java</exclude>
    							<exclude>**/*.g</exclude>
    						</excludes>
    					</resource>
    				</resources>
    			</configuration>
    		</execution>
    	</executions>
    </plugin>
    

    Snippet 27

  • Tomcat
  • You can find the instructions on how to configure and start Tomcat here in the previous blog.

    Conclusion:
    Well, I still think using a UML Model as the starting point for Spring State Machine is a better idea for complex and big projects, but it is nice to see that, with the help of XText, it is actually quite easy and feasible to create our own Domain Specific Language for Spring State Machine and use it. Can we keep an overall view when the project has more than 100 state machines modeled with our DSL, compared to a UML Model? I don’t know, it is a little bit a matter of personal taste, but at least you know from this blog that it is possible.

    As always, I hope this blog will be useful to somebody.


    Ajax, Spring Web Flow and Spring State Machine

    Foreword:
    A while ago, I developed a feasibility study application to research potential methods to reduce the chaos induced by extreme Ajax usage in Web Applications. The technology stack used was Spring Web Flow and a State Machine. In my opinion it produced good results and proved that it is feasible to use Spring Web Flow and State Machines in modern Web Applications. One point this method was criticized for, based on the comments I got, was its reliance on a custom-implemented State Machine. Any organization or person considering this solution was skeptical about the custom State Machine and asked for guarantees about its continued support. Those are naturally valid concerns: when people invest in such an Enterprise Web Application, they need assurances that the technology will not disappear in the next 2 months and that continued support will be there. Well, to calm those concerns I can now happily say that there is a reliable open source implementation of a State Machine: Spring State Machine.

    I don’t know where the Spring Organization found the motivation to develop a State Machine; maybe they also saw some valid points in my previous blog? :)

    Positively surprised by the existence of Spring State Machine, I decided to convert the feasibility study to use it; from this point on, you will read the story of how it went. You can also see this blog as a good example implementation of Spring State Machine.

    Content:

    Why use this solution:
    In the timeframe in which I developed this feasibility study, State Machines were quite unknown in the Java development world. In one specific discussion I had about my previous blog with a very renowned Software Architect and book author, who developed systems for US Navy ships based on State Machines and reached 99.99% error-free operation time (the only 2 errors found in his systems were hardware errors :)), he remarked that ‘For too long, Java developers were made to believe that State Machines were too complex for them, and that it was better for them to stay away’. For this reason, there was practically no usable State Machine for Web Applications. There were implementations, but they required specific DSLs and were customized too much for the special requirements of the industry branches they were developed for. There was none closely following the principles of the UML State Machine, which are really compatible with the requirements of a modern Web Application. Luckily those days are over; we now have Spring State Machine, and considering how mainstream Spring projects are, I am sure that State Machine ideas will find some traction.

    Tooling:
    One side note I want to make here for the people who read the previous blog. I used an Open Source modeling tool there (it is important to me that every technology used in this project is Open Source, so that if somebody wants to use it, it will not be blocked for cost reasons by management). The UML modeling tool that I used in the previous blog, Topcased, has since been transformed and further developed. It now exists under the name Papyrus, a mostly compatible product to Topcased, but with several small differences that I will mention.

    So let’s get our hands dirty.

    Model-To-Spring State Machine:
    First, I have to state that I am a big fan of Model Driven Software Development. Lately, with the abuse of Agile concepts, the specification of software has become a rare thing, and Agile practitioners are normally haters of modeling. I agree that a model that will be outdated and useless in 2 weeks is wasted effort, but what if our model lives with the project and evolves with every iteration of the project?

    A State Machine modeled in a UML tool guarantees exactly that: if you want to further develop your State Machine, you cannot leave the UML Model to become outdated in a corner; it has to evolve with the State Machine and the project. Unfortunately, Spring State Machine at the moment expects to be configured via Java code. For a small project that might be OK, but for complex projects these Java files will be too complex to keep an overview of and understand.

    Fortunately, Spring State Machine follows the concepts of the UML State Machine really closely, so a UML State Machine Diagram can easily be converted to runnable Java code with technologies like Eclipse M2T XPand.

    A state machine configuration looks like this for Spring State Machine….

    @Configuration
    @EnableStateMachineFactory(name = "TechDemoSM")
    public class TechDemoSMConfiguration
    		extends
    			EnumStateMachineConfigurerAdapter<TechDemoSM_StateEnumerationImpl, TechDemoSM_EventEnumerationImpl> {
    	private static final Logger LOG = Logger.getLogger(TechDemoSMConfiguration.class);
    
    	@Autowired
    	private TechDemoSMActionContainer techDemoSMActionContainer;
    
    	@Autowired
    	private TechDemoSMGuardContainer techDemoSMGuardContainer;
    
    	@Autowired
    	private TechDemoSMControlObjectLocator controlObjectLocator;
    
    	@Override
    	public void configure(
    			StateMachineConfigurationConfigurer<TechDemoSM_StateEnumerationImpl, TechDemoSM_EventEnumerationImpl> config)
    					throws Exception {
    		config.withConfiguration().listener(listener());
    	}
    
    	@Override
    	public void configure(
    			StateMachineStateConfigurer<TechDemoSM_StateEnumerationImpl, TechDemoSM_EventEnumerationImpl> states)
    					throws Exception {
    		states.withStates()
    				.initial(TechDemoSM_StateEnumerationImpl.CUSTOMERSEARCH_RUNNING, initialStateTechDemoSMAction())
    				.states(EnumSet.allOf(TechDemoSM_StateEnumerationImpl.class));
    	}
    
    	@Override
    	public void configure(
    			StateMachineTransitionConfigurer<TechDemoSM_StateEnumerationImpl, TechDemoSM_EventEnumerationImpl> transitions)
    					throws Exception {
    		transitions
    
    				//STATE - CUSTOMERSEARCH_RUNNING
    				.withExternal().source(TechDemoSM_StateEnumerationImpl.CUSTOMERSEARCH_RUNNING)
    				.target(TechDemoSM_StateEnumerationImpl.End).event(TechDemoSM_EventEnumerationImpl.onEnd)
    
    				.and()
    				//STATE - CUSTOMERSEARCH_RUNNING
    				.withExternal().source(TechDemoSM_StateEnumerationImpl.CUSTOMERSEARCH_RUNNING)
    				.target(TechDemoSM_StateEnumerationImpl.TIMEOUT).event(TechDemoSM_EventEnumerationImpl.onTimeout)
    
    				.and()
    				//STATE - CUSTOMERSEARCH_RUNNING
    				.withExternal().source(TechDemoSM_StateEnumerationImpl.CUSTOMERSEARCH_RUNNING)
    				.target(TechDemoSM_StateEnumerationImpl.SERVICE_NOT_AVAILABLE)
    				.event(TechDemoSM_EventEnumerationImpl.onServiceNotAvailable)
    
    		;
    	}
    
    	public StateMachineListener<TechDemoSM_StateEnumerationImpl, TechDemoSM_EventEnumerationImpl> listener() {
    		return new StateMachineListenerAdapter<TechDemoSM_StateEnumerationImpl, TechDemoSM_EventEnumerationImpl>() {
    			@Override
    			public void stateChanged(State<TechDemoSM_StateEnumerationImpl, TechDemoSM_EventEnumerationImpl> from,
    					State<TechDemoSM_StateEnumerationImpl, TechDemoSM_EventEnumerationImpl> to) {
    				LOG.info("State change to " + to.getId());
    			}
    
    			@Override
    			public void eventNotAccepted(Message<TechDemoSM_EventEnumerationImpl> event) {
    				LOG.warn("The event " + event.toString() + " is not accepted!");
    			}
    		};
    	}
    
    	@Bean
    	public Action<TechDemoSM_StateEnumerationImpl, TechDemoSM_EventEnumerationImpl> initialStateTechDemoSMAction() {
    		return new Action<TechDemoSM_StateEnumerationImpl, TechDemoSM_EventEnumerationImpl>() {
    			@Override
    			public void execute(
    					StateContext<TechDemoSM_StateEnumerationImpl, TechDemoSM_EventEnumerationImpl> context) {
    				AbstractTechDemoControlObject controlObject = controlObjectLocator.getControlObject();
    				((ControlObject) controlObject).resetStateMachine();
    				context.getExtendedState().getVariables().put("TechDemoSMControlObject", controlObject);
    			}
    		};
    	}
    }
    

    Snippet 1

    As you can see above, the EnumStateMachineConfigurerAdapter class configures our Spring State Machine via several configure methods (one for the State Machine itself, one for its States, one for its Transitions).

    The above State Machine is created from the following UML diagram.

    TechDemoSM
    Picture 1

    It is quite a simple State Machine, demonstrating how the States, Events, Transitions and Guards are taken from the UML model and transferred to the Java code and the Spring State Machine configuration.

    What we have here is a UML Model created with Eclipse Papyrus, whose corresponding XMI is interpreted with the Eclipse M2T Framework and converted to Java code with XPand.

    Sounds simple, doesn’t it? Actually, it really is.

    The EnumStateMachineConfigurerAdapter class configures the State Machine in chunks.

    The first block is the definition of the State Machine,

    	@Override
    	public void configure(
    			StateMachineConfigurationConfigurer<TechDemoSM_StateEnumerationImpl, TechDemoSM_EventEnumerationImpl> config)
    					throws Exception {
    		config.withConfiguration().listener(listener());
    	}
    

    Snippet 2

    in which we define the basic functionality of the State Machine: its listeners (which I will explain later), whether it will auto-start or not, and so on.
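
    As a small illustration (my own sketch, not part of the generated code), the same configure method could, for example, also switch auto-start off, so that the machine is only started explicitly, exactly as the Test Rig does later with stateMachine.start():

    	@Override
    	public void configure(
    			StateMachineConfigurationConfigurer<TechDemoSM_StateEnumerationImpl, TechDemoSM_EventEnumerationImpl> config)
    					throws Exception {
    		config.withConfiguration()
    				.autoStartup(false) // we call stateMachine.start() ourselves, e.g. in a unit test
    				.listener(listener()); // the listener shown later in Snippet 5
    	}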

    Next, we inform the State Machine about its States,

    	@Override
    	public void configure(
    			StateMachineStateConfigurer<TechDemoSM_StateEnumerationImpl, TechDemoSM_EventEnumerationImpl> states)
    					throws Exception {
    		states.withStates()
    				.initial(TechDemoSM_StateEnumerationImpl.CUSTOMERSEARCH_RUNNING, initialStateTechDemoSMAction())
    				.states(EnumSet.allOf(TechDemoSM_StateEnumerationImpl.class));
    	}
    

    Snippet 3

    which is nothing more than passing the State Enumeration, created via Eclipse M2T XPand from the UML Model, as a template parameter and letting the State Machine read the enumeration.
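
    The generated State Enumeration itself is plain Java; a heavily simplified sketch of what XPand produces for TechDemoSM could look like the following (the real generated enumeration additionally carries metadata such as the state name and the owning state machine name, which you will see used in Snippet 12):

    public enum TechDemoSM_StateEnumerationImpl {
    	CUSTOMERSEARCH_RUNNING,
    	TIMEOUT,
    	SERVICE_NOT_AVAILABLE,
    	End
    }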

    The third step is introducing the State Transitions of the State Machine,

    	@Override
    	public void configure(
    			StateMachineTransitionConfigurer<TechDemoSM_StateEnumerationImpl, TechDemoSM_EventEnumerationImpl> transitions)
    					throws Exception {
    		transitions
    				//STATE - CUSTOMERSEARCH_RUNNING
    				.withExternal().source(TechDemoSM_StateEnumerationImpl.CUSTOMERSEARCH_RUNNING)
    				.target(TechDemoSM_StateEnumerationImpl.End).event(TechDemoSM_EventEnumerationImpl.onEnd)
    				.and()
    				//STATE - CUSTOMERSEARCH_RUNNING
    				/**
    				 * We received an error Event and we are transitioning to the TIMEOUT state.
    				 */
    				.withExternal().source(TechDemoSM_StateEnumerationImpl.CUSTOMERSEARCH_RUNNING)
    				.target(TechDemoSM_StateEnumerationImpl.TIMEOUT).event(TechDemoSM_EventEnumerationImpl.onTimeout)
    				.and()
    				//STATE - CUSTOMERSEARCH_RUNNING
    				.withExternal().source(TechDemoSM_StateEnumerationImpl.CUSTOMERSEARCH_RUNNING)
    				.target(TechDemoSM_StateEnumerationImpl.SERVICE_NOT_AVAILABLE)
    				.event(TechDemoSM_EventEnumerationImpl.onServiceNotAvailable);
    	}
    

    Snippet 4

    In my previous blog, when I used Topcased as the UML modeling tool, Topcased had no representation of transition activities or guards, so I had to use custom-made UML Stereotypes to define the Activities and Guard conditions on a transition. Eclipse Papyrus has developed further in this area, and one can express the Transition Activities and Guards directly in the UML model, making the UML Stereotypes obsolete. As you can see, I created the Action and Guard in Eclipse Papyrus, and Eclipse M2T XPand is able to detect those.

    guard_action
    Picture 2

    So when Eclipse M2T XPand traverses the model and detects Transition Activities and Guards, it creates Spring Beans to be Auto-Wired. (The above was a simplistic example that doesn’t have dedicated Activity and Guard objects; further on in the blog, I will show you a State Machine that has custom Activities and Guard Conditions.) In the simple case, the State Machine realizes the Transition without executing any Activity or checking any Guard Condition.

    Next comes the configuration of the listeners for our State Machine; we want to receive Events from the State Machine to be able to observe its behavior during development and in production.

    	public StateMachineListener<TechDemoSM_StateEnumerationImpl, TechDemoSM_EventEnumerationImpl> listener() {
    		return new StateMachineListenerAdapter<TechDemoSM_StateEnumerationImpl, TechDemoSM_EventEnumerationImpl>() {
    			@Override
    			public void stateChanged(State<TechDemoSM_StateEnumerationImpl, TechDemoSM_EventEnumerationImpl> from,
    					State<TechDemoSM_StateEnumerationImpl, TechDemoSM_EventEnumerationImpl> to) {
    				LOG.info("State change to " + to.getId());
    			}
    
    			@Override
    			public void eventNotAccepted(Message<TechDemoSM_EventEnumerationImpl> event) {
    				LOG.warn("The event " + event.toString() + " is not accepted!");
    			}
    		};
    	}
    

    Snippet 5

    For this purpose, we have one stateChanged and one eventNotAccepted listener, telling us the from and to States and, in case the State Machine does not accept an event, the name of that Event.

    Now let’s look at how the UML Model structure is transferred to the Java file structure for the Spring State Machine Configuration, Enumerations, Actions and Guards.

    Directory Structure for CustomerSearchSM
    Picture 3

    Eclipse M2T XPand reads the name of the State Machine and the package names from the UML Model and creates the directory structure accordingly. I organized it so that Actions and Guards are placed under Java packages whose names originate from the State they are linked to. I think that makes it easier to relate the Java code to the UML Model in the development and production phases.

    And finally a method arranging the initial transition activity, which sets the Control Object of the State Machine.

    	public Action<TechDemoSM_StateEnumerationImpl, TechDemoSM_EventEnumerationImpl> initialStateTechDemoSMAction() {
    		return new Action<TechDemoSM_StateEnumerationImpl, TechDemoSM_EventEnumerationImpl>() {
    			@Override
    			public void execute(
    					StateContext<TechDemoSM_StateEnumerationImpl, TechDemoSM_EventEnumerationImpl> context) {
    				AbstractTechDemoControlObject controlObject = controlObjectLocator.getControlObject();
    				((ControlObject) controlObject).resetStateMachine();
    				context.getExtendedState().getVariables().put("TechDemoSMControlObject", controlObject);
    			}
    		};
    	}
    

    Snippet 6

    There are a few interesting points here.

    If you read my previous blog, you know that I am a strong advocate that a State Machine should protect the variables that represent its State; this area should only be modifiable over the Event mechanism of the State Machine, and outside of that it should only be available read-only. Spring State Machine has a loose concept of protecting its variables: it uses a HashMap, ‘context.getExtendedState().getVariables()’, for this purpose, which can be modified from several sources in an uncontrolled fashion.

    For this reason, I modeled specific objects called ControlObjects in the UML Model (at the end of the day we are doing Model Driven Software Development), and those have to be initialized in the initial Transition of a State Machine.
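
    To make the idea more tangible, here is a hypothetical, heavily simplified sketch of such a ControlObject; the real generated classes differ, I only reuse the ControlObject interface with its resetStateMachine() method as seen in Snippet 6. The GUI layer gets read-only getters, mutation happens only from Transition Actions, and the object can be read back from the extended state variables that the initial Action filled:

    public class CustomerSearchControlObjectSketch implements org.salgar.statemachine.domain.ControlObject {
    	private String customerNumber;
    	private Boolean renderCustomerSearchRunning = Boolean.FALSE;

    	// the GUI layer only reads these values
    	public String getCustomerNumber() { return customerNumber; }
    	public Boolean getRenderCustomerSearchRunning() { return renderCustomerSearchRunning; }

    	// mutators stay package private, called only from Transition Actions
    	void searchStarted(String customerNumber) {
    		this.customerNumber = customerNumber;
    		this.renderCustomerSearchRunning = Boolean.TRUE;
    	}

    	@Override
    	public void resetStateMachine() {
    		customerNumber = null;
    		renderCustomerSearchRunning = Boolean.FALSE;
    	}
    }

    // reading it back, as the accessors used in Snippet 12 do internally:
    // (ControlObject) stateMachine.getExtendedState().getVariables().get("CustomerSearchSMControlObject")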

    -A more complex State Machine:
    As a warm-up, I chose a simple State Machine for the previous example.

    Now we can go into the details of a more complex State Machine. In my previous blog I configured the State Machines via Spring configuration files; Spring State Machine instead uses Auto-wiring for dependency injection. This causes some complications. After this blog, I will research the possibilities of configuring Spring State Machine via configuration files, but let me show how I did it for this blog.

    First, I have to say that one of the biggest advantages of developing a Web Application via State Machines is that you can completely test all the business logic in a ‘Test Rig‘ without attaching it to any GUI layer. That way you can have 95% of your application tested without starting a Tomcat server and making expensive round trips.

    For this purpose, State Machine Activities and Guard Conditions should be representable both as real implementations and as test versions. This is really easy to do with Spring configuration files, but to achieve it with Auto-Wiring we have to create some indirections.

    Below is the picture of the CustomerSearchSM State Machine; it is relatively more complex than TechDemoSM.

    CustomerSearchSM
    Picture 4

    The main changes occur in the method…

    	@Override
    	public void configure(
    			StateMachineTransitionConfigurer<CustomerSearchSM_StateEnumerationImpl, CustomerSearchSM_EventEnumerationImpl> transitions)
    

    Snippet 7

    now the transitions have Activities and Guard Conditions…

    //STATE - CUSTOMER_AUTHENTICATED
    .withExternal()
    .source(CustomerSearchSM_StateEnumerationImpl.CUSTOMER_AUTHENTICATED)
    .target(CustomerSearchSM_StateEnumerationImpl.ORDERS_LOADING)
    .event(CustomerSearchSM_EventEnumerationImpl.onCustomerJoinedClicked)
    .guard(customerSearchSMGuardContainer
    			.getCUSTOMER_AUTHENTICATED___ORDERS_LOADING_OrdersLoading_isOrderSearchRunning_guard()
    					.CUSTOMER_AUTHENTICATED_ORDERS_LOADING_OrdersLoading_isOrderSearchRunning_guard())
    .action(customerSearchSMActionContainer
    			.getCUSTOMER_AUTHENTICATED___ORDERS_LOADING_OrdersLoading_ProcessLoadingOrders_action()
    					.CUSTOMER_AUTHENTICATED_ORDERS_LOADING_OrdersLoading_ProcessLoadingOrders_action())
    .and()
    

    Snippet 8

    To ensure that we can have production and test implementations, we have to build an indirection for Spring Dependency Injection: Activities are defined on ‘customerSearchSMActionContainer’ and Guard Conditions on ‘customerSearchSMGuardContainer’.

    ‘CustomerSearchSMGuardContainer’ looks like the following…

    @Configuration
    public class CustomerSearchSMGuardContainer {
    	@Autowired
    	private org.salgar.swf_statemachine.customersearch.configuration.customer_authenticated.guard.CUSTOMER_AUTHENTICATED___CUSTOMER_JOINED_CustomerJoined_isOrdersFound_guard customer_authenticated___CUSTOMER_JOINED_CustomerJoined_isOrdersFound_guard;
    
    	public org.salgar.swf_statemachine.customersearch.configuration.customer_authenticated.guard.CUSTOMER_AUTHENTICATED___CUSTOMER_JOINED_CustomerJoined_isOrdersFound_guard getCUSTOMER_AUTHENTICATED___CUSTOMER_JOINED_CustomerJoined_isOrdersFound_guard() {
    		return customer_authenticated___CUSTOMER_JOINED_CustomerJoined_isOrdersFound_guard;
    	}
    
    	@Autowired
    	private org.salgar.swf_statemachine.customersearch.configuration.customer_authenticated.guard.CUSTOMER_AUTHENTICATED___ORDERS_LOADING_OrdersLoading_isOrderSearchRunning_guard customer_authenticated___ORDERS_LOADING_OrdersLoading_isOrderSearchRunning_guard;
    
    	public org.salgar.swf_statemachine.customersearch.configuration.customer_authenticated.guard.CUSTOMER_AUTHENTICATED___ORDERS_LOADING_OrdersLoading_isOrderSearchRunning_guard getCUSTOMER_AUTHENTICATED___ORDERS_LOADING_OrdersLoading_isOrderSearchRunning_guard() {
    		return customer_authenticated___ORDERS_LOADING_OrdersLoading_isOrderSearchRunning_guard;
    	}
    }
    

    Snippet 9

    The real implementation beans are initialized in the system (you will see that in more detail when I explain the project structure) and Auto-Wired here…

    For example, ‘CUSTOMER_AUTHENTICATED___CUSTOMER_JOINED_CustomerJoined_isOrdersFound_guard’…

    @Configuration
    public class CUSTOMER_AUTHENTICATED___CUSTOMER_JOINED_CustomerJoined_isOrdersFound_guard {
    	@Autowired(required = false)
    	private ICUSTOMER_AUTHENTICATED_CUSTOMER_JOINED_CustomerJoined_isOrdersFoundGuard realImplementation;
    
    	@Bean
    	public Guard<CustomerSearchSM_StateEnumerationImpl, CustomerSearchSM_EventEnumerationImpl> CUSTOMER_AUTHENTICATED_CUSTOMER_JOINED_CustomerJoined_isOrdersFound_guard() {
    		return new Guard<CustomerSearchSM_StateEnumerationImpl, CustomerSearchSM_EventEnumerationImpl>() {
    			@Override
    			public boolean evaluate(
    					StateContext<CustomerSearchSM_StateEnumerationImpl, CustomerSearchSM_EventEnumerationImpl> context) {
    				if (realImplementation != null) {
    					return realImplementation.evaluate(context);
    				}
    				return false;
    			}
    		};
    	}
    	public interface ICUSTOMER_AUTHENTICATED_CUSTOMER_JOINED_CustomerJoined_isOrdersFoundGuard {
    		boolean evaluate(
    				StateContext<CustomerSearchSM_StateEnumerationImpl, CustomerSearchSM_EventEnumerationImpl> context);
    	}
    }
    

    Snippet 10

    Here we are trying to inject ‘realImplementation’ with the ‘@Autowired(required = false)’ annotation. ‘required = false’ is critical here because, if we want to test a specific part of the State Machine, we should be able to start the State Machine with a partial configuration; when we are testing the CustomerSearch component, it should be possible to run the State Machine without the OrderSearch component.
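
    For example, in the Test Rig a stub implementation of that interface could be registered like this (a hypothetical sketch, the class and method names are mine; imports of the generated guard class and enumerations omitted):

    @Configuration
    public class CustomerJoinedGuardTestConfiguration {
    	@Bean
    	public CUSTOMER_AUTHENTICATED___CUSTOMER_JOINED_CustomerJoined_isOrdersFound_guard.ICUSTOMER_AUTHENTICATED_CUSTOMER_JOINED_CustomerJoined_isOrdersFoundGuard stubGuard() {
    		return new CUSTOMER_AUTHENTICATED___CUSTOMER_JOINED_CustomerJoined_isOrdersFound_guard.ICUSTOMER_AUTHENTICATED_CUSTOMER_JOINED_CustomerJoined_isOrdersFoundGuard() {
    			@Override
    			public boolean evaluate(
    					StateContext<CustomerSearchSM_StateEnumerationImpl, CustomerSearchSM_EventEnumerationImpl> context) {
    				// always let the transition proceed in this test scenario
    				return true;
    			}
    		};
    	}
    }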

    The real implementation of the ‘ICUSTOMER_AUTHENTICATED_CUSTOMER_JOINED_CustomerJoined_isOrdersFoundGuard’ will look like the following…

    @Configuration
    public class CUSTOMER_AUTHENTICATED_CUSTOMER_JOINED_CustomerJoinedTransitionGuardImpl {
        @Bean
        public CUSTOMER_AUTHENTICATED___CUSTOMER_JOINED_CustomerJoined_isOrdersFound_guard.ICUSTOMER_AUTHENTICATED_CUSTOMER_JOINED_CustomerJoined_isOrdersFoundGuard getCUSTOMER_AUTHENTICATED_CUSTOMER_JOINED_CustomerJoinedTransitionGuard() {
            return new CUSTOMER_AUTHENTICATED___CUSTOMER_JOINED_CustomerJoined_isOrdersFound_guard.ICUSTOMER_AUTHENTICATED_CUSTOMER_JOINED_CustomerJoined_isOrdersFoundGuard() {
                @Override
                public boolean evaluate(StateContext<CustomerSearchSM_StateEnumerationImpl, CustomerSearchSM_EventEnumerationImpl> context) {
                    CustomerSearchSMControlObject customerSearchSMControlObject = (CustomerSearchSMControlObject) CustomerSearchSMControlObjectAccessor.getControlObject(context.getStateMachine());
    
                    if (FindOrdersSM_StateEnumerationImpl.ORDERS_FOUND
                            .equals(((StateMachine)customerSearchSMControlObject.getFindOrdersSlaveSM()).getState().getId())) {
                        return true;
                    }
                    return false;
                }
            };
        }
    }
    

    Snippet 11

    in which we check that a Slave State Machine is in a specific State in order to let the transition proceed.

    To tell the truth, the Auto-Wired solution produces too much code clutter; I would rather have the option to initialize Spring State Machine via Spring configuration files and decide there what to inject or not inject, or which implementation to inject.

    And if you want proof that the ‘Test Rig‘ concept I discussed in the previous blog works: I converted the feasibility application from my custom State Machine implementation to Spring State Machine using only the Unit Tests in the Test Rig. I didn’t start Tomcat once to see whether the application worked or not, and when I did run the application in Tomcat, I didn’t have to fix anything other than adapting Spring Web Flow to use Spring State Machine.

    How the magic works:
    -Project Structure:
    I will try to explain here the project structure and the Maven and git configurations, explain how Eclipse M2T XPand/MWE works, and finally how to run the feasibility application in Tomcat.

    Project Structure
    Picture 5

    Most of the Maven projects are there for the Eclipse M2T (Model to Text) translation. swf_statemachine is the main Maven project; swf_statemachine_eclipse_dependencies, swf_statemachine_fornax_extension, swf_statemachine_xpand and swf_statemachine_techdemo_domain are all preparation for Eclipse M2T, which mainly occurs in the project swf_statemachine_sm_model. swf_statemachine_model_impl is where we have the Test Rig implementation and the real implementations of Activities and Guards, and finally swf_statemachine_techdemo is where everything is put together to be deployed to Tomcat.

    Other than swf_statemachine_sm_model and swf_statemachine_model_impl, nearly all the functionality stayed similar to the original blog, where you can find a detailed explanation of their functionality. I will only explain swf_statemachine_sm_model and swf_statemachine_model_impl here.

    As you might understand by now, the project swf_statemachine_sm_model contains the whole UML Model and the code generation mechanism. Since we might want different implementations for production and test, or even several different implementations for production, it is wiser to have a separate project that contains the implementation classes for the Guards and Actions, and this project is swf_statemachine_model_impl.

    swf_statemachine_impl
    Picture 6

    In this picture, you can see that I organized the action and guard implementations under packages named after their UML State Machine, and the naming convention I used helps identification: the first part of the name is the source state, the second part the target state, and the third the transition name. This way we have a quick correlation between the UML Model and the Java code.
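
    To make the convention concrete, one of the generated guard names from Snippet 8 decomposes like this:

    // CUSTOMER_AUTHENTICATED___ORDERS_LOADING_OrdersLoading_isOrderSearchRunning_guard
    //   source state:    CUSTOMER_AUTHENTICATED
    //   target state:    ORDERS_LOADING
    //   transition name: OrdersLoading
    //   guard name:      isOrderSearchRunning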

    This project also contains the Unit Test that I mentioned for the ‘Test Rig‘; the following is a small snippet from the Test class.

    @Test(dependsOnMethods = { "initialisation" }, enabled = true)
    public void startSearch() {
    	StateMachine<CustomerSearchSM_StateEnumerationImpl, CustomerSearchSM_EventEnumerationImpl> stateMachine = StateMachineFactories.getInstance().getCustomerSearchSMFactory().getStateMachine();
    	stateMachine.start();
    
    	String customerNumber = "987654321";
    
    	Assert.assertNotNull(stateMachine);
    	Assert.assertEquals(CustomerSearchSM
    			.getStateMachineName(), stateMachine.getState().getId().getStateMachineName().getStateMachineName());
    
    	Assert.assertEquals(
    			WAITING_CUSTOMERSEARCH_START.getStateName(),
    				stateMachine.getState().getId().getStateName());
    	Assert.assertEquals(Boolean.TRUE,
    			CustomerSearchSMControlObjectAccessor.getControlObject(stateMachine).getRenderCustomerSearchInput());
    	Assert.assertEquals(Boolean.FALSE,
    			CustomerSearchSMControlObjectAccessor.getControlObject(stateMachine).getRenderCustomerSearchRunning());
    	Assert.assertEquals(Boolean.FALSE,
    			CustomerSearchSMControlObjectAccessor.getControlObject(stateMachine).getRenderCustomerSearchAuthentication());
    	Assert.assertEquals(Boolean.FALSE,
    			CustomerSearchSMControlObjectAccessor.getControlObject(stateMachine).getRenderCustomerSearchFound());
    	Assert.assertEquals(Boolean.FALSE,
    			CustomerSearchSMControlObjectAccessor.getControlObject(stateMachine).getRenderCustomerJoin());
    	Assert.assertEquals(Boolean.FALSE,
    			CustomerSearchSMControlObjectAccessor.getControlObject(stateMachine).getRenderCustomerOrders());
    	Assert.assertEquals(Boolean.FALSE,
    			CustomerSearchSMControlObjectAccessor.getControlObject(stateMachine).getRenderCustomerOrderLoading());
    
    	CustomerSearchStartEventPayload customerSearchStartEventPayload = new CustomerSearchStartEventPayload();
    	customerSearchStartEventPayload.setCustomerNumber(customerNumber);
    	Message<CustomerSearchSM_EventEnumerationImpl> message = MessageBuilder.withPayload(CustomerSearchSM_EventEnumerationImpl.onStartSearch).setHeader("customerSearchStartEventPayload", customerSearchStartEventPayload).build();
    
    	CustomerManager customerManager = (CustomerManager) this.applicationContext
    			.getBean("customerManager");
    	customerManager.findCustomer(anyObject(String.class),
    			anyObject(Object.class));
    	EasyMock.replay(customerManager);
    	stateMachine.sendEvent(message);
    	EasyMock.verify(customerManager);
    
    	Assert.assertEquals(
    			CUSTOMERSEARCH_RUNNING.getStateName(),
    			stateMachine.getState().getId().getStateName());
    	Assert.assertEquals(customerNumber,
    			CustomerSearchSMControlObjectAccessor.getControlObject(stateMachine).getCustomerNumber());
    	Assert.assertEquals(Boolean.FALSE,
    			CustomerSearchSMControlObjectAccessor.getControlObject(stateMachine).getRenderCustomerSearchInput());
    	Assert.assertEquals(Boolean.TRUE,
    			CustomerSearchSMControlObjectAccessor.getControlObject(stateMachine).getRenderCustomerSearchRunning());
    	Assert.assertEquals(Boolean.FALSE,
    			CustomerSearchSMControlObjectAccessor.getControlObject(stateMachine).getRenderCustomerSearchAuthentication());
    	Assert.assertEquals(Boolean.FALSE,
    			CustomerSearchSMControlObjectAccessor.getControlObject(stateMachine).getRenderCustomerSearchFound());
    	Assert.assertEquals(Boolean.FALSE,
    			CustomerSearchSMControlObjectAccessor.getControlObject(stateMachine).getRenderCustomerJoin());
    	Assert.assertEquals(Boolean.FALSE,
    			CustomerSearchSMControlObjectAccessor.getControlObject(stateMachine).getRenderCustomerOrders());
    	Assert.assertEquals(Boolean.FALSE,
    			CustomerSearchSMControlObjectAccessor.getControlObject(stateMachine).getRenderCustomerOrderLoading());
    }
    

    Snippet 12

    I will not go into a deep analysis of the unit test and the Test Rig; I discussed those quite extensively in the previous blog, and you can find it there. What to pay attention to here: since we know exactly which values the GUI elements should have in every state, we can prove those in our unit tests, and our application is 95% tested without even starting Tomcat once.

    -Eclipse M2T and MWE:
    Let’s start with swf_statemachine_sm_model; this is the Maven project that contains our UML Model. ‘swf_sm_model.di’ is the Eclipse Papyrus file (with the underlying XMI UML representation in ‘swf_sm_model.uml’) which contains TechDemoSM, CustomerSearchSM, FindCustomerSM and FindOrdersSM.

    The technology responsible for creating executable Java code from this UML Model is Eclipse M2T MWE (Modeling Workflow Engine), which tells M2T which UML Model to read and which XPand template to use to generate the Java code.

    It looks like the following…

    module swf_statemachine_sm_model
    
    import org.eclipse.emf.mwe.utils.*
    import org.eclipse.xtext.generator.*
    import org.eclipse.xtext.ui.generator.*
    import org.eclipse.xpand2.*
    
    import org.salgar.swf_statemachine.uml2.model.ExtendedUML2Metamodel
    
    import org.salgar.swf_statemachine.uml2.Setup
    import org.eclipse.xtend.typesystem.uml2.*
    import org.eclipse.xtend.typesystem.uml2.UML2MetaModel
    import org.eclipse.xtend.typesystem.emf.XmiReader
    import org.eclipse.xtend.typesystem.uml2.profile.ProfilingExtensions.XmiReader
    import org.eclipse.xtend.typesystem.uml2.profile.ProfileMetaModel
    import org.eclipse.emf.mwe.utils.Reader
    var targetDir = "src-gen"
    var fileEncoding = "Cp1252"
    var modelPath = "src/model"
    var projectName = "swf_statemachine_sm_model"
    var runtimeProject
    
    var list.set.property = 'order'
    var type_header_text = ""
    var annotation_source_key = ""
    var type_footer_text = ""
    var javabasic_entities = ""
    var classes_operation_implementation_strategy ="none"
    var javabasic_generateSerialVersionUID = "true"
    var use_overridden_equals_hashcode_toString= "true"
    var java_version = "5"
    var generate_additional_collection_methods = ""
    
    Workflow {
    	bean = org.eclipse.emf.mwe.utils.StandaloneSetup {
    		platformUri=".."
    		projectMapping = { projectName = "${projectName}" path = "${runtimeProject}" }
    		projectMapping = { projectName = "swf_statemachine_domain" path = "../swf_statemachine_domain" }
    		logResourceUriMap = false
    		scanClassPath = false
    	}
    	bean = org.salgar.swf_statemachine.uml2.Setup {
    		standardUML2Setup = true
     	}
    
     	bean = org.eclipse.xtend.typesystem.uml2.UML2MetaModel : umlMM { }
    
     	bean = org.eclipse.xtend.typesystem.uml2.profile.ProfileMetaModel : swf_statemachine {
     		profile = "platform:/resource/swf_statemachine_sm_model/src/main/resources/swf_statemachine.profile.uml"
     	}
     	
     	bean = org.eclipse.xtend.typesystem.uml2.profile.ProfileMetaModel : datatype {
     		profile = "platform:/resource/swf_statemachine_domain/src/main/resources/model/Datatype.profile.uml"
     	}
     	
     	bean = org.eclipse.xtend.typesystem.uml2.profile.ProfileMetaModel : java {
     		profile = "platform:/resource/swf_statemachine_domain/src/main/resources/model/Java.profile.uml"
     	}
     	
     	component = org.eclipse.emf.mwe.utils.Reader {
     		uri = "platform:/resource/swf_statemachine_sm_model/src/main/resources/model/swf_sm_model.uml"
     		modelSlot = "model"
     	}
     	
     	component = org.eclipse.xpand2.Generator : enumerationGenerator {
     		metaModel = umlMM
     		metaModel = swf_statemachine
     		
     		globalVarDef = { name = "list_set_property" value = "'${list.set.property}'" }
     		globalVarDef = { name = "type_header_text" value = "''" }
     		globalVarDef = { name = "annotation_source_key" value = "''" }
     		globalVarDef = { name = "type_footer_text" value = "''" }
     		globalVarDef = { name = "javabasic_entities" value = "''" }
     		globalVarDef = { name = "classes_operation_implementation_strategy" value = "'${classes_operation_implementation_strategy}'" }
     		globalVarDef = { name = "javabasic_generateSerialVersionUID" value = "'${javabasic_generateSerialVersionUID}'" }
     		globalVarDef = { name = "use_overridden_equals_hashcode_toString" value = "'${use_overridden_equals_hashcode_toString}'" }
     		globalVarDef = { name = "java_version" value = "${java_version}" }
     		
     		fileEncoding = "ISO-8859-1"
     		outlet = { path = "${runtimeProject}/src/generated/java" postprocessor = org.eclipse.xpand2.output.JavaBeautifier {} }
    
    		advice = "templates::advices::javaBasicAssociationAdvices"
    		
    		expand = "template::stateMachineEnumeration::Root FOR model"
     	}
    
     	component = org.eclipse.xpand2.Generator : javaGenerator {
     		metaModel = umlMM
     		metaModel = datatype
     		metaModel = java
     		globalVarDef = { name = "list_set_property" value = "'${list.set.property}'" }
     		globalVarDef = { name = "type_header_text" value = "''" }
     		globalVarDef = { name = "annotation_source_key" value = "''" }
     		globalVarDef = { name = "type_footer_text" value = "''" }
     		globalVarDef = { name = "javabasic_entities" value = "''" }
     		globalVarDef = { name = "classes_operation_implementation_strategy" value = "'${classes_operation_implementation_strategy}'" }
     		globalVarDef = { name = "javabasic_generateSerialVersionUID" value = "'${javabasic_generateSerialVersionUID}'" }
     		globalVarDef = { name = "use_overridden_equals_hashcode_toString" value = "'${use_overridden_equals_hashcode_toString}'" }
     		globalVarDef = { name = "java_version" value = "${java_version}" }
    
     		fileEncoding = "ISO-8859-1"
     		outlet = { path = "${runtimeProject}/src/generated/java" postprocessor = org.eclipse.xpand2.output.JavaBeautifier {} }
    
    		advice = "templates::advices::javaBasicAssociationAdvices"
    
    		expand = "template::Root::Root FOR model"
     	}
    
     	component = org.eclipse.xpand2.Generator : papyrusGenerator {
         		metaModel = umlMM
         		metaModel = swf_statemachine
    
         		fileEncoding = "ISO-8859-1"
         		outlet = { path = "${runtimeProject}/src/generated/java" postprocessor = org.eclipse.xpand2.output.JavaBeautifier {} }
    
        		expand = "template::papyrusSSM::Root FOR model"
         	}
    }
    

    Snippet 13

    The major components…

    	bean = org.eclipse.emf.mwe.utils.StandaloneSetup {
    		platformUri=".."
    		projectMapping = { projectName = "${projectName}" path = "${runtimeProject}" }
    		projectMapping = { projectName = "swf_statemachine_domain" path = "../swf_statemachine_domain" }
    		logResourceUriMap = false
    		scanClassPath = false
    	}
    

    Snippet 14

    which sets up the Eclipse environment for MWE and maps the Eclipse project dependencies so that the workflow can run under Maven…
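
    Under the hood (a rough sketch of my understanding, not code from the project), such a projectMapping entry essentially registers the project in EMF’s platform resource map, so that the ‘platform:/resource/…’ URIs used later in the workflow can also be resolved outside of Eclipse:

    import org.eclipse.emf.common.util.URI;
    import org.eclipse.emf.ecore.plugin.EcorePlugin;

    public class PlatformMappingSketch {
    	public static void main(String[] args) {
    		// in the spirit of: projectMapping = { projectName = "swf_statemachine_domain" path = "../swf_statemachine_domain" }
    		EcorePlugin.getPlatformResourceMap().put(
    				"swf_statemachine_domain",
    				URI.createFileURI("../swf_statemachine_domain/"));
    	}
    }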

    	bean = org.salgar.swf_statemachine.uml2.Setup {
    		standardUML2Setup = true
     	}
    

    Snippet 15

    This one was tricky…

    Eclipse Papyrus uses UML version 5.0.0, which until now is not defined in the Xtend typesystem, so I had to define it manually in the Setup class.

    import org.eclipse.emf.ecore.EPackage;
    import org.eclipse.uml2.uml.UMLPackage;

    public class Setup
    	extends org.eclipse.xtend.typesystem.uml2.Setup {
    	private static final String UML2_500_NS_URI = "http://www.eclipse.org/uml2/5.0.0/UML";

    	@Override
    	public void setStandardUML2Setup(boolean b) {
    		super.setStandardUML2Setup(b);
    		// map the UML 5.0.0 namespace URI to the already registered UML2 EPackage
    		EPackage.Registry.INSTANCE.put(UML2_500_NS_URI, EPackage.Registry.INSTANCE.get(UMLPackage.eINSTANCE.getNsURI()));
    	}
    }
    

    Snippet 16

    A custom UML Profile defining Java types…

     	bean = org.eclipse.xtend.typesystem.uml2.profile.ProfileMetaModel : java {
     		profile = "platform:/resource/swf_statemachine_domain/src/main/resources/model/Java.profile.uml"
     	}
    

    Snippet 17

    Reading the UML Model…

     	component = org.eclipse.emf.mwe.utils.Reader {
     		uri = "platform:/resource/swf_statemachine_sm_model/src/main/resources/model/swf_sm_model.uml"
     		modelSlot = "model"
     	}
    

    Snippet 18

    Creating the Enumerations for StateMachines, States and Events…

     component = org.eclipse.xpand2.Generator : enumerationGenerator {
     	metaModel = umlMM
     	metaModel = swf_statemachine
     		
     	globalVarDef = { name = "list_set_property" value = "'${list.set.property}'" }
     	globalVarDef = { name = "type_header_text" value = "''" }
     	globalVarDef = { name = "annotation_source_key" value = "''" }
     	globalVarDef = { name = "type_footer_text" value = "''" }
     	globalVarDef = { name = "javabasic_entities" value = "''" }
     	globalVarDef = { name = "classes_operation_implementation_strategy" value = "'${classes_operation_implementation_strategy}'" }
     	globalVarDef = { name = "javabasic_generateSerialVersionUID" value = "'${javabasic_generateSerialVersionUID}'" }
     	globalVarDef = { name = "use_overridden_equals_hashcode_toString" value = "'${use_overridden_equals_hashcode_toString}'" }
     	globalVarDef = { name = "java_version" value = "${java_version}" }
     		
     	fileEncoding = "UTF-8"
     	outlet = { path = "${runtimeProject}/src/generated/java" postprocessor = org.eclipse.xpand2.output.JavaBeautifier {} }
    
    	advice = "templates::advices::javaBasicAssociationAdvices"
    		
    	expand = "template::stateMachineEnumeration::Root FOR model"
     	}
    

    Snippet 19

    Creating the necessary domain objects for interoperability between the State Machine and the TechDemo Web Application…

    component = org.eclipse.xpand2.Generator : javaGenerator {
     	metaModel = umlMM
     	metaModel = datatype
     	metaModel = java
     	globalVarDef = { name = "list_set_property" value = "'${list.set.property}'" }
     	globalVarDef = { name = "type_header_text" value = "''" }
     	globalVarDef = { name = "annotation_source_key" value = "''" }
     	globalVarDef = { name = "type_footer_text" value = "''" }
     	globalVarDef = { name = "javabasic_entities" value = "''" }
     	globalVarDef = { name = "classes_operation_implementation_strategy" value = "'${classes_operation_implementation_strategy}'" }
     	globalVarDef = { name = "javabasic_generateSerialVersionUID" value = "'${javabasic_generateSerialVersionUID}'" }
     	globalVarDef = { name = "use_overridden_equals_hashcode_toString" value = "'${use_overridden_equals_hashcode_toString}'" }
     	globalVarDef = { name = "java_version" value = "${java_version}" }
    
     	fileEncoding = "UTF-8"
     	outlet = { path = "${runtimeProject}/src/generated/java" postprocessor = org.eclipse.xpand2.output.JavaBeautifier {} }
    
    	advice = "templates::advices::javaBasicAssociationAdvices"
    
    	expand = "template::Root::Root FOR model"
     }
    

    Snippet 20

    and creating the Spring State Machine…

     component = org.eclipse.xpand2.Generator : papyrusGenerator {
    	metaModel = umlMM
    	metaModel = swf_statemachine
    
    	fileEncoding = "UTF-8"
    	outlet = { path = "${runtimeProject}/src/generated/java" postprocessor = org.eclipse.xpand2.output.JavaBeautifier {} }
    
    	expand = "template::papyrusSSM::Root FOR model"
    }
    

    Snippet 21

    -Eclipse M2T and XPand:
    Now let’s look at what the Eclipse M2T XPand template for generating the Spring State Machine from UML looks like: papyrusSSM.xpt (this file lies in the swf_statemachine_xpand project).

    «IMPORT uml»
    
    «EXTENSION utility»
    «EXTENSION org::fornax::cartridges::uml2::javabasic::m2t::Helper»
    «EXTENSION templates::extensions::SwfStatemachineExtensions»
    
    «DEFINE Root FOR uml::Model»
    	«EXPAND Root FOREACH (List[uml::Package])ownedElement»
    «ENDDEFINE»
    
    /**
    * Creates all packages
    */
    «DEFINE Root FOR uml::Package»
    	«EXPAND Root FOREACH ownedType.typeSelect(uml::StateMachine)»
    	«EXPAND Root FOREACH nestedPackage»
    «ENDDEFINE»
    
    «DEFINE Root FOR uml::PackageImport»
    «ENDDEFINE»
    
    «DEFINE Root FOR uml::ProfileApplication»
    «ENDDEFINE»
    
    «DEFINE Root FOR uml::StateMachine»
    	«EXPAND BuildStateMachine»
    	«EXPAND ActionContainer»
    	«EXPAND GuardContainer»
    	«EXPAND ControlObjectLocator»
    «ENDDEFINE»
    
    «DEFINE BuildStateMachine FOR uml::StateMachine»
    	«FILE getFQNPackagePath() + "/configuration/" + this.name + "Configuration.java"»
    		package «getFQNPackageName()».configuration;
    
    		import org.salgar.statemachine.domain.ControlObject;
    		import org.salgar.swf_statemachine.«removeSM(this.name).toLowerCase()».controlobject.Abstract«removeSM(this.name)»ControlObject;
    		import «getFQNPackageName()».enumeration.event.«this.name»_EventEnumerationImpl;
    		import «getFQNPackageName()».enumeration.state.«this.name»_StateEnumerationImpl;
    
    		import org.springframework.beans.factory.annotation.Autowired;
    		import org.springframework.context.annotation.Configuration;
    		import org.springframework.context.annotation.Bean;
    		import org.springframework.messaging.Message;
    		import org.springframework.statemachine.config.EnableStateMachineFactory;
    		import org.springframework.statemachine.config.EnumStateMachineConfigurerAdapter;
    		import org.springframework.statemachine.config.builders.StateMachineConfigurationConfigurer;
    		import org.springframework.statemachine.config.builders.StateMachineTransitionConfigurer;
    		import org.springframework.statemachine.config.builders.StateMachineStateConfigurer;
    		import org.springframework.statemachine.StateContext;
    		import org.springframework.statemachine.action.Action;
    		import org.springframework.statemachine.listener.StateMachineListener;
    		import org.springframework.statemachine.listener.StateMachineListenerAdapter;
    		import org.springframework.statemachine.state.State;
    
    		import java.util.EnumSet;
    
    		import org.apache.log4j.Logger;
    
            «IF (this.ownedComment != null) && (!this.ownedComment.isEmpty)»
            /**
                «FOREACH this.ownedComment AS comment»
                    «comment.body»
                «ENDFOREACH»
            */
            «ENDIF»
    		@Configuration
    		@EnableStateMachineFactory(name="«this.name»")
    		public class «this.name»Configuration extends EnumStateMachineConfigurerAdapter<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl> {
    			private static final Logger LOG = Logger.getLogger(«this.name»Configuration.class);
    
    			@Autowired
    			private «this.name»ActionContainer «this.name.toFirstLower()»ActionContainer;
    
    			@Autowired
    			private «this.name»GuardContainer «this.name.toFirstLower()»GuardContainer;
    
    			@Autowired
    			private «this.name»ControlObjectLocator controlObjectLocator;
    
    			@Override
    			public void configure(StateMachineConfigurationConfigurer<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl>config)
    					throws Exception {
    				config
    					.withConfiguration()
    						.listener(listener());
    			}
    
    			@Override
    			public void configure(StateMachineStateConfigurer<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl> states) throws Exception {
    				states.withStates().initial(«this.name»_StateEnumerationImpl.«findIntialState(this.allOwnedElements().typeSelect(uml::Pseudostate)).name», initialState«this.name»Action())
    						.states(EnumSet.allOf(«this.name»_StateEnumerationImpl.class));
    			}
    
    			@Override
    			public void configure(StateMachineTransitionConfigurer<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl> transitions) throws Exception {
    				transitions
    				«FOREACH this.allOwnedElements().typeSelect(uml::State).getOutgoings() AS transition SEPARATOR '.and()'»
    					//STATE - «transition.source.name»
    					«IF (transition.ownedComment != null) && (!transition.ownedComment.isEmpty)»
    					/**
    					    «FOREACH transition.ownedComment AS comment»
                                «comment.body»
                            «ENDFOREACH»
    					*/
    					«ENDIF»
    					.withExternal()
    					.source(«this.name»_StateEnumerationImpl.«transition.source.name»)
    					.target(«this.name»_StateEnumerationImpl.«transition.target.name»)
    						.event(«this.name»_EventEnumerationImpl.«transition.trigger.first().name»)
    						«IF transition.guard != null»
    								.guard(«this.name.toFirstLower()»GuardContainer.get«transition.source.name»___«transition.target.name»_«transition.name»_«transition.guard.name»_guard().«transition.source.name»_«transition.target.name»_«transition.name»_«transition.guard.name»_guard())
    								«EXPAND GuardImpl(transition)»
    						«ENDIF»
    						«IF transition.effect != null»
    							.action(«this.name.toFirstLower()»ActionContainer.get«transition.source.name»___«transition.target.name»_«transition.name»_«transition.effect.name»_action().«transition.source.name»_«transition.target.name»_«transition.name»_«transition.effect.name»_action())
    							«EXPAND ActionImpl(transition)»
    						«ENDIF»
    				«ENDFOREACH»;
    			}
    
    			public StateMachineListener<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl> listener() {
                    return new StateMachineListenerAdapter<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl>() {
                        @Override
                        public void stateChanged(State<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl> from, State<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl> to) {
                            LOG.info("State change to " + to.getId());
                        }
    
                        @Override
                        public void eventNotAccepted(Message<«this.name»_EventEnumerationImpl> event) {
                            LOG.warn("The event " + event.toString() + " is not accepted!" );
                        }
                    };
                }
    
                @Bean
                public Action<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl> initialState«this.name»Action() {
                    return new Action<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl>() {
                        @Override
                        public void execute(StateContext<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl> context) {
                            Abstract«removeSM(this.name)»ControlObject controlObject = controlObjectLocator.getControlObject();
                            ((ControlObject)controlObject).resetStateMachine();
                            context.getExtendedState().getVariables().put("«this.name»ControlObject", controlObject);
                        }
                    };
                }
    		}
    	«ENDFILE»
    «ENDDEFINE»
    
    «DEFINE GuardImpl(uml::Transition transition) FOR uml::StateMachine»
        «FILE getFQNPackagePath() + "/configuration/" + transition.source.name.toLowerCase() + "/guard/" +  transition.source.name + "___" + transition.target.name + "_" + transition.name + "_" + transition.guard.name + "_guard" + ".java"»
            package «getFQNPackageName()».configuration.«transition.source.name.toLowerCase()».guard;
    
            import «getFQNPackageName()».enumeration.event.«this.name»_EventEnumerationImpl;
            import «getFQNPackageName()».enumeration.state.«this.name»_StateEnumerationImpl;
    
            import org.springframework.beans.factory.annotation.Autowired;
            import org.springframework.context.annotation.Configuration;
            import org.springframework.context.annotation.Bean;
            import org.springframework.statemachine.StateContext;
            import org.springframework.statemachine.guard.Guard;
    
    
            @Configuration
            public class «transition.source.name»___«transition.target.name»_«transition.name»_«transition.guard.name»_guard {
                @Autowired(required = false)
                private I«transition.source.name»_«transition.target.name»_«transition.name»_«transition.guard.name»Guard realImplementation;
    
                @Bean
                public Guard<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl> «transition.source.name»_«transition.target.name»_«transition.name»_«transition.guard.name»_guard() {
                    return new Guard<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl>() {
    
                        @Override
                        public boolean evaluate(StateContext<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl> context) {
                            if(realImplementation != null) {
                                return realImplementation.evaluate(context);
                            }
                            return false;
                        }
                    };
                }
                public interface I«transition.source.name»_«transition.target.name»_«transition.name»_«transition.guard.name»Guard {
                    boolean evaluate(StateContext<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl> context);
                }
            }
        «ENDFILE»
    «ENDDEFINE»
    
    «DEFINE ActionImpl(uml::Transition transition) FOR uml::StateMachine»
        «FILE getFQNPackagePath() +"/configuration/" + transition.source.name.toLowerCase() + "/action/" +  transition.source.name + "___" + transition.target.name + "_" + transition.name + "_" + transition.effect.name + "_action" + ".java"»
            package «getFQNPackageName()».configuration.«transition.source.name.toLowerCase()».action;
    
            import «getFQNPackageName()».enumeration.event.«this.name»_EventEnumerationImpl;
            import «getFQNPackageName()».enumeration.state.«this.name»_StateEnumerationImpl;
    
            import org.springframework.beans.factory.annotation.Autowired;
            import org.springframework.context.annotation.Configuration;
            import org.springframework.context.annotation.Bean;
            import org.springframework.statemachine.StateContext;
            import org.springframework.statemachine.action.Action;
    
            import org.apache.log4j.Logger;
    
            @Configuration
            public class «transition.source.name»___«transition.target.name»_«transition.name»_«transition.effect.name»_action {
                private static final Logger LOG = Logger.getLogger(«transition.source.name»___«transition.target.name»_«transition.name»_«transition.effect.name»_action.class);
    
                @Autowired(required = false)
                private I«transition.source.name»_«transition.target.name»_«transition.name»_«transition.effect.name»Action realImplementation;
    
                @Bean
                public Action<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl> «transition.source.name»_«transition.target.name»_«transition.name»_«transition.effect.name»_action() {
                    return new Action<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl>() {
    
                        @Override
                        public void execute(StateContext<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl> context) {
                            if (realImplementation != null) {
                                realImplementation.execute(context);
                            } else {
                                LOG.warn("In the UML Model for this Action the Steorotype defines an implementation but Spring could not find a concrete implementation class!");
                            }
                        }
                    };
                }
                public interface I«transition.source.name»_«transition.target.name»_«transition.name»_«transition.effect.name»Action {
                    void execute(StateContext<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl> context);
                }
            }
        «ENDFILE»
    «ENDDEFINE»
    
    «DEFINE ActionContainer FOR uml::StateMachine»
        «FILE getFQNPackagePath() + "/configuration/" +  this.name + "ActionContainer.java"»
            package «getFQNPackageName()».configuration;
    
            import org.springframework.beans.factory.annotation.Autowired;
            import org.springframework.context.annotation.Configuration;
    
            @Configuration
            public class «this.name»ActionContainer {
                «FOREACH this.allOwnedElements().typeSelect(uml::State) AS state»
                    «FOREACH state.getOutgoings() AS transition»
                    	«IF transition.effect != null»
    						@Autowired
    						private «getFQNPackageName()».configuration.«state.name.toLowerCase()».action.«transition.source.name»___«transition.target.name»_«transition.name»_«transition.effect.name»_action «transition.source.name.toLowerCase()»___«transition.target.name»_«transition.name»_«transition.effect.name»_action;
    
    						public «getFQNPackageName()».configuration.«state.name.toLowerCase()».action.«transition.source.name»___«transition.target.name»_«transition.name»_«transition.effect.name»_action get«transition.source.name»___«transition.target.name»_«transition.name»_«transition.effect.name»_action() {
    							return «transition.source.name.toLowerCase()»___«transition.target.name»_«transition.name»_«transition.effect.name»_action;
    						}
    					«ENDIF»
                    «ENDFOREACH»
                «ENDFOREACH»
            }
        «ENDFILE»
    «ENDDEFINE»
    
    «DEFINE GuardContainer FOR uml::StateMachine»
        «FILE getFQNPackagePath() + "/configuration/" +  this.name + "GuardContainer.java"»
            package «getFQNPackageName()».configuration;
    
            import org.springframework.beans.factory.annotation.Autowired;
            import org.springframework.context.annotation.Configuration;
    
            @Configuration
            public class «this.name»GuardContainer {
                «FOREACH this.allOwnedElements().typeSelect(uml::State) AS state»
                    «FOREACH state.getOutgoings() AS transition»
                    	«IF transition.guard != null»
    						@Autowired
    						private «getFQNPackageName()».configuration.«state.name.toLowerCase()».guard.«transition.source.name»___«transition.target.name»_«transition.name»_«transition.guard.name»_guard «transition.source.name.toLowerCase()»___«transition.target.name»_«transition.name»_«transition.guard.name»_guard;
    
    						public «getFQNPackageName()».configuration.«state.name.toLowerCase()».guard.«transition.source.name»___«transition.target.name»_«transition.name»_«transition.guard.name»_guard get«transition.source.name»___«transition.target.name»_«transition.name»_«transition.guard.name»_guard() {
    							return «transition.source.name.toLowerCase()»___«transition.target.name»_«transition.name»_«transition.guard.name»_guard;
    						}
    					«ENDIF»
                    «ENDFOREACH»
                «ENDFOREACH»
            }
        «ENDFILE»
    «ENDDEFINE»
    
    «DEFINE ControlObjectLocator FOR uml::StateMachine»
        «FILE getFQNPackagePath() + "/configuration/" + this.name + "ControlObjectLocator.java"»
            package «getFQNPackageName()».configuration;
    
            import org.salgar.swf_statemachine.«removeSM(this.name).toLowerCase()».controlobject.Abstract«removeSM(this.name)»ControlObject;
            import org.springframework.beans.factory.annotation.Lookup;
            import org.springframework.stereotype.Component;
    
            @Component
            public class «this.name»ControlObjectLocator {
    
                @Lookup
                public Abstract«removeSM(this.name)»ControlObject getControlObject() {
                    return null;
                }
            }
        «ENDFILE»
    «ENDDEFINE»
    

    Snippet 22

    If we divide it into small pieces…

    «DEFINE Root FOR uml::Model»
    	«EXPAND Root FOREACH (List[uml::Package])ownedElement»
    «ENDDEFINE»
    

    Snippet 23
    This is the entry point of an XPand template: it accepts a UML Model and iterates over its elements; here we are iterating over all the UML Packages…

    «DEFINE Root FOR uml::Package»
    	«EXPAND Root FOREACH ownedType.typeSelect(uml::StateMachine)»
    	«EXPAND Root FOREACH nestedPackage»
    «ENDDEFINE»
    

    Snippet 24
    Here we are recursively searching all UML Packages and expanding UML State Machines whenever we encounter one…

    «DEFINE Root FOR uml::StateMachine»
    	«EXPAND BuildStateMachine»
    	«EXPAND ActionContainer»
    	«EXPAND GuardContainer»
    	«EXPAND ControlObjectLocator»
    «ENDDEFINE»
    

    Snippet 25
    This part expands the core State Machine, the classes that contain the Actions and Guards for autowired injection, and the Control Objects.

    «DEFINE BuildStateMachine FOR uml::StateMachine»
    	«FILE getFQNPackagePath() + "/configuration/" + this.name + "Configuration.java"»
    		package «getFQNPackageName()».configuration;
    
    		import org.salgar.statemachine.domain.ControlObject;
    		import org.salgar.swf_statemachine.«removeSM(this.name).toLowerCase()».controlobject.Abstract«removeSM(this.name)»ControlObject;
    		import «getFQNPackageName()».enumeration.event.«this.name»_EventEnumerationImpl;
    		import «getFQNPackageName()».enumeration.state.«this.name»_StateEnumerationImpl;
    
    		import org.springframework.beans.factory.annotation.Autowired;
    		import org.springframework.context.annotation.Configuration;
    		import org.springframework.context.annotation.Bean;
    		import org.springframework.messaging.Message;
    		import org.springframework.statemachine.config.EnableStateMachineFactory;
    		import org.springframework.statemachine.config.EnumStateMachineConfigurerAdapter;
    		import org.springframework.statemachine.config.builders.StateMachineConfigurationConfigurer;
    		import org.springframework.statemachine.config.builders.StateMachineTransitionConfigurer;
    		import org.springframework.statemachine.config.builders.StateMachineStateConfigurer;
    		import org.springframework.statemachine.StateContext;
    		import org.springframework.statemachine.action.Action;
    		import org.springframework.statemachine.listener.StateMachineListener;
    		import org.springframework.statemachine.listener.StateMachineListenerAdapter;
    		import org.springframework.statemachine.state.State;
    
    		import java.util.EnumSet;
    
    		import org.apache.log4j.Logger;
    
            «IF (this.ownedComment != null) && (!this.ownedComment.isEmpty)»
            /**
                «FOREACH this.ownedComment AS comment»
                    «comment.body»
                «ENDFOREACH»
            */
            «ENDIF»
    		@Configuration
    		@EnableStateMachineFactory(name="«this.name»")
    		public class «this.name»Configuration extends EnumStateMachineConfigurerAdapter<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl> {
    			private static final Logger LOG = Logger.getLogger(«this.name»Configuration.class);
    
    			@Autowired
    			private «this.name»ActionContainer «this.name.toFirstLower()»ActionContainer;
    
    			@Autowired
    			private «this.name»GuardContainer «this.name.toFirstLower()»GuardContainer;
    
    			@Autowired
    			private «this.name»ControlObjectLocator controlObjectLocator;
    
    			@Override
    			public void configure(StateMachineConfigurationConfigurer<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl>config)
    					throws Exception {
    				config
    					.withConfiguration()
    						.listener(listener());
    			}
    
    			@Override
    			public void configure(StateMachineStateConfigurer<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl> states) throws Exception {
    				states.withStates().initial(«this.name»_StateEnumerationImpl.«findIntialState(this.allOwnedElements().typeSelect(uml::Pseudostate)).name», initialState«this.name»Action())
    						.states(EnumSet.allOf(«this.name»_StateEnumerationImpl.class));
    			}
    
    			@Override
    			public void configure(StateMachineTransitionConfigurer<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl> transitions) throws Exception {
    				transitions
    				«FOREACH this.allOwnedElements().typeSelect(uml::State).getOutgoings() AS transition SEPARATOR '.and()'»
    					//STATE - «transition.source.name»
    					«IF (transition.ownedComment != null) && (!transition.ownedComment.isEmpty)»
    					/**
    					    «FOREACH transition.ownedComment AS comment»
                                «comment.body»
                            «ENDFOREACH»
    					*/
    					«ENDIF»
    					.withExternal()
    					.source(«this.name»_StateEnumerationImpl.«transition.source.name»)
    					.target(«this.name»_StateEnumerationImpl.«transition.target.name»)
    						.event(«this.name»_EventEnumerationImpl.«transition.trigger.first().name»)
    						«IF transition.guard != null»
    								.guard(«this.name.toFirstLower()»GuardContainer.get«transition.source.name»___«transition.target.name»_«transition.name»_«transition.guard.name»_guard().«transition.source.name»_«transition.target.name»_«transition.name»_«transition.guard.name»_guard())
    								«EXPAND GuardImpl(transition)»
    						«ENDIF»
    						«IF transition.effect != null»
    							.action(«this.name.toFirstLower()»ActionContainer.get«transition.source.name»___«transition.target.name»_«transition.name»_«transition.effect.name»_action().«transition.source.name»_«transition.target.name»_«transition.name»_«transition.effect.name»_action())
    							«EXPAND ActionImpl(transition)»
    						«ENDIF»
    				«ENDFOREACH»;
    			}
    
    			public StateMachineListener<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl> listener() {
                    return new StateMachineListenerAdapter<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl>() {
                        @Override
                        public void stateChanged(State<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl> from, State<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl> to) {
                            LOG.info("State change to " + to.getId());
                        }
    
                        @Override
                        public void eventNotAccepted(Message<«this.name»_EventEnumerationImpl> event) {
                            LOG.warn("The event " + event.toString() + " is not accepted!" );
                        }
                    };
                }
    
                @Bean
                public Action<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl> initialState«this.name»Action() {
                    return new Action<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl>() {
                        @Override
                        public void execute(StateContext<«this.name»_StateEnumerationImpl, «this.name»_EventEnumerationImpl> context) {
                            Abstract«removeSM(this.name)»ControlObject controlObject = controlObjectLocator.getControlObject();
                            ((ControlObject)controlObject).resetStateMachine();
                            context.getExtendedState().getVariables().put("«this.name»ControlObject", controlObject);
                        }
                    };
                }
    		}
    	«ENDFILE»
    «ENDDEFINE»
    

    Snippet 26
    This creates the core of the Spring State Machine configuration; it is called for the State Machine previously identified in the UML Model. It determines the UML Packages and the State Machine name; its States in ‘public void configure(StateMachineStateConfigurer states)‘; its transitions in ‘«FOREACH this.allOwnedElements().typeSelect(uml::State).getOutgoings() AS transition SEPARATOR ‘.and()’»‘; its guards in ‘«IF transition.guard != null»‘ (you have to create a Guard instance on the Transition in Papyrus if you want a custom Guard implementation); its actions in ‘«IF transition.effect != null»‘ (you have to create an Effect instance on the Transition in Papyrus if you want a custom Activity implementation); listeners for ‘State Changed‘ and ‘Event Not Accepted‘; and finally an initial state action, ‘public Action initialState«this.name»Action()‘, which initializes the control objects for the state machine.

    I will not clutter this blog with the details of how the Guards and Actions are generated; if people demand further explanation I can extend it here, but with the above information it should be understandable. The one practical point worth showing is how an application developer plugs a concrete implementation into the generated skeletons, as the sketch below illustrates.
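    The generated guard (Snippet 22) autowires an optional ‘realImplementation’ and evaluates to false when none is present, so supplying custom behavior is only a matter of implementing the generated nested interface as a Spring bean. Below is a minimal sketch of such an implementation; every class, package and variable name is hypothetical, mangled the way the templates mangle the UML source/target/transition/guard names.

    package org.salgar.swf_statemachine.techdemo.guard; // hypothetical package

    // Imports of the generated enumerations and configuration class are assumed.
    import org.springframework.statemachine.StateContext;
    import org.springframework.stereotype.Component;

    @Component
    public class CustomerFoundGuard implements
            Search___Found_onResult_customerFound_guard.ISearch_Found_onResult_customerFoundGuard {

        @Override
        public boolean evaluate(StateContext<CustomerSearchSM_StateEnumerationImpl,
                CustomerSearchSM_EventEnumerationImpl> context) {
            // The generated guard delegates here via @Autowired(required = false);
            // without such a bean in the application context it evaluates to false.
            Integer resultCount = context.getExtendedState().get("resultCount", Integer.class);
            return resultCount != null && resultCount > 0;
        }
    }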

    -Spring Web Flow and integration with Spring State Machine:
    Integrating Spring Web Flow with the Spring State Machine was a fairly seamless task; it happens in the swf_statemachine_techdemo project.

    I had to change the flow definition (swf_statemachine/swf_statemachine_techdemo/src/main/webapp/WEB-INF/flows/customersearch/customersearch.xml) from the previous blog slightly to initialize the Spring State Machine.

    <?xml version="1.0" encoding="UTF-8"?>
    <flow xmlns="http://www.springframework.org/schema/webflow"
    	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    	xsi:schemaLocation="http://www.springframework.org/schema/webflow
            http://www.springframework.org/schema/webflow/spring-webflow-2.0.xsd">
            
        <on-start>
        	<evaluate expression="stateMachineContainer.startStateMachine()"/>
        </on-start>
    	<view-state id="customerSearch">
    		<transition on="onStartCustomerSearch" history="invalidate">
    			<evaluate expression="customerSearchInputBB.searchCustomer()"/>
    		</transition>
    		<transition on="onCustomerAuthenticated" history="invalidate">
    			<evaluate expression="customerSearchAuthenticationBB.customerGivesAuthentication()"></evaluate>
    		</transition>
    		<transition on="onCustomerJoined" history="invalidate">
    			<evaluate expression="customerSearchJoinBB.customerJoined()"></evaluate>
    		</transition>		
    	</view-state>
    </flow>
    

    Snippet 27
    with the ‘on-start‘ event of the Spring Web Flow we initialize our Spring State Machine via ‘stateMachineContainer.startStateMachine()‘. StateMachineContainer is a class I use to introduce the Spring Scope concept to the Spring State Machine; in its current version the Spring State Machine does not respect Spring Scopes (support is in development), but I want the CustomerSearchSM State Machine to run in flow scope, because it should be valid only for this Flow. If the end user starts another Flow, it should receive another instance of the CustomerSearchSM State Machine; this is especially important for the multi-tab behavior of modern browsers, so the state of two or more browser tabs does not mix.

    For this reason,

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
    			 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context"
    			 xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">
    
    	<context:component-scan base-package="org.salgar" />
    
    	<bean id="stateMachineContainer" class="org.salgar.swf_statemachine.techdemo.web.customersearch.container.StateMachineContainer" scope="flow" />
    </beans>
    

    Snippet 28

    stateMachineContainer is defined in flow scope here so that it keeps a reference to the State Machine for the duration of the Flow, and the container obtains its State Machine instances via the Factory pattern; a sketch of such a container follows below.
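    I won't paste the whole class, but a minimal sketch of what the container does, assuming it simply asks the factory created by ‘@EnableStateMachineFactory(name="CustomerSearchSM")‘ for a new machine, could look like this.

    package org.salgar.swf_statemachine.techdemo.web.customersearch.container;

    // Imports of the generated enumerations are assumed; minimal sketch,
    // the real class lives in the project.
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.statemachine.StateMachine;
    import org.springframework.statemachine.config.StateMachineFactory;

    public class StateMachineContainer {
        @Autowired
        private StateMachineFactory<CustomerSearchSM_StateEnumerationImpl,
                CustomerSearchSM_EventEnumerationImpl> stateMachineFactory;

        private StateMachine<CustomerSearchSM_StateEnumerationImpl,
                CustomerSearchSM_EventEnumerationImpl> stateMachine;

        // Called from the flow's <on-start> element; because the container lives
        // in flow scope, every flow execution (e.g. every browser tab) gets its
        // own state machine instance.
        public void startStateMachine() {
            stateMachine = stateMachineFactory.getStateMachine();
            stateMachine.start();
        }

        public StateMachine<CustomerSearchSM_StateEnumerationImpl,
                CustomerSearchSM_EventEnumerationImpl> getStateMachine() {
            return stateMachine;
        }
    }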

    A word of caution here: it is documented that Action and Guard bean instances are shared across all instances of one Spring State Machine. So please avoid instance variables in your Action and Guard implementations; the sketch below shows the alternative.
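    To make that concrete, here is a hypothetical action (‘retryCount’ is a made-up variable) that keeps its per-machine data in the extended state, the same place the generated initial-state action in Snippet 26 puts the control object, instead of in a field of the shared bean.

    // Imports of the generated enumerations are assumed.
    import org.springframework.statemachine.StateContext;
    import org.springframework.statemachine.action.Action;

    public class RetryCountingAction implements
            Action<CustomerSearchSM_StateEnumerationImpl, CustomerSearchSM_EventEnumerationImpl> {

        // DON'T: "private int retryCount;" -- such a field would be shared by
        // every state machine instance created from the same factory.

        @Override
        public void execute(StateContext<CustomerSearchSM_StateEnumerationImpl,
                CustomerSearchSM_EventEnumerationImpl> context) {
            // DO: keep per-machine data in the extended state of the machine itself.
            Integer retryCount = context.getExtendedState().get("retryCount", Integer.class);
            context.getExtendedState().getVariables()
                    .put("retryCount", retryCount == null ? 1 : retryCount + 1);
        }
    }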

    -Preparing Project Environment
    You can get the project from GitHub with the following command.

    git clone -b papyrus git@github.com:mehmetsalgar/swf_statemachine.git

    Depending on where you cloned the project, go to the following directory

    swf_statemachine/swf_statemachine

    and execute the following command to start the Maven build

    mvn clean install -Pfull-build

    I have to point something out here: since Eclipse M2T has dependencies on some Eclipse plugins, we have to use the Maven Tycho plugin, which prepares some artifacts for us. For this reason, we have to build with the ‘-Pfull-build‘ option. Once the Maven build has run with this option and installed the artifacts into the local Maven repository, subsequent runs can use ‘mvn clean install -o‘ to build the project, which runs in offline mode and is considerably faster.

    To run the web application, you have to install Apache Tomcat (it might run in other containers too, but I didn't test that); you can download Tomcat here.

    Apache Tomcat

    For the Comet/Atmosphere functionality (Primefaces push/socket functionality), you have to configure something in the Tomcat setup, which is explained in detail in my previous blog.

    When the Maven build is complete, you can copy ‘swf_statemachine_techdemo.war’ from the swf_statemachine_techdemo/target directory to your $TOMCAT_HOME/webapps directory and start Tomcat with the command ‘./catalina.sh start’ from the $TOMCAT_HOME/bin directory.

    After you successfully deploy the application and call the URL http://localhost:8080/swf_statemachine_techdemo-1.0-SNAPSHOT/spring/customersearch the application should look like this.

    Conclusion:
    If you read the previous version of my blog, the weak point was always the custom implementation of the State Machine. Now, with the existence of the Spring State Machine, this weak point does not exist anymore.

    The second point: the State Machine concept is so unfamiliar to Java developers that it looks eccentric to them, and it creates a reluctance to use it, because for years they have been told it is too complex. With the current Agile trends in software development, it is a valuable tool that makes it possible for your software to learn your business cases iteratively. Considering that, in the current push to reduce IT costs, more and more enterprises go with Agile methods and ever scarcer documentation, it is vital to have such a tool.

    I think the Spring organisation saw this fact too, and their putting their name on a State Machine framework can give real traction to the ideas we discussed here.


    MWE2 and UML

    A while ago I wrote a blog about Extremely Ajaxified Web Applications, which tried to explain the challenges of modern web application development in the days when everything is programmed asynchronously, and how this puts a strain on classical development techniques.

    I proposed a Model Driven Approach with Spring Web Flow, Primefaces and State Machines. Part of the solution is to produce a runnable State Machine from UML diagrams.

    In the original blog I used the openArchitectureWare framework to create the Spring configuration and Java code. At the time I wrote that blog, openArchitectureWare was in the process of transferring its development to the Eclipse Platform and was in its incubation period under the name M2T, at version 0.72.

    Lately, to test some new ideas with the framework, I tried to build the project and discovered, to my shock, that all the artifacts from version 0.72 had disappeared from the Maven repositories. I thought, no problem, let's find the new versions of the artifacts and make the build runnable.

    Ohh boy, I was wrong. I don't know whether it is intentional or not, but I think the Eclipse team was trying to demotivate people from using M2T/Xpand with UML models: some of the very critical components (mainly the org.eclipse.xtend.typesystem.uml2 artifact) were not published to any Maven repository and were only available as OSGI dependencies.

    My old build structure could not deal with this, so I had to make extensive changes. This blog is about those changes, because during my struggles I couldn't find anybody else mentioning this problem or proposing a solution to it. All the resources I found on the internet propose solutions for the case where all the artifacts are available in Maven repositories or are built as Eclipse plugins.

    Personally, I want my framework to be accessible to everybody, so people should be able to just download it from GitHub (with this command: git clone -b MWE2 git@github.com:mehmetsalgar/swf_statemachine.git) and run the Maven project without extensive configuration. That proved to be quite a challenge.

    There are some examples on the internet of how to configure an MWE2 workflow for UML, but none is newer than 3 years; it seems no one has dealt with the problem lately, since as long as the dependencies are available in a Maven repository everything works seamlessly.

    The new pattern is to use the exec-maven-plugin to run an MWE2 workflow.

        <plugin>
             <groupId>org.codehaus.mojo</groupId>
             <artifactId>exec-maven-plugin</artifactId>
             <executions>
              <execution>
                <phase>generate-sources</phase>
                <goals>
                  <goal>java</goal>
                </goals>
               </execution>
              </executions>
            <configuration>
              <includeProjectDependencies>false</includeProjectDependencies>
              <includePluginDependencies>true</includePluginDependencies>
              <cleanupDaemonThreads>false</cleanupDaemonThreads>
              <stopUnresponsiveDaemonThreads>true</stopUnresponsiveDaemonThreads>
              <classpathScope>compile</classpathScope>
              <mainClass>org.eclipse.emf.mwe2.launch.runtime.Mwe2Launcher</mainClass>
              <additionalClasspathElements>
                <additionalClasspathElement>src/main/resources</additionalClasspathElement>
              </additionalClasspathElements>
              <arguments>
                <argument>file://${project.basedir}/src/main/resources/workflow.mwe2</argument>
                <argument>
                  -p
                </argument>
                <argument>
                  runtimeProject=${project.basedir}
                </argument>
              </arguments>
            </configuration>
            <dependencies>
              <dependency>
                <groupId>org.salgar.swf_statemachine</groupId>
                <artifactId>swf_statemachine_eclipse_dependencies</artifactId>
                <version>${project.version}</version>
                <classifier>repackaged</classifier>
              </dependency>
              <dependency>
                <groupId>org.fornax.cartridges</groupId>
                <artifactId>fornax-cartridges-uml2-javabasic-generator</artifactId>
                <version>${fornax.javabasic.cartridge.version}</version>
                <exclusions>
                  <exclusion>
                    <groupId>org.eclipse.uml2.uml</groupId>
                    <artifactId>resources</artifactId>
                  </exclusion>
                  .......
                </exclusions>
              </dependency>
              <dependency>
                <groupId>org.salgar.swf_statemachine</groupId>
                <artifactId>swf_statemachine_fornax_extension</artifactId>
                <version>${project.version}</version>
                <exclusions>
                  <exclusion>
                    <groupId>org.eclipse.xtext</groupId>
                    <artifactId>org.eclipse.xtext.xtext</artifactId>
                  </exclusion>
                </exclusions>
              </dependency>
            </dependencies>
          </plugin>
    

    The exec-maven-plugin will run the class org.eclipse.emf.mwe2.launch.runtime.Mwe2Launcher, which is actually the core of the fornax-oaw-m2-plugin.
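    Under the hood this is nothing more than calling the launcher's main method with the workflow file and the ‘-p’ properties. Here is a sketch of the equivalent direct invocation, with placeholder paths standing in for ‘${project.basedir}’, which Maven normally substitutes:

    import org.eclipse.emf.mwe2.launch.runtime.Mwe2Launcher;

    public class RunWorkflow {
        public static void main(String[] args) {
            // The same arguments the exec-maven-plugin passes to the launcher.
            Mwe2Launcher.main(new String[] {
                "file:///path/to/swf_statemachine_sm_model/src/main/resources/workflow.mwe2",
                "-p",
                "runtimeProject=/path/to/swf_statemachine_sm_model"
            });
        }
    }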

    It accepts the name of the workflow file as a parameter.

                <argument>file://${project.basedir}/src/main/resources/workflow.mwe2</argument>
                <argument>
                  -p
                </argument>
                <argument>
                  runtimeProject=${project.basedir}
                </argument>
    

    and executes it with the help of the dependency information provided for the classes the workflow needs.

           <dependencies>
              .......
              <dependency>
                <groupId>org.fornax.cartridges</groupId>
                <artifactId>fornax-cartridges-uml2-javabasic-generator</artifactId>
                <version>${fornax.javabasic.cartridge.version}</version>
                <exclusions>
                  <exclusion>
                    <groupId>org.eclipse.uml2.uml</groupId>
                    <artifactId>resources</artifactId>
                  </exclusion>
                  ....
                </exclusions>
              </dependency>
              ........
            </dependencies>
    

    But it has its own hiccups: for example, if your workflow also uses M2T templates that live in the same Maven project, it will not be able to see them on its classpath. For some odd reason it does not include the artifacts from the resource paths of the Maven project, even when the argument

    <includeProjectDependencies>true</includeProjectDependencies>
    

    is set to true.

    To fix this problem I had to add the following configuration to the plugin.

              <additionalClasspathElements>
                <additionalClasspathElement>src/main/resources</additionalClasspathElement>
              </additionalClasspathElements>
    

    This ensures that artifacts from the same project are also visible.

    Now I hear you saying: seems straightforward, what is the big deal? Everything is fine and dandy as long as all the dependencies are in the Maven repositories. For some odd reason, the org.eclipse.xtend.typesystem.uml2 artifact is not. If you try to add it as a dependency to the exec-maven-plugin, your build will fail.

    This artifact is available, at least as far as I can find, only in Eclipse P2/OSGI repositories, and unfortunately Maven out of the box can't discover those. For this we need the help of another Maven plugin, the tycho-maven-plugin, whose whole purpose is the integration of Eclipse RCP projects into Maven projects.

    This plugin can scan the MANIFEST.MF files of OSGI/Eclipse RCP projects and discover their dependencies from Eclipse p2 repositories. For this we must first define these p2 repositories in our project, as follows.

    For my purposes I needed the following ones.

       <repository>
          <id>4_6.repo.eclipse.org</id>
          <name>Project Repository - Releases</name>
          <url>http://download.eclipse.org/eclipse/updates/4.6</url>
          <layout>p2</layout>
        </repository>
        <repository>
          <id>4_5.repo.eclipse.org</id>
          <name>Project Repository - Releases</name>
          <url>http://download.eclipse.org/eclipse/updates/4.5</url>
          <layout>p2</layout>
        </repository>
        <repository>
          <id>indigo.repo.eclipse.org</id>
          <name>Project Repository - Releases</name>
          <url>http://download.eclipse.org/releases/indigo/</url>
          <layout>p2</layout>
        </repository>
        <repository>
          <id>xtext</id>
          <url>http://download.eclipse.org/modeling/tmf/xtext/updates/composite/releases/</url>
          <layout>p2</layout>
        </repository>
    

    Please pay attention to the ‘p2’ layout element: it tells Maven that this is a p2 repository, which will not be evaluated for normal Maven dependencies but only for certain types of projects, e.g. eclipse-plugin, eclipse-feature, etc.

    Ok, so we need dependencies that do not exist in Maven repositories; we have to take them from p2 repositories, and we need a plugin to do this for us, and this plugin is called tycho-maven-plugin. When this plugin sees the Maven packaging type eclipse-plugin, eclipse-feature, etc., it will look at the MANIFEST.MF file (which is the OSGI definition of the artifact, à la Maven's pom) that lies in the META-INF directory and try to locate the stated dependencies in the p2 repositories.

    This is independent of Maven's dependency-resolution mechanism.

    So if we look at my previously stated problem, that the org.eclipse.xtend.typesystem.uml2 artifact is not in the Maven repositories but in p2 repositories, it is clear that we have to use Tycho and the eclipse-plugin packaging type. If you do this, the exec-maven-plugin will work correctly and successfully execute the MWE2 workflow, provided we set ‘includePluginDependencies’ to ‘true’ in the exec-maven-plugin configuration. This ensures that the dependencies resolved by the Tycho plugin are visible to the exec-maven-plugin.

    So you may ask, what is the problem then? First of all, the final artifact that I want to create in my feasibility project is a web application, so the artifacts I create with my Model Driven approach should be included in the web application's war artifact. An eclipse-plugin artifact will cause me problems when included in a web application.

    We have a dilemma here: we need some dependencies from a p2 repository, so we have to use the eclipse-plugin packaging type, but I also have to put this artifact in a war archive, so I need the jar packaging type. I spent a considerable amount of time on this paradox, but I could not solve it the way described so far, so I had to try something else.

    There was a solution proposed on the internet: if a project needs OSGI/Eclipse RCP dependencies, build an interim project with packaging type eclipse-plugin and let the Tycho plugin fetch the dependencies, then use the maven-dependency-plugin to unpack all the classes in the downloaded p2 dependencies and package them again with the assembly plugin, so that the model-creation projects using MWE2 can later use those.

    So, for the people coming here from the other blog: I created a new project, ‘swf_statemachine_eclipse_dependencies’, which states in its MANIFEST.MF file which dependencies are necessary.

    Manifest-Version: 1.0
    Bundle-ManifestVersion: 2
    Bundle-Name: org.salgar.swf_statemachine_eclipse_dependencies
    Bundle-SymbolicName: swf_statemachine_eclipse_dependencies; singleton:=true
    Bundle-Version: 1.0.0.qualifier
    Require-Bundle: org.eclipse.xtext;bundle-version="[2.8.3,2.8.4)";visibility:=reexport,
     org.eclipse.xtext.xbase;bundle-version="[2.8.3,2.8.4)";resolution:=optional;visibility:=reexport,
     org.eclipse.xtext.generator;bundle-version="[2.8.3,2.8.4)";resolution:=optional,
     org.apache.log4j;bundle-version="1.2.15";visibility:=reexport,
     org.apache.commons.logging;bundle-version="1.0.4";resolution:=optional;visibility:=reexport,
     org.eclipse.emf.codegen.ecore;resolution:=optional,
     org.eclipse.emf.mwe.utils;resolution:=optional,
     org.eclipse.emf.mwe2.launch;resolution:=optional,
     org.eclipse.uml2.uml,
     org.eclipse.xtext.util,
     org.eclipse.emf.ecore,
     org.eclipse.emf.common,
     org.antlr.runtime,
     org.eclipse.xtext.common.types,
     org.eclipse.uml2.codegen.ecore;bundle-version="1.7.0",
     org.eclipse.xtend.typesystem.uml2,
     org.eclipse.xpand
    Bundle-Vendor: salgar
    

    and the maven-dependency-plugin, which unpacks the dependencies

        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-dependency-plugin</artifactId>
            <executions>
              <execution>
                <id>unpack-dependencies</id>
                <phase>package</phase>
                <goals>
                  <goal>unpack-dependencies</goal>
                </goals>
                <configuration>
                  <outputDirectory>${project.build.directory}/dependency</outputDirectory>
                  <overWriteReleases>false</overWriteReleases>
                  <overWriteSnapshots>true</overWriteSnapshots>
                </configuration>
              </execution>
            </executions>
          </plugin>
    

    and the maven-assembly-plugin, which packs all the classes again.

         <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-assembly-plugin</artifactId>
            <configuration>
              <descriptors>
                <descriptor>src/main/assembly/repackaged.xml</descriptor>
              </descriptors>
            </configuration>
            <executions>
              <execution>
                <id>make-assembly</id>
                <phase>package</phase>
                <goals>
                  <goal>single</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
    

    and the assembly configuration repackaged.xml.

    <assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0" 
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0 http://maven.apache.org/xsd/assembly-1.1.0.xsd">
        <id>repackaged</id>
        <formats>
            <format>jar</format>
        </formats>
        <includeBaseDirectory>false</includeBaseDirectory>
        <fileSets>
          <fileSet>
            <directory>${project.build.directory}/dependency/</directory>
            <outputDirectory>/</outputDirectory>
            <useDefaultExcludes>true</useDefaultExcludes>
          </fileSet>
        </fileSets>
    </assembly>
    

    This creates an artifact with the classifier ‘repackaged’; if we reference it with the classifier ‘repackaged’, Maven will be able to find it as a dependency for the exec plugin.

         <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>exec-maven-plugin</artifactId>
            <executions>
              <execution>
                <phase>generate-sources</phase>
                <goals>
                  <goal>java</goal>
                </goals>
              </execution>
            </executions>
            <configuration>
              <includeProjectDependencies>false</includeProjectDependencies>
              <includePluginDependencies>true</includePluginDependencies>
              <cleanupDaemonThreads>false</cleanupDaemonThreads>
              <stopUnresponsiveDaemonThreads>true</stopUnresponsiveDaemonThreads>
              <classpathScope>compile</classpathScope>
              <mainClass>org.eclipse.emf.mwe2.launch.runtime.Mwe2Launcher</mainClass>
              <additionalClasspathElements>
                <additionalClasspathElement>src/main/resources</additionalClasspathElement>
              </additionalClasspathElements>
              <arguments>
                <argument>file://${project.basedir}/src/main/resources/workflow.mwe2</argument>
                <argument>
                  -p
                </argument>
                <argument>
                  runtimeProject=${project.basedir}
                </argument>
              </arguments>
            </configuration>
            <dependencies>
              <dependency>
                <groupId>org.salgar.swf_statemachine</groupId>
                <artifactId>swf_statemachine_eclipse_dependencies</artifactId>
                <version>${project.version}</version>
                <classifier>repackaged</classifier>
              </dependency>
              .........
          </plugin>
    

    With this we solved our problem related to the ‘org.eclipse.xtend.typesystem.uml2’ artifact.

    There are also substantial changes in how the MWE2 workflow is configured compared to my previous blog entry, so I would like to mention those here instead of modifying the original blog.

    My MWE2 workflow looks like the following at the moment; it now has a real similarity to Java code, or maybe to Groovy.

    module swf_statemachine_sm_model
    
    import org.eclipse.emf.mwe.utils.*
    import org.eclipse.xtext.generator.*
    import org.eclipse.xtext.ui.generator.*
    import org.eclipse.xpand2.*
    
    import org.salgar.swf_statemachine.uml2.model.ExtendedUML2Metamodel
    
    import org.salgar.swf_statemachine.uml2.Setup
    import org.eclipse.xtend.typesystem.uml2.*
    import org.eclipse.xtend.typesystem.uml2.UML2MetaModel
    import org.eclipse.xtend.typesystem.emf.XmiReader
    import org.eclipse.xtend.typesystem.uml2.profile.ProfilingExtensions.XmiReader
    import org.eclipse.xtend.typesystem.uml2.profile.ProfileMetaModel
    import org.eclipse.emf.mwe.utils.Reader
    var targetDir = "src-gen"
    var fileEncoding = "Cp1252"
    var modelPath = "src/model"
    var projectName = "swf_statemachine_sm_model"
    var runtimeProject
    
    var list.set.property = 'order'
    var type_header_text = ""
    var annotation_source_key = ""
    var type_footer_text = ""
    var javabasic_entities = ""
    var classes_operation_implementation_strategy ="none"
    var javabasic_generateSerialVersionUID = "true"
    var use_overridden_equals_hashcode_toString= "true"
    var java_version = "5"
    var generate_additional_collection_methods = ""
    
    Workflow {
    	bean = org.eclipse.emf.mwe.utils.StandaloneSetup {
    		platformUri=".."
    		projectMapping = { projectName = "${projectName}" path = "${runtimeProject}" }
    		projectMapping = { projectName = "swf_statemachine_domain" path = "../swf_statemachine_domain" }
    		logResourceUriMap = false
    		scanClassPath = false
    	}
    	bean = org.eclipse.xtend.typesystem.uml2.Setup {
     		standardUML2Setup = true
     	}
     	bean = org.eclipse.xtend.typesystem.uml2.UML2MetaModel : umlMM { }
    
     	bean = org.eclipse.xtend.typesystem.uml2.profile.ProfileMetaModel : swf_statemachine {
     		profile = "platform:/resource/swf_statemachine_sm_model/src/main/resources/swf_statemachine.profile.uml"
     	}
     	
     	bean = org.eclipse.xtend.typesystem.uml2.profile.ProfileMetaModel : datatype {
     		profile = "platform:/resource/swf_statemachine_domain/src/main/resources/model/Datatype.profile.uml"
     	}
     	
     	bean = org.eclipse.xtend.typesystem.uml2.profile.ProfileMetaModel : java {
     		profile = "platform:/resource/swf_statemachine_domain/src/main/resources/model/Java.profile.uml"
     	}
     	
     	component = org.eclipse.emf.mwe.utils.Reader {
     		uri = "platform:/resource/swf_statemachine_sm_model/src/main/resources/model/TechdemoModel.uml"
     		modelSlot = "model"
     	}
     	
     	 	component = org.eclipse.xpand2.Generator : enumerationGenerator {
     		metaModel = umlMM
     		metaModel = swf_statemachine
     		
     		globalVarDef = { name = "list_set_property" value = "'${list.set.property}'" }
     		globalVarDef = { name = "type_header_text" value = "''" }
     		globalVarDef = { name = "annotation_source_key" value = "''" }
     		globalVarDef = { name = "type_footer_text" value = "''" }
     		globalVarDef = { name = "javabasic_entities" value = "''" }
     		globalVarDef = { name = "classes_operation_implementation_strategy" value = "'${classes_operation_implementation_strategy}'" }
     		globalVarDef = { name = "javabasic_generateSerialVersionUID" value = "'${javabasic_generateSerialVersionUID}'" }
     		globalVarDef = { name = "use_overridden_equals_hashcode_toString" value = "'${use_overridden_equals_hashcode_toString}'" }
     		globalVarDef = { name = "java_version" value = "${java_version}" }
     		
     		fileEncoding = "ISO-8859-1"
     		outlet = { path = "${runtimeProject}/src/generated/java" postprocessor = org.eclipse.xpand2.output.JavaBeautifier {} }
    
    		advice = "templates::advices::javaBasicAssociationAdvices"
    		
    		expand = "template::stateMachineEnumeration::Root FOR model"
     	}
     	
     	component = org.eclipse.xpand2.Generator : springGenerator {
     		metaModel = umlMM
     		metaModel = swf_statemachine
     		
     		fileEncoding = "ISO-8859-1"
     		outlet = { path = "${runtimeProject}/src/generated/resources"  postprocessor = org.salgar.m2t.xml.XmlBeautifier {} }
    		
    		expand = "template::stateMachineSpring::Spring FOR model"
     	}
     
     	component = org.eclipse.xpand2.Generator : javaGenerator {
     		metaModel = umlMM
     		metaModel = datatype
     		metaModel = java
     		globalVarDef = { name = "list_set_property" value = "'${list.set.property}'" }
     		globalVarDef = { name = "type_header_text" value = "''" }
     		globalVarDef = { name = "annotation_source_key" value = "''" }
     		globalVarDef = { name = "type_footer_text" value = "''" }
     		globalVarDef = { name = "javabasic_entities" value = "''" }
     		globalVarDef = { name = "classes_operation_implementation_strategy" value = "'${classes_operation_implementation_strategy}'" }
     		globalVarDef = { name = "javabasic_generateSerialVersionUID" value = "'${javabasic_generateSerialVersionUID}'" }
     		globalVarDef = { name = "use_overridden_equals_hashcode_toString" value = "'${use_overridden_equals_hashcode_toString}'" }
     		globalVarDef = { name = "java_version" value = "${java_version}" }
     		
     		fileEncoding = "ISO-8859-1"
     		outlet = { path = "${runtimeProject}/src/generated/java" postprocessor = org.eclipse.xpand2.output.JavaBeautifier {} }
    
    		advice = "templates::advices::javaBasicAssociationAdvices"
    		
    		expand = "template::Root::Root FOR model"
     	}
    }
    

    First, you have to import the Java packages you are interested in.

    Secondly, one thing that caused me lots of headaches: while MWE2 / M2T is now integrated into Eclipse, its first priority is to run under Eclipse, and the way I try to build the project, under Maven and completely free from other environmental restrictions, was causing problems. MWE2 calls this the standalone setup, and for this purpose the project needs project mappings to simulate the Eclipse Platform environment.

    This is not well documented at all, and I had to debug the MWE2 code to understand what is going on. Unfortunately, when this mapping is not done, some really unexpected things stop working; in my case, when the mapping was not made correctly, the custom UML Profiles containing Stereotypes stopped working.

    The following element defines the mapping.

             bean = org.eclipse.emf.mwe.utils.StandaloneSetup {
    		platformUri=".."
    		projectMapping = { projectName = "${projectName}" path = "${runtimeProject}" }
    		projectMapping = { projectName = "swf_statemachine_domain" path = "../swf_statemachine_domain" }
    		logResourceUriMap = false
    		scanClassPath = false
    	}
    

    For the resources lying in our current project, ‘projectMapping = { projectName = “${projectName}” path = “${runtimeProject}” }’ is enough; it gets the necessary information from the Maven build via the ‘projectName’ and ‘runtimeProject’ variables.

    ‘runtimeProject’ is configured via ‘exec-maven-plugin’

          <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>exec-maven-plugin</artifactId>
            <executions>
              ......
            </executions>
            <configuration>
              ......
              <arguments>
                <argument>file://${project.basedir}/src/main/resources/workflow.mwe2</argument>
                <argument>
                  -p
                </argument>
                <argument>
                  runtimeProject=${project.basedir}
                </argument>
              </arguments>
            </configuration>
            <dependencies>
              .........
          </plugin>
    

    The element ‘projectMapping = { projectName = “swf_statemachine_domain” path = “../swf_statemachine_domain” }’, which points to the project containing my custom UML Profile, we have to define manually.

    So why is this mapping important? Because MWE2 likes to reference things with the platform URI pattern, like the following.

     	bean = org.eclipse.xtend.typesystem.uml2.profile.ProfileMetaModel : swf_statemachine {
     		profile = "platform:/resource/swf_statemachine_sm_model/src/main/resources/swf_statemachine.profile.uml"
     	}
    

    If you didn't define the mappings, or they don't match your project layout, you will get exceptions for the mentioned resources.

    One more thing that annoyed me: in the previous version of MWE you could use a property file for the properties in your M2T templates (I am using the Fornax Javabasic Generator templates) to keep the workflow files clean, but it seems that this feature has disappeared; now I have to put the properties in the workflow and define them as follows.

           component = org.eclipse.xpand2.Generator : javaGenerator {
     		metaModel = umlMM
     		metaModel = datatype
     		metaModel = java
     		globalVarDef = { name = "list_set_property" value = "'${list.set.property}'" }
     		globalVarDef = { name = "type_header_text" value = "''" }
     		globalVarDef = { name = "annotation_source_key" value = "''" }
     		globalVarDef = { name = "type_footer_text" value = "''" }
     		globalVarDef = { name = "javabasic_entities" value = "''" }
     		globalVarDef = { name = "classes_operation_implementation_strategy" value = "'${classes_operation_implementation_strategy}'" }
     		globalVarDef = { name = "javabasic_generateSerialVersionUID" value = "'${javabasic_generateSerialVersionUID}'" }
     		globalVarDef = { name = "use_overridden_equals_hashcode_toString" value = "'${use_overridden_equals_hashcode_toString}'" }
     		globalVarDef = { name = "java_version" value = "${java_version}" }
     		
     		fileEncoding = "ISO-8859-1"
     		outlet = { path = "${runtimeProject}/src/generated/java" postprocessor = org.eclipse.xpand2.output.JavaBeautifier {} }
    
    		advice = "templates::advices::javaBasicAssociationAdvices"
    		
    		expand = "template::Root::Root FOR model"
     	}
    

    I could not find a way to use a property file; if anybody reading this blog knows one, please write it in the comment section so I can change this.

    The ‘globalVarDef = { name = “list_set_property” value = “‘${list.set.property}'” }’ element populates the collection of global variables for ‘org.eclipse.xpand2.Generator’.

    Actually we are nearly at the end; one last thing I would like to mention. I am using the Javabasic Generator templates from Fornax; the latest version still works, but it seems its development has stopped, and it also references old Maven dependencies, so I had to exclude those and provide newer versions. Maybe a follow-up project exists that does similar things now, but since the templates are working for me I didn't research it too much; if anybody knows anything about it, I will appreciate the tip.

    Well, as always, I hope somebody finds this blog useful.


    JBoss GateIn Portal, Spring Webflow and Primefaces

    Foreword

    I previously wrote a blog, A working sample with JBoss Portal – Spring Webflow – Primefaces, about how to make JBoss Portal 2.6.4 work together with Spring Web Flow and Primefaces. The versions of the libraries mentioned in that blog are now quite outdated; it is time to update them and describe what has changed.

    If you are curious why I wrote the previous blog and this one, it is best to read the motivation chapter of that blog.

    If you have done that, the first thing you will discern is that the JBoss Portal implementation no longer exists in the same form: JBoss merged the eXo GateIn Portal into its application server, and it now exists under the name JBoss GateIn Portal; as a consequence several core mechanics have also changed.

    Furthermore, JBoss Portal was using JSF 1.2 as its JSF implementation, and Portlet Spec 1.0 wasn't designed with much consideration for co-existing with JSF technology, so many things were quite clunky.

    Luckily, with Portlet Spec 2.0 and JSF 2, the people designing those specs saw the problems and made sure that the two technology stacks are now much more compatible.

    For the above reasons, I decided it is a good time to update that blog to reflect what has changed. There are still quite a lot of hits on that blog page, so I am guessing people still need solutions for the problems caused by this technology stack.

    Most things from that blog are still valid, like the project structure, Maven usage, Git and the use cases of the feasibility application, so I will not go into too much detail on these subjects; you can still look at the old blog for those.

    What mainly changed is that the project became much more lightweight now that these technologies are much more compatible.

    You can get the project from Git as described in the previous blog, but this time you have to clone the following branch from the repository.

    git clone -b PF_SWF_UPDATE git@github.com:mehmetsalgar/jbp_swf_primefaces.git

    Project Structure
    Below you see the new Project Structure.

    Picture 1: the new project structure (ps_jbp)

    If you compare with the previous blog, due to improvements in GateIn Portal, Spring Web Flow and Primefaces, several artifacts have disappeared and were removed from the Git repository. Thanks to the improvements in the technology stacks, the project structure is much cleaner now.

    That of course doesn't mean there are no problems; in the next section you will see the problems I discovered and my solutions for them.

    Challenges

    Challenge 1 – web.xml

    Naturally, with so many changes in the technology stack, we have to start with the web.xml.

    One major change is how our Comet/Push mechanism is configured, which is necessary for one of our use cases.

    	<servlet>
    		<servlet-name>Push Servlet</servlet-name>
    		<servlet-class>org.primefaces.push.PushServlet</servlet-class>
    		<load-on-startup>1</load-on-startup>
    		<!-- http://forum.primefaces.org/viewtopic.php?f=10&t=38078 -->
    		<init-param>
    			<param-name>org.atmosphere.cpr.broadcasterCacheClass</param-name>
    			<param-value>org.atmosphere.cache.UUIDBroadcasterCache</param-value>
    		</init-param>
    		<init-param>
    			<param-name>org.atmosphere.useNative</param-name>
    			<param-value>false</param-value>
    		</init-param>
    		<init-param>
    			<description>Force Atmosphere to use WebSocket. (default: true)</description>
    			<param-name>org.atmosphere.useWebSocket</param-name>
    			<param-value>false</param-value>
    		</init-param>
    		<init-param>
    			<description>Force Atmosphere to use WebSocket + Servlet 3.0 API. (default: false)</description>
    			<param-name>org.atmosphere.useWebSocketAndServlet3</param-name>
    			<param-value>false</param-value>
    		</init-param>
    		<init-param>
    			<description>Switching to Blocking I/O instead of asynchronous servlets</description>
    			<param-name>org.atmosphere.useBlocking</param-name>
    			<param-value>true</param-value>
    		</init-param>
    		<async-supported>true</async-supported>
    	</servlet>
    	<servlet-mapping>
    		<servlet-name>Push Servlet</servlet-name>
    		<url-pattern>/primepush/*</url-pattern>
    	</servlet-mapping>
    

    Snippet 1

    This is the critical part of the web.xml, and most of the changes from the previous version of the blog lie here.

    If we analyze the critical changes: Primefaces 5.2 uses Atmosphere Framework 2.x, and there are some major changes in that framework; the following are the critical settings for the Atmosphere framework to function correctly and to improve performance.

    	<param-name>org.atmosphere.cpr.broadcasterCacheClass</param-name>
    	<param-value>org.atmosphere.cache.UUIDBroadcasterCache</param-value>
    

    Snippet 2

    The following configuration is necessary to force the Atmosphere framework to work not in native IO mode (web sockets) but with blocking IO. JBoss EAP and JBoss 7 have problems with Web Sockets; solving those problems was not a primary focus for this article (I will do that in a future article), so I disabled Web Sockets for the purpose of this blog and our project will use blocking IO.

    Websockets will, on paper, be more performant and scale better, but I have used blocking IO in another production application and it has an acceptable performance level.

    	<init-param>
    	       <param-name>org.atmosphere.useNative</param-name>
    	       <param-value>false</param-value>
    	</init-param>
    	<init-param>
    		<description>Force Atmosphere to use WebSocket. (default: true)</description>
    		<param-name>org.atmosphere.useWebSocket</param-name>
    		<param-value>false</param-value>
    	</init-param>
    	<init-param>
    		<description>Force Atmosphere to use WebSocket + Servlet 3.0 API. (default: false)</description>
    		<param-name>org.atmosphere.useWebSocketAndServlet3</param-name>
    		<param-value>false</param-value>
    	</init-param>
    	<init-param>
    		<description>Switching to Blocking I/O instead of asynchronous servlets</description>
    		<param-name>org.atmosphere.useBlocking</param-name>
    		<param-value>true</param-value>
    	</init-param>
    

    Snippet 3

    We also need the GateIn servlet class mapped so it can intercept the calls to the portlet application.

    	<servlet>
    		<servlet-name>GateInServlet</servlet-name>
    		<servlet-class>org.gatein.wci.api.GateInServlet</servlet-class>
    		<load-on-startup>0</load-on-startup>
    	</servlet>
    	<servlet-mapping>
    		<servlet-name>GateInServlet</servlet-name>
    		<url-pattern>/gateinservlet</url-pattern>
    	</servlet-mapping>
    

    Snippet 4

    Challenge 2 – Spring Webflow Configuration
    With the better implementations of Portlet Spec 2.0 and JSF 2, some elements that were necessary to configure Spring Web Flow in the previous technology stack became obsolete.

    Spring Web Flow is configured in the following application context file.

       tech_demo/src/main/webapp/WEB-INF/config/web-application-config.xml
    

    Snippet 5

    Let's analyze the configuration elements one by one.

    	<!-- Spring MVC -->
    	<!-- Maps portlet modes to handlers -->
    	<bean id="portletModeHandlerMapping"
    		class="org.springframework.web.portlet.handler.PortletModeHandlerMapping">
    		<property name="portletModeMap">
    			<map>
    				<entry key="view">
    					<bean class="org.salgar.techdemo.flowhandler.TechdemoFlowHandler" />
    				</entry>
    			</map>
    		</property>
    	</bean>
    

    Snippet 6

    This configuration tells Spring Web Flow which flow definition it must read. Normally, without the Portal Engine, Spring Web Flow reads from the URL which flow is intended for the user, but because the Portlet Engine masks the real URLs from Spring Web Flow and JSF, we have to state explicitly here which flow should be executed when this portlet is called.

    If you look at the TechdemoFlowHandler Java class, you will see that it points to the ‘demo’ flow; a sketch of such a handler follows.
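    The handler itself is tiny. Here is a minimal sketch, assuming it extends Spring Web Flow's portlet AbstractFlowHandler base class (the real class is in the project):

    package org.salgar.techdemo.flowhandler;

    import org.springframework.webflow.mvc.portlet.AbstractFlowHandler;

    public class TechdemoFlowHandler extends AbstractFlowHandler {

        // The Portlet Engine masks the real URLs, so the flow id cannot be
        // derived from the request; we state it explicitly.
        @Override
        public String getFlowId() {
            return "demo";
        }
    }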

    	<!-- Enables FlowHandler -->
    	<bean class="org.springframework.webflow.mvc.portlet.FlowHandlerAdapter">
    		<property name="flowExecutor" ref="flowExecutor" />
    	</bean>
    
    	<!-- Maps logical view names selected by the url filename controller to .jsp view templates within the /WEB-INF directory -->	
    	<bean id="internalJspViewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    		<property name="prefix" value="/WEB-INF/" />
    		<property name="suffix" value=".jsp" />
    	</bean>
    
    	<!-- Maps logical view names to Facelet templates (e.g. 'search' to '/WEB-INF/search.xhtml') -->
    	<bean id="faceletsViewResolver"
    		class="org.springframework.web.servlet.view.UrlBasedViewResolver">
    		<property name="viewClass" value="org.springframework.faces.mvc.JsfView" />
    		<property name="prefix" value="/WEB-INF/" />
    		<property name="suffix" value=".xhtml" />
    	</bean>
    	
    	<!-- WebFlow -->
    	<faces:resources />
    	
    	<!-- Webflow Scopes -->
    	<bean id="scopeRegistrar" class="org.springframework.webflow.scope.ScopeRegistrar" />
    	
    	<!-- Executes flows: the central entry point into the Spring Web Flow system -->
    	<flow:flow-executor id="flowExecutor">
    		<flow:flow-execution-listeners>
    			<flow:listener ref="facesContextListener" />
    		</flow:flow-execution-listeners>
    	</flow:flow-executor>
    	
    	<!-- Installs a listener that creates and releases the FacesContext for 
    		each request. -->
    	<bean id="facesContextListener"
    		class="org.springframework.faces.webflow.FlowFacesContextLifecycleListener" />
    

    Snippet 7

    The above elements are just standard Spring Web Flow configuration, which is a nice change from the previous version of the project. With the move to Portlet Spec 2.0 and JSF 2, most of the special configuration is not necessary anymore.

    	<!-- The registry of executable flow definitions -->
    	<flow:flow-registry id="flowRegistry"
    		flow-builder-services="facesFlowBuilderServices">
    		<flow:flow-location path="/WEB-INF/flows/techdemo/demo.xml" />
    	</flow:flow-registry>
    

    Snippet 8

    One thing specific to our project is, of course, the registration of our Web Flow with Spring, which happens with the above snippet. As you can see, we registered the demo.xml flow definition file.

    	<!-- Configures the Spring Web Flow JSF integration -->
    	<faces:flow-builder-services id="facesFlowBuilderServices" />
    

    Snippet 9

    and one final element integrates Spring Web Flow with JSF.

    Challenge 3 – faces-config.xml
    Naturally, if we want to integrate the Portal Engine and JSF, we also have to do something in faces-config.xml.

    	<!-- Install Replacement Factory Objects -->
    	<application>
    		<!-- Enables Facelets -->
    		<view-handler>org.springframework.faces.webflow.context.portlet.PortletViewHandler</view-handler>
    	</application>
    	<factory>
    		<faces-context-factory>org.primefaces.context.PrimeFacesContextFactory</faces-context-factory>
    	</factory>
    

    Snippet 10

    The first thing that catches the eye is an additional configuration for the Spring Web Flow and JSF integration. PortletViewHandler tells Spring Web Flow how to handle the view correlation between the Portal Engine and JSF.

    The second entry is necessary for Primefaces to configure itself in the Portal Environment.

    	<factory>
    		<faces-context-factory>org.primefaces.context.PrimeFacesContextFactory</faces-context-factory>
    	</factory>
    

    Snippet 11

    Without this entry you will see a NullPointerException in the server.log file. Because of the complex structure of the Technology Stack, the JSF Framework is not able to determine the order of the libraries it should initialize, and it initializes the default JSF Context Factory. Unfortunately this does not work, and we have to explicitly tell the JSF Engine to initialize the ‘PrimeFacesContextFactory’ first.

    Challenge 4 – JBoss GateIn Portal – Spring Web Flow – Primefaces Integration
    If you look at the screenshot with the project structure, you will see a project called ‘spring-faces-patch’.

    This project is quite crucial for JBoss GateIn Portal – Spring Web Flow – Primefaces integration and here is why.

    As I explained in the previous blog, one of the reasons I built this feasibility study project is that I find Primefaces a quite powerful JSF library and I wanted to figure out whether Primefaces would work together with the JBoss Portal Engine or not. Primefaces is an Ajax-rich framework, so when we try to integrate it with a Portal Engine we should respect this fact.

    Since there is no joint effort from JBoss GateIn or Primefaces to make them compatible, there are some friction points.

    Portlet Spec 2.0 – JSR 286 defines that Ajax interactions in a Portlet environment should act like resource requests. Out of the box, JBoss GateIn Portal and Primefaces do not respect this fact. Specifically, JBoss GateIn Portal accepts Ajax requests as Portlet resource requests, like CSS or Javascript files. So we have to configure Primefaces so that it takes this into account when it builds its Ajax requests.

    For this purpose I had to prepare a special ‘JbpPortletFacesContextFactory’ and configure it in the faces-config.xml of the ‘spring-faces-patch’ project. JSF 2 will pick up this file during the start of the application and make the necessary changes to the Ajax Request URLs.
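    The registration in the patch project's faces-config.xml presumably mirrors the factory entries shown above; the package name in this sketch is an assumption:

    	<factory>
    		<!-- package name assumed; the class ships with the spring-faces-patch project -->
    		<faces-context-factory>org.salgar.springfaces.JbpPortletFacesContextFactory</faces-context-factory>
    	</factory>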

    This happens in the class ‘JbpPortletExternalContextImpl’, in the method ‘encodePartialActionURL’.

    Let's look at the implementation of this method.

    	public String encodePartialActionURL(String url) {
    		Assert.notNull(url);
    		PortalActionURL portalActionURL = null;
    		try {
    			// parse the URL that Primefaces built for its Ajax call
    			portalActionURL = new PortalActionURL(url);
    		} catch (MalformedURLException e) {
    			throw new FacesException(e);
    		}
    		// rebuild it as a Portlet resource URL so the Portal Engine treats it as an Ajax call
    		MimeResponse mimeResponse = (MimeResponse) response;
    		ResourceURL resourceUrl = mimeResponse.createResourceURL();
    		resourceUrl.setParameters(portalActionURL.getParameters());
    		return response.encodeURL(resourceUrl.toString());
    	}
    

    Snippet 12

    Practically, we intercept the URLs Primefaces tries to create and transform them into Portal Resource URLs.

            ResourceURL resourceUrl = mimeResponse.createResourceURL();
    

    Snippet 13

    This ensures that the Portal Engine interprets these calls as Ajax calls and informs Primefaces so it can create partial responses for them.

    This project must be stated as a dependency in our ‘tech-demo’ Maven pom.xml file so that during the build process it is copied to the ‘WEB-INF/lib’ directory, where JSF 2 can automatically detect the faces-config.xml file and use our implementation.
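    The dependency entry is a plain Maven declaration; the groupId and version in this sketch are assumptions, so adjust them to the actual project coordinates:

    	<dependency>
    		<groupId>org.salgar</groupId>
    		<artifactId>spring-faces-patch</artifactId>
    		<version>2.0-SNAPSHOT</version>
    	</dependency>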

    Challenge 5 – ACTION – RENDER PHASE of Portlets
    One more thing that always caused problems with Portal Engine and JSF integration was the incompatibility between the Portlet Lifecycle and the JSF Lifecycle. The problem mainly occurs because of the Action and Render phases of the Portlet Lifecycle. The Portlet Specification states that user events must be processed in the Action phase of the Portlet Lifecycle and that no action parameters should be transferred to the Render phase.

    Unfortunately, JSF needs some of those parameters to run its own lifecycle. Previously there was no established solution for this problem and we had to implement custom logic for it. Now the JBoss GateIn developers have seen this problem and implemented a standard solution.

    Every Portlet application that we deploy in JBoss GateIn Portal needs a portlet.xml file, and in this file we now have to define the following parameter.

    	<container-runtime-option>
    		<name>javax.portlet.actionScopedRequestAttributes</name>
    		<value>true</value>
    	</container-runtime-option>
    

    Snippet 14

    This will guarantee that any parameter that is defined in the Action phase will be transferred to the Render phase.

    Challenge 6 – Javascript Libraries in GateIn (jquery.js, jquery-pluggins.js, primefaces.js, push.js)
    Now we arrive at the problem that caused me the biggest headache.

    As I previously mentioned, Primefaces is a really Ajax/Javascript-heavy library. For quite a long time I could not make jquery.js, jquery-plugins.js, primefaces.js and push.js (the Primefaces Atmosphere implementation) work correctly in JBoss GateIn Portal.

    When I deployed the project as a plain J2EE web application on the JBoss server, everything worked perfectly, but when I deployed it in the Portal Engine the scripts were not initializing correctly.

    Part of the problem was that in a normal JSF application, Javascript and CSS files are delivered in the head part of the HTML document, but when a Portlet is deployed to a Portal Engine it has no access to the head part of the HTML document through the standard mechanisms.

    Unfortunately, the jQuery, Primefaces and Atmosphere Javascript files all use self-invoking functions to initialize themselves, and this mechanism does not function correctly if they are not loaded via script elements inside the head area of the HTML document.

    Most probably the GateIn development team experienced these sorts of problems before and developed a standard mechanism to place portlet-specific Javascript files into the header part of the HTML page.

    Before we go into further details of this mechanism: when I tried to implement this solution, I encountered several other problems. I want to mention those first and then explain the final solution.

    To optimize Javascript loading (if several portlets need the same Javascript files, only one copy is loaded) and to speed up page rendering, GateIn Portal integrates some special Javascript libraries into every portal page.

    This is mainly the AMD library, which helps load several libraries under different aliases to prevent name collisions and delays Javascript file loading until the end of the page render to increase performance, but unfortunately it causes some problems with the above-mentioned Javascript libraries.

    The second mechanism GateIn Portal offers is to load the scripts directly in the header part of the HTML page. This one seems the ideal solution for our use case, but there is one catch. All the modern Javascript libraries I mentioned here are AMD compatible: if they detect the existence of the AMD library in the environment, they try to initialize themselves with the help of AMD, and unfortunately this does not work in GateIn Portal.

    So how does GateIn Portal organize the loading of the Javascript files? There is a special configuration file called ‘gatein-resources.xml’ which should lie in the WEB-INF directory.

    My first try was to apply the ideal solution and organize the Javascript files in AMD fashion with the following configuration.

     <gatein-resources
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.gatein.org/xml/ns/gatein_resources_1_3 http://www.gatein.org/xml/ns/gatein_resources_1_3"
        xmlns="http://www.gatein.org/xml/ns/gatein_resources_1_3">
      <module>
        <name>push</name>
        <script>
          <path>/javascripts/push.js</path>
        </script>
        <depends>
          <module>primefaces</module>
        </depends>
      </module>
      <module>
        <name>primefaces</name>
        <script>
          <path>/javascripts/primefaces.js</path>
        </script>
        <depends>
          <module>jquery_plugins</module>
        </depends>
      </module>
      <module>
        <name>jquery_plugins</name>
        <script>
          <path>/javascripts/jquery-plugins.js</path>
        </script>
        <depends>
          <module>jquery_1_11</module>
        </depends>
      </module>
      <module>
        <name>jquery_1_11</name>
        <script>
          <path>/javascripts/jquery_1_11.js</path>
        </script>
      </module>
      <portlet>
        <name>TechDemoPortlet</name>
        <module>
          <depends>
            <module>push</module>
          </depends>
        </module>
      </portlet>
    </gatein-resources>
    

    Snippet 15

    As you may see, this defines the Javascript libraries as modules, which instructs the GateIn Container to initialize them as AMD Modules.

    This caused some complications: Primefaces relies heavily on the Widget principle. The JSF components, when they are transformed to HTML elements during the render phase, are implemented as widgets. That means that during the creation of the HTML page, the Javascript must already be up and running.

    Unfortunately, for optimization purposes GateIn delays the initialization of Javascript files until the end of the HTML tree creation, so it was too late for the widget components to create themselves.

    So we had to go for the other alternative, which is to place these libraries as raw Javascript in the HTML page head with the following configuration.

    <gatein-resources
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.gatein.org/xml/ns/gatein_resources_1_3 http://www.gatein.org/xml/ns/gatein_resources_1_3"
        xmlns="http://www.gatein.org/xml/ns/gatein_resources_1_3">
    
      <scripts>
        <name>jQuery</name>
        <script>
          <path>/javascripts/jquery_1_11.js</path>
        </script>
      </scripts>
      <scripts>
        <name>jquery-plugins</name>
        <script>
          <path>/javascripts/jquery-plugins.js</path>
        </script>
        <depends>
          <scripts>jQuery</scripts>
        </depends>
      </scripts>
      <scripts>
        <name>primefaces</name>
        <script>
          <path>/javascripts/primefaces.js</path>
        </script>
        <depends>
          <scripts>jquery-plugins</scripts>
        </depends>
      </scripts>
      <scripts>
        <name>push</name>
        <script>
          <path>/javascripts/push.js</path>
        </script>
        <depends>
          <scripts>primefaces</scripts>
        </depends>
      </scripts>
      <portlet>
        <name>TechDemoPortlet</name>
        <scripts>
          <depends>
            <scripts>push</scripts>
          </depends>
        </scripts>
      </portlet>
    </gatein-resources>
    

    Snippet 16

    In this version, instead of defining the Javascript libraries as Modules, we define them as simple script elements.

    After some testing, I encountered some problems with this option as well. These libraries are built to detect the existence of the AMD framework and change their initialization procedure accordingly. Unfortunately, for some reason this was clashing with the GateIn mechanisms.

    So I had to modify the Javascript libraries from this

    (function( factory ) {
    	if ( typeof define === "function" && define.amd ) {
    
    		// AMD. Register as an anonymous module.
    		define([ "jquery" ], factory );
    	} else {
    
    		// Browser globals
    		factory( jQuery );
    	}
    }(function( $ ) {
    

    Snippet 17

    to this

    (function( factory ) {
        // Browser globals
        factory( jQuery );
    
    }(function( $ ) {
    

    Snippet 18

    which bypasses the AMD initialization mechanism so that everything initializes correctly and the Primefaces Widgets can access it.

    Of course, to be able to load my modified version of the scripts, and since there is no option in Primefaces to tell it not to load and execute its own Javascript, I had to manipulate the Primefaces artifact. This happens in the ‘../primefaces-patch/patch-primefaces-assembly’ project.

    This Maven project explodes the Primefaces artifact, removes jquery.js, jquery-plugins.js, primefaces.js and push.js, and packages the artifact again without them.

    The ‘tech-demo’ project's Maven POM file must reference ‘patch-primefaces-assembly’ as a dependency instead of the original Primefaces artifact.
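    One way to realize such a repackaging (a sketch, not necessarily how ‘patch-primefaces-assembly’ does it) is the maven-shade-plugin with resource filters; the resource paths inside the Primefaces jar are assumptions here:

    	<plugin>
    		<groupId>org.apache.maven.plugins</groupId>
    		<artifactId>maven-shade-plugin</artifactId>
    		<executions>
    			<execution>
    				<phase>package</phase>
    				<goals>
    					<goal>shade</goal>
    				</goals>
    				<configuration>
    					<filters>
    						<filter>
    							<artifact>org.primefaces:primefaces</artifact>
    							<!-- paths of the scripts to strip; assumed, check the jar layout -->
    							<excludes>
    								<exclude>META-INF/resources/primefaces/jquery/jquery.js</exclude>
    								<exclude>META-INF/resources/primefaces/primefaces.js</exclude>
    								<exclude>META-INF/resources/primefaces/push/push.js</exclude>
    							</excludes>
    						</filter>
    					</filters>
    				</configuration>
    			</execution>
    		</executions>
    	</plugin>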

    Naturally, we also have to deliver the modified versions of the Javascript files, which we did by copying them to the directory ‘../tech_demo/src/main/webapp/javascripts’.

    As a side note, in the gatein-resources.xml the following element

      <portlet>
        <name>TechDemoPortlet</name>
        <scripts>
          <depends>
            <scripts>push</scripts>
          </depends>
        </scripts>
      </portlet>
    

    Snippet 19

    tells the portal container that every time this portlet is initialized, it should place these scripts into the head part of the HTML document.

    One more small trick that can be useful for debugging scripts in GateIn Portal: normally the portal engine obfuscates and compresses the Javascript files, which makes them very hard to debug. If you use the flag ‘-Dexo.product.developing=true’ during the start of the portal, the Javascript files will not be obfuscated and compressed.
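    For example, with the standalone start script used in the Installation section below:

    	./standalone.sh -Dexo.product.developing=true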

    Installation

    This feasibility application was developed with JBoss GateIn Portal 6.1.0.Final, which you can download from the following link.

    JBoss GateIn Portal 6.1.0.Final

    After you unpack the downloaded package, you will see the following directory…

    jboss-jpp-6.1.0/jboss-jpp-6.1/standalone/deployments

    You have to place the following artifact in this directory…

    tech_demo-2.0-SNAPSHOT.war

    and when you start the portal with the command…

    ./standalone.sh

    (or standalone.bat in Windows machines)

    in the directory…

    jboss-jpp-6.1.0/jboss-jpp-6.1/bin

    this will start the GateIn Portal.

    If you see the following output, that means JBoss GateIn Portal started successfully….

    15:52:51,293 INFO [org.jboss.as] (Controller Boot Thread) JBAS015961: Http management interface listening on http://127.0.0.1:9990/management
    15:52:51,293 INFO [org.jboss.as] (Controller Boot Thread) JBAS015951: Admin console listening on http://127.0.0.1:9990
    15:52:51,294 INFO [org.jboss.as] (Controller Boot Thread) JBAS015874: JBoss Portal Platform 6.1.0 (AS 7.2.1.Final-redhat-10) started in 18856ms – Started 1740 of 1888 services (142 services are passive or on-demand)

    Now you only have to attach your portlet application to the specific window in JBoss GateIn Portal; for that you can use the following link

    Portlet Deployment

    and you can access the feasibility application with the following URL.

    http://localhost:8080/portal/classic/

    Conclusion:
    If you read the previous blog, you will see that it was a much bigger fight to get JBoss Portal, Spring Web Flow and Primefaces to work together in the old technology stack; with the new version of the technology stack it became much easier. There are still some hurdles on the way, but I would say a professional, production-quality IT project can now be developed easily on this technology stack.

    I hope this feasibility study will help somebody.


    Cost Saving with OSGI in Business-to-Business (B2B) Model

    Foreword:

    One of the projects I was involved in provided a Business-To-Business Service Model.

    The main idea is to provide the services that are normally only available to the internal processes of the company to its external customers.

    It basically provides an external interface, built as a web service, to access the company's internal ESB.

    This is a very standard scenario that is discussed in many articles; it is a well-known business practice. What I want to point out here is how this model suffers under some basic business requirements and how these requirements can be fulfilled in a more cost-effective way.

    If you observe several B2B systems, you will see that a common pattern emerges. You are providing your services to an external company that has a different development cycle than yours. A direct consequence of this is that if you change your services because of internal requirements, external partners may not have the budget or the necessity to implement those changes (for example, you change an existing process so that it now needs additional address information, but the external partner never needs that change because it doesn't use that field).

    If only one version of your services exists, your external partners are forced to implement those changes, and this causes costs. Because of this, several B2B projects provide multiple versions of their services for backward compatibility, together with a service level agreement for the time to live of those services; the SLA will state, for example, that there will be no more than 3 backward-compatible versions.

    When the conditions are met, the older versions are phased out.

    Now you might have read up to this point and ask: what is the big deal about it? Well, the project I was in was a big SOA project with several Partner Systems, each with its own internal release cycle, so to deliver the current versions of the B2B interfaces we had to make up to 10 deliveries to the production systems per year.

    The number of deliveries was naturally a problem, but the biggest cost was actually the testing of these interfaces. If we have 20 partner systems and 10 delivery dates, all 20 systems have to be tested for each of these 10 deliveries, and this was a huge cost, especially considering that out of those 20 systems maybe only 2 have real changes in any one delivery.

    Substantial cost savings would be possible if we could test several delivery packages at one date and deliver each on its dedicated delivery date.

    Unfortunately, with conventional middleware platforms this is not that easy. We could only deliver one version of the application to production (because middleware software and application servers are only able to hold one version of the binaries), at least until OSGI technology appeared.

    OSGI technology gives us the possibility of deploying several versions of an application simultaneously; they can all be active at once, or they can be activated/deactivated on demand.

    So what does OSGI bring us to solve our problem?

    OSGI is a Service Platform implementing a dynamic component model which normally does not exist in the Java Virtual Machine. In OSGI we can define our Java Libraries as Bundles and deploy them. Bundles are heavily versioned via descriptors; two bundles might contain the same Java Classes, but they are interpreted by the OSGI runtime as different Java Libraries.

    So this technology presented us with an opportunity.

    As you remember, our problem is the high cost of having 10 different delivery dates per year, so we have to carry the cost of building, testing and deploying our application 10 times per year.

    What if we know that versions 1 and 2, 3 and 4, 5 and 6, 7 and 8, and 9 and 10 of our application change so little between them that we can actually test and deploy them together, but activate them on different dates? OSGI gives us exactly the possibility to do this.

    I can tell you that testing and deploying an enterprise-quality application is an expensive undertaking; cutting down the number of test/deployment cycles is a considerable cost saving.

    Problem and Solutions:

    So what are the problems we will encounter, and how can we solve them?

    First of all, most B2B Services are presented as Web Services, and Web Services are strongly controlled by their WSDL. That means that if we change the version of a B2B Service, most probably we will also change its WSDL, which inevitably causes configuration changes at the Service Consumers. Most Service Consumers don't like this: many of them have quarterly releases, and if our application has 10 releases per year, why should they increase the number of their releases, and their costs, just to stay compatible with our application?

    Secondly, most B2B Services keep 2 or more versions up to provide backward compatibility, so a new release from us should not override our backward-compatible services.

    So we have 2 major requirements: first, we need an OSGI platform, and second, we should be able to present our B2B Service to our customers in a version-independent way.

    There is one platform that can fulfill these two demands: Fuse ESB.

    Fuse ESB is the commercial version of the Apache Servicemix Open Source project. You can use it for free and buy commercial support when you need it. So you might ask why we need an ESB at all.

    What we are going to do is define an entry point in a Fuse workflow and configure the workflow to redirect calls to our B2B Service to the specific versions.

    Below is a picture showing how most B2B Services are deployed today.

    1

    The Service Consumer has 3 URLs to the versions of the Service. From the Service Consumer's point of view, if a new service version is introduced or an old one removed, the Service Consumer must reconfigure its systems, even if that is not in its development plans.

    In this second picture we see that the Service Consumer speaks with a central point of communication; if a new service version is introduced or removed, it is totally transparent to the Service Consumer.

    2

    So how are we going to achieve this? As I hinted above, we are going to use Fuse/Servicemix as an ESB which will intercept calls to our B2B Services and redirect them to the desired version based on information included in the calls. Since Fuse/Servicemix is based on OSGI, we can turn versions of our Services on/off while staying completely transparent to the Service Consumer.

    Feasibility Study:

    I prepared a feasibility study project implementing the above-mentioned concepts.

    The following 2 screenshots (from Karaf, the management console for Fuse) show that OSGI is the right choice here.

    3

    4

    When you look at the previous pictures, you will see some interesting details. When you check the feature list, you see that ‘osgi-b2b-router-feature’ has a field reading ‘installed’, which can be changed at runtime to ‘uninstalled’, removing the feature.

    Also, if you look at the OSGI Bundles list, you will see an Active field next to ‘OSGI B2B Router’, ‘OSGI B2B Mediation’ and ‘OSGI B2B Producer’; you can turn bundles on/off at runtime (with the osgi:start/stop commands).

    This is the principal part of our idea/theory: we can install different routing versions or different versions of our B2B services and turn them on/off as we please, for example with commands like the ones sketched below.
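    A Karaf session doing this could look like the following (the bundle ID is illustrative; use whatever osgi:list reports on your installation):

    	features:install osgi-b2b-router-feature
    	osgi:stop 215
    	osgi:start 215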

    To demonstrate that what we are talking about here actually functions, I built a SOAP UI project which simulates our B2B Service and its consumer.

    5

    The above screenshot displays the MockService that we created to simulate the B2B Service.

    6

    And this one simulates the customer; as you can see, we called Version 1 of our service and received the answer.

    Now let's call the same URL with Version 2 of our service; Version 2 is simulated on port 9090.

    7

    You can see that the mediation layer automatically detects that this call targets the new version of our service and correctly directs the request to Version 2.

    We modified the service so that it now delivers a date field in the response, in case you want to be sure that it is not exactly the same answer.

    Now you may say: what is so special about this, every Enterprise Service Bus can do it. Then consider the following screenshot.

    8

    The screenshot is from ‘Karaf’, the OSGI management console of Fuse; you see there 2 versions of our services, OSGI B2B Producer 1.0.0.SNAPSHOT and 2.0.0.SNAPSHOT. The Active status of those bundles indicates that both are deployed and active at the same time. We can turn those bundles on/off at runtime, or even install new versions of them, without the end user noticing anything.

    That is the power of OSGi and the solution we propose here.

    Project Structure:

    This is how the project structure looks.

    9

    A short summary of the project functionalities follows.

    osgi-b2b-router – the central entry point for our B2B Services; this component receives the initial request from the Service Consumer and redirects it to the mediation so it can be forwarded to the correct version.

    osgi-b2b-mediation – this project inspects the incoming message and decides which version of the B2B service we provide should serve the request.

    osgi-b2b-producer – calls the real implementation behind our service and delivers the response (the name ‘producer’ is a concept from Servicemix and its Web Service implementation stack, which I will go into in detail later).

    osgi-b2b-feature – a construct of the Fuse framework, actually a deployment unit, which I will also explain at a later stage.

    Detailed Analysis of Projects:

    osgi-b2b-router:

    This is how we configured the entry point of our project.

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
    	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    	xmlns:osgiRouter="http://osgi.salgar.org/osgirouter"
    	xmlns:orderProvider_v1="http://v1.salgar.org/B2BService"
    	xmlns:http="http://servicemix.apache.org/http/1.0"
    	xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
    						http://servicemix.apache.org/http/1.0 http://servicemix.apache.org/http/1.0/servicemix-http.xsd">
    
    	<http:consumer service="http:FindOrders" endpoint="httpEndpoint" locationURI="http://localhost:8088/MockB2BService"
    		defaultMep="http://www.w3.org/2004/08/wsdl/in-out" targetService="osgiRouter:mediation" />
    
    	<bean class="org.apache.servicemix.common.osgi.EndpointExporter" />
    </beans>
    

    You can see that we configured our central entry point with an HTTP component of Servicemix (http:consumer).

    This component, when configured, receives the messages arriving at the URI and port defined in the locationURI property. In this case we are listening on localhost port 8088.

    It then routes the message to the osgiRouter:mediation target endpoint, which decides which version of the service has to be called.

    osgi-b2b-mediation:
    This project receives the message from the router project and inspects it. It specifically analyzes the namespaces of the web service call, which indicate the version intended by the Service Consumer.

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
    	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns:eip="http://servicemix.apache.org/eip/1.0"
    	xmlns:osgiRouter="http://osgi.salgar.org/osgirouter"
    	xmlns:orderProvider_v1="http://v1.salgar.org/B2BService"
    	xmlns:orderProvider_v2="http://v2.salgar.org/B2BService"
    	xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
    						http://servicemix.apache.org/eip/1.0 http://servicemix.apache.org/eip/1.0/servicemix-eip.xsd">
    
    	<eip:content-based-router endpoint="osgiRouterMediation"
    		service="osgiRouter:mediation">
    		<eip:rules>
    			<eip:routing-rule>
    				<eip:predicate>
    					<eip:xpath-predicate id="wsdl_version_predicate_v1" 				xpath="(//namespace::*[.='http://v1.salgar.org/B2BService']) = 'http://v1.salgar.org/B2BService'" />
    				</eip:predicate>
    				<eip:target>
    					<eip:exchange-target service="orderProvider_v1:findOrders" />
    				</eip:target>
    			</eip:routing-rule>
    			<eip:routing-rule>
    				<eip:predicate>
    					<eip:xpath-predicate id="wsdl_version_predicate_v2"				xpath="(//namespace::*[.='http://v3.salgar.org/B2BService']) = 'http://v3.salgar.org/B2BService'" />
    				</eip:predicate>
    				<eip:target>
    					<eip:exchange-target service="orderProvider_v3:findOrders" />
    				</eip:target>
    			</eip:routing-rule>
    		</eip:rules>
    	</eip:content-based-router>
    
    	<bean class="org.apache.servicemix.common.osgi.EndpointExporter" />
    </beans>
    

    At this endpoint, an eip:content-based-router component evaluates the rules defined for it; an eip:predicate component decides, based on the XPath statement analyzing the namespaces of the calling Web Service, which version of our Service is called – the ‘orderProvider_v1:findOrders’ version or the ‘orderProvider_v2:findOrders’ version.

    osgi-b2b-producer:
    Producer is a notion used in the CXF Web Service framework. In Servicemix, this is the component responsible for calling the actual version of the service we provide.

    Its configuration looks like the following.

    <beans xmlns="http://www.springframework.org/schema/beans"
    	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    	xmlns:cxfbc="http://servicemix.apache.org/cxfbc/1.0"
    	xmlns:orderProvider_v1="http://v1.salgar.org/B2BService"
    	xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
    						http://servicemix.apache.org/cxfbc/1.0 http://servicemix.apache.org/cxfbc/1.0/servicemix-cxf-bc.xsd">
    
    	<cxfbc:provider endpoint="findOrdersProviderEndpoint"
    		useJBIWrapper="false" wsdl="classpath:v1/B2BService.wsdl" service="orderProvider_v1:findOrders"
    		locationURI="http://localhost:8989/MockB2BService" />	
    
    	<bean class="org.apache.servicemix.common.osgi.EndpointExporter" />
    </beans>
    

    We define the WSDL of the Web Service this producer is configured for (classpath:v1/B2BService.wsdl) and the URL at which the Web Service responds (http://localhost:8989/MockB2BService).

    In a real-life project, the 2nd version would be created in another Git branch and deployed from there, but a branch would complicate things too much for you if you try to download and install this feasibility project. So I just created another project under the name osgi-b2b-producer-v2 and placed it under Git so it can simulate the 2nd version of our project.

    osgi-b2b-feature:
    This is a Servicemix-proprietary artefact: a deployment feature. In Servicemix we can deploy independent bundles, or we can deploy several bundles configured to form a feature.

    For this we need a special file (osgi-router\osgi-b2b-feature\src\main\resources\osgi-b2b-router-features.xml) configuring the feature; it looks like the following.

    <features>
    	<feature name="osgi-b2b-router-feature" version="${osgi-router.version}">
    		<feature version="${servicemix.version}">servicemix-cxf-bc</feature>
    		<feature version="${servicemix.version}">servicemix-http</feature>
    		<feature version="${servicemix.version}">servicemix-eip</feature>
    		<bundle>mvn:org.salgar/osgi-b2b-router/${pom.version}</bundle>
    		<bundle>mvn:org.salgar/osgi-b2b-mediation/${pom.version}</bundle>
    		<bundle>mvn:org.salgar/osgi-b2b-producer/${pom.version}</bundle>
    	</feature>
    </features>
    

    Please pay special attention to the notation used for the bundles: mvn:org.salgar/osgi-b2b-router/${pom.version}. Servicemix is closely integrated with Maven, and with this notation it can find artefacts in Maven repositories and install them.

    The feature artefact is also really useful for notifying Fuse whether our Bundles depend on other artefacts, as you can see for example with ‘servicemix-http’. When Servicemix notices this, it will locate and activate the Servicemix HTTP Component for us.

    There is one additional thing we have to do to help Maven correctly build the feature artefact: the following elements have to be defined in the Maven pom.

    <plugin>
    	<groupId>org.codehaus.mojo</groupId>
    	<artifactId>build-helper-maven-plugin</artifactId>
    	<executions>
    		<execution>
    			<id>attach-artifacts</id>
    			<phase>package</phase>
    			<goals>
    				<goal>attach-artifact</goal>
    			</goals>
    			<configuration>
    				<artifacts>
    					<artifact>
    						<file>target/classes/osgi-b2b-router-features.xml</file>
    						<type>xml</type>
    						<classifier>features</classifier>
    					</artifact>
    				</artifacts>
    			</configuration>
    		</execution>
    	</executions>
    </plugin>
    

    This configuration creates a special artefact, ‘osgi-b2b-router-features.xml’, which we will need when we want to install the feature into Servicemix.

    Deployment:

    Servicemix is closely integrated with Maven; it can deploy features and bundles from Maven Repositories with the following command.

    features:addurl mvn:org.salgar/osgi-b2b-feature/1.0-SNAPSHOT/xml/features

    Below is a screenshot of Karaf, the management console for the Servicemix OSGI platform. With the features:list command we see all available features. Ours is also visible there; osgi-b2b-router in this case is already installed, otherwise we would have to use the features:install command to install it.

    10

    Below you can see the bundles installed from it; the moment we install the feature, all the bundles it includes are also installed.

    11

    Fuse/Servicemix Configuration:

    We have to do some homework to configure Fuse so it can work with Maven Repositories seamlessly.

    We have to configure the file org.ops4j.pax.url.mvn.cfg, which lies in the ‘fuse\etc’ directory.

    You have to modify the following property.

    org.ops4j.pax.url.mvn.localRepository

    You should set this property to your local Maven repository.
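    For example, pointing it at the shallow repository path recommended later in this article:

    	org.ops4j.pax.url.mvn.localRepository=c:/repo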

    One more point I have to mention here: there are lots of pre-installed features/bundles in Fuse, but to keep resource usage low they are not activated until they are needed. So when we install our feature, it will try to access those features, and because we configured the necessary dependencies in our feature artefact, Fuse will start them up without any problem.

    <feature version="${servicemix.version}">servicemix-cxf-bc</feature>
    <feature version="${servicemix.version}">servicemix-http</feature>
    <feature version="${servicemix.version}">servicemix-eip</feature>
    

    If you need any additional features, you should either start them manually or also place them in the feature artefact.

    Conclusion:

    I hope I was able to show you that implementing an interesting technology and an Enterprise Integration Pattern can really help to save a lot of cost. With the above-mentioned methods you can test several versions of your application at one moment in time, deploy them to production, and turn them on/off at runtime with zero downtime, or even make bug fixes with zero downtime.

    Appendix:

    Preparing Project Environment:

    Getting Fuse:
    You can download the Fuse from the following URL:

    http://www.jboss.org/download-manager/file/jboss-fuse-6.1.0.GA-full_zip.zip

    After you unzip the file, make the Maven configuration change that I mentioned before.

    You can start Fuse with the following command: ‘fuse\bin\fuse.bat’.

    Getting Project:
    The project is hosted on GitHub; you can download it with any Git client.

    I personally use Cygwin (a Windows Unix emulation) and do it with command-line instructions. If you are a Windows user and more comfortable with user interfaces, you can use the following tool (http://code.google.com/p/tortoisegit/).

    If you use Cygwin or any Git shell, you can get the project with the following command:

    git clone git@github.com:mehmetsalgar/osgi_router.git

    Maven Setup:
    The build system we used for the project is Maven, which you can download from the following URL (http://maven.apache.org/download.cgi).

    If you are not experienced with Maven, one point to be careful about: if you are working behind a company firewall, you have to configure the following lines in settings.xml in the conf directory of your Maven installation.

    <proxies>
    	<!-- proxy | Specification for one proxy, to be used in connecting to the network. -->
    	<proxy>
    		<id>optional</id>
    		<active>true</active>
    		<protocol>http</protocol>
    		<username>xxxxx</username>
    		<password>xxxxxx</password>
    		<host>some.proxy.com</host>
    		<port>3128</port>
    		<nonProxyHosts>127.0.0.1</nonProxyHosts>
    	</proxy>
    </proxies>

    If you are working with Windows, it might be wise to change your Maven repository location from My Documents to a simpler path, because some classpaths become too deep and run into the 255-character path limit of Windows. So it is better to select a shallow directory for your repository, for example c:/repo.

    <localRepository>c:/repo</localRepository>
    

    Now if you run the Maven build from the directory where your root pom lies (for a standard Maven project this is typically ‘mvn clean install’), you should see the following output, which means you successfully built the project.

    [INFO] ------------------------------------------------------------------------
    [INFO] Reactor Summary:
    [INFO]
    [INFO] OSGI Router ....................................... SUCCESS [0.250s]
    [INFO] OSGI B2B Router ................................... SUCCESS [2.391s]
    [INFO] OSGI B2B Mediation ................................ SUCCESS [1.859s]
    [INFO] OSGI B2B Producer ................................. SUCCESS [2.000s]
    [INFO] OSGI B2B Producer ................................. SUCCESS [1.203s]
    [INFO] OSGI B2B Feature .................................. SUCCESS [1.641s]
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 10.578s
    [INFO] Finished at: Tue Aug 26 19:17 CEST 2014
    [INFO] Final Memory: 26M/495M
    [INFO] ------------------------------------------------------------------------
    

    SOAP UI:

    SOAP UI is a really useful tool to test web service interfaces.

    When you have the WSDLs of the web services you want to test, you can start a SOAP UI project and call those web services with sample requests.

    You can download the SOAP UI from the following URL (http://dl.eviware.com/5.1.2/SoapUI-Pro-5.1.2-win32-standalone-bin.zip?_ga=1.124456330.1225656795.1409838533)

    I also created a SOAP UI Project to test our application; if you downloaded the project from GitHub, you can find it at ‘osgi-router/osgi-router-soapui-project.xml’.

    Servicemix

    ServiceMix is an enterprise-class open-source distributed enterprise service bus (ESB) based on the service-oriented architecture (SOA) model.

    Servicemix uses Spring for its configuration. If you want to configure components and endpoints for your project, you should create a Spring XML configuration file called beans.xml and place it under the directory ‘src\main\resources\META-INF\spring’ in your project; it will then be automatically identified and initialized by Servicemix.

    It has several components that you can activate just by including their namespaces in the beans.xml file, like the following.

    xmlns:eip="http://servicemix.apache.org/eip/1.0", xmlns:http="http://servicemix.apache.org/http/1.0", xmlns:cxfbc="http://servicemix.apache.org/cxfbc/1.0".

    This informs Servicemix to load and initialize the component (of course, the component must be in the installed state in your OSGi container).
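    Putting this together, a minimal beans.xml skeleton that activates the Servicemix HTTP component would look roughly like the following (endpoint definitions omitted; this simply restates the pattern of the configuration files shown earlier):

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
    	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    	xmlns:http="http://servicemix.apache.org/http/1.0"
    	xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
    						http://servicemix.apache.org/http/1.0 http://servicemix.apache.org/http/1.0/servicemix-http.xsd">

    	<!-- http:consumer / http:provider endpoint definitions go here -->

    </beans>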

    You can see the list of available Servicemix components at the following URL: Servicemix.

    The basic working principle of Servicemix components is that you define Endpoints with them; when a Servicemix component finishes its work, it sends its result message to the endpoint defined in the targetService property. The message follows this chain to the end, and then the result is sent back to the consumer.

    The Servicemix documentation indicates the namespaces for the components.

    OSGI

    OSGi (Open Service Gateway Initiative) defines a service platform for Java: a module system allowing Java to behave like a complete and dynamic component model, something that does not exist in standard Java/JVM environments.

    It makes it possible for several versions of Java Packages/Classes to exist in a parallel, modular fashion. These Java Classes/Packages are deployed in the form of bundles, which can be installed, started, stopped, updated and uninstalled without requiring a reboot.

    Service registries also allow bundles to detect the addition or removal of services at runtime.

    The building blocks of OSGI are Bundles, which are groups of Java classes and additional resources equipped with a manifest (MANIFEST.MF) describing their functionality and dependencies; the Lifecycle, defining the state of a Bundle (Installed, Resolved, Starting, Active, Stopping, Uninstalled); and Services, which can be registered under any Java Interface.
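    As a minimal sketch, the manifest of such a bundle (the names below are illustrative, not taken from a real artefact) contains entries like:

    	Bundle-ManifestVersion: 2
    	Bundle-SymbolicName: org.salgar.osgi-b2b-producer
    	Bundle-Version: 1.0.0.SNAPSHOT
    	Export-Package: org.salgar.b2b;version="1.0.0"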

    OSGI Bundles and Maven

    Although the ways Maven and OSGI handle bundles are quite similar, there are still differences.

    In a Maven pom, a dependency definition looks like the following.

    <dependency>
    	<groupId>org.apache.servicemix</groupId>
    	<artifactId>servicemix-common</artifactId>
    	<version>${servicemix.version}</version>
    	<scope>provided</scope>
    </dependency>
    

    Here we define which groupId and version this artefact belongs to.

    Things are a little bit different for OSGI: the dependencies are defined in a file called MANIFEST.MF, which looks like the following (I left out some details because these files can be huge).

    Import-Package:
    javax.jbi.component;version="1.0",javax.jbi.management;version="1.0",javax.jbi.messaging;version="1.0"
    

    As you see, the notation is different but the information it contains is still quite similar: groupId, artefact and version. Unfortunately, Maven and OSGI are not totally compatible; without the use of some Maven plugins (like Tycho) they cannot fully resolve each other's dependencies.

    Fortunately, there are some Maven repositories built with the idea of creating Maven- and OSGI-compatible artefacts, so they contain both the pom.xml and the MANIFEST.MF information.

    If you look at our root pom, you will see a configuration element introducing one of those repositories.

    <repository>
             <id>fuse</id>
    	 <name>FUSE Repository</name>
    	 <url>http://repo.fusesource.com/maven2/</url>
    	 <snapshots>
    		<enabled>false</enabled>
    	 </snapshots>
    	 <releases>
    		 <enabled>true</enabled>
    	 </releases>
    </repository>
    

    If you download this project and want to build it with Maven, you should check that you have internet access to this repository.


    Extremely Ajaxified Web Application with Spring Webflow, Primefaces and State Machines

    This post is a feasibility study to see how we can extend Spring Web Flow's functionality so it can help us more with Extremely Ajaxified Web Applications, with the help of Model Driven Software Development, State Machines and a proven JSF library like Primefaces (you can look here to see how the feasibility study looks visually).

    The feasibility project is available on github.com, and I will give instructions here about how to build and deploy it. I paid special attention that all the technologies used in this project are Open Source, to give people a chance to use these ideas in their projects without being blocked by their project management with arguments about costs.

    I would like to mention that I capitalized the first letters of keywords representing important concepts for this blog. I know this does not conform to English grammar, but I like to emphasize important topics this way.

    The structure of the blog looks like this…

    *******SPECIAL NOTE – Spring State Machine: when I wrote this blog, there was no open source State Machine compatible with the requirements of this blog; now I can happily say there is one. I created another blog entry adapting the feasibility study to Spring State Machine. All the motivating reasons why this feasibility study was created are still explained here. If you want to understand why it is a good idea to use State Machines in a Web Application, please read this blog entry further. If you want to see the implementation details of a Spring State Machine based on a UML Diagram, please also read the other blog.
    ************

    *******SPECIAL NOTE – When I made the previous modifications to the blog to use the Spring State Machine via a UML Model, I was intrigued by the idea of figuring out how difficult it would be to create a Domain Specific Language (DSL) and create the Spring State Machine configuration that way. It turns out this is quite feasible as well; if you want to know how, please read my blog on the subject: XText, Domain Specific Language and Spring State Machine.
    ************

    *******SPECIAL NOTE – It seems that some dependency information of the MWE that I used for MDSD purposes has changed, causing extensive changes to the structure of the feasibility project. Instead of modifying this blog, I created a new blog entry explaining the changes. In the new blog I will not re-tell the subjects that have not changed, but only explain the parts affected by the technology change in MWE. So if you want to learn more about the idea expressed here, please continue reading; otherwise, the new blog entry is reachable at MWE2 and UML.************

    Foreword:

    In 2009 I was a member of a Software Development team fighting with the complexities of an Extremely Ajaxified Web Application. I had several years of Web Application development experience, but I was quite unprepared for the challenges that an Extremely Ajaxified Web Application offered.

    Normally, J2EE Web Applications are form-based applications in which the user navigates from one Web Form to the other. This stayed true for a really long time in the IT world, until the business side of IT discovered that one can do really cool things with Ajax functionality.

    The direct consequence was that the functionality we would normally spread over twenty Web Forms was presented inside one Web Form. The amount of complexity such a decision can bring to a project was a complete surprise to me and to the other members of our team.

    When the old means of solving these problems were not enough, we had to find new, innovative ways like State Machines, UML, Spring Web Flow, Primefaces, Comet Technologies and Asynchronous Processing, and learn their limits, especially the limits of Spring Web Flow, which you can read about here. Primefaces and Ajax also integrate quite seamlessly; if you look at the ‘xhtml’ files on GitHub you will find nice examples of how to use Primefaces components in an Ajax environment.

    In this article I will try to explain the challenges and the solutions we found appropriate for the problem at hand. I started working on this article around 2010, only able to work in off hours and also dealing with family life 🙂 so it took a while to bring it to a publishable quality, but I still see the same problems and no good solutions in sight, so I think the points I discuss here are still valid.

    Problems:

    I have here a sample project that represents the problems we encountered in the real world and my proposed solutions for them.

    As I hinted previously, our problems started when the business side started loading too much responsibility onto a single Web Form in our application. Our team was built from a fairly experienced group of developers, but we were all quite new to the Ajax world. Naturally we went for proven solutions and thought the MVC model of JSF would be enough to solve our problems.

    We had to use JBoss, and naturally we chose the JBoss technology stack with Richfaces.

    At the beginning everything was OK, but in a short time we started reaching controller classes of 5000 lines of code because of the excessive responsibilities the views had.

    To make things spicier, we had to have an asynchronous working model. The backend systems of our partners were not able to reach the performance levels required of a modern web application. We had to send the requests to the partner systems but not wait for the responses directly; otherwise these requests would block the front-end clients and leave the impression of an unresponsive system. So the answers from the partner systems are delivered with Ajax (the Comet working model).

    As a general rule of thumb, if your system architecture looks like ours below, I strongly urge you to consider our solution here.

    aj_sm_visio
    picture 1

    With the business side requiring that much information on the forms, a view could practically have 10 areas in which we had to display results. Depending on what is contained in the results, each of these areas may contain up to 10 further elements that we have to turn on or off. So in total we had up to 100 components that we had to control with Ajax requests.

    For those of you familiar with JSF: the way to control this behavior in JSF is to write 100 render methods which decide whether an element should be rendered or not. This was really cumbersome, mainly because the logic deciding what is displayed was dependent on data and user rights.

    In the end it would look something like this.

    if (eventA) {
    	return true;
    } else if (eventB) {
    	return false;
    } else if (eventC) {
    	return true;
    }
    // ... and so on for dozens of further events
    

    snippet 1

    if (userRightA) {
    	if (partnerSystemResponseA) {
    		// do something
    	} else if (partnerSystemResponseB) {
    		// do something
    	}
    } else if (userRightB) {
    	// do something
    } else if (userRightC) {
    	// do something
    }
    

    snippet 2

    Additionally, we had more than one system delivering us information; combined with the asynchronous behavior, this produced a complexity that got out of hand really quickly, and we started having real problems.

    The cause was that every asynchronous data source meant another thread and an additional event for the application, which increased the parallelism. So we started seeing strange effects depending on the order in which events came into our application.

    The things we developed and tested in the development environment did not behave the same during testing, because the events came in a totally different order and timing.

    It was really difficult to diagnose the problems. We were not able to reproduce them in the development environment, and the render methods reached an extensive complexity. A simple render method had to be aware of all the business cases of the application to decide whether a certain GUI element should be shown to the end user or not.

    The render methods attached to the GUI as you saw above, the distributed logic and the global data container made it difficult to stabilize the project via Unit Tests, and in some cases impossible to do so.

    That was the moment to press the panic button and search for alternative solutions (and I suspect you may be at this stage yourself while reading this article).

    From this point on we will go into a very detailed discussion of our technical path; if you want to see how we reached the proposed solution, please read further. If you are only interested in the end results, please go to the chapter ‘Tech Demo’.

    Detailed Analysis:

    The first problem was our lack of experience with Ajax. We had a group of developers really experienced with single-threaded, MVC-style web applications, but no particular expertise with Ajax applications. We were believers in proven frameworks, so we started our research there.

    The logical starting point was the MVC model of JSF, which has one backing bean deciding what should happen in the view. That means if you want to show an element in the View, the backing bean will contain a render method that decides whether the business logic allows the element to be displayed or not.

    That model is fine for, let's say, 5 elements; maybe it is still OK for 6, 7, 10, 20,… elements, but where is the limit? It seems the limit must be much lower than what we faced, because as I mentioned previously, the business side of the project was fascinated with the possibilities the Ajax world provides, and they loaded the view with functionality that would normally belong to at least 10 views…

    So we ended up having at least 100 elements and extremely large backing beans (controllers) approaching 5000 lines of code. At this point it was clear something was wrong.

    Unfortunately, the architecture team didn't get the clue that we had a different type of problem. They tried to solve the problem, which they thought was only code clutter, with a proven recipe: divide and conquer.

    They tried to reduce the responsibility of the backing beans by creating smaller pieces and apply indirections to delegate the business logic.

    I accept that in a classical single threaded web application, that would be exactly the solution I would try but we didn’t have a classical web application.

    So the solution, instead of providing clarity, had caused more chaos because instead of trying to follow what happening in the system in single place, we have to track over several places which are reaching decisions influencing the flow of the application independently.

    As you may see the code was looking better but that didn’t bring any improvement to the quality of life of our project.

The problem stayed the same: we were getting effects in production that we couldn't explain. Our only tool was reading production logs to understand what was going wrong, but reading gigabyte-sized log files was not easy, and the several threads running business logic and writing their log information at the same time were not helping either.

What we needed was a system that reports to us when it encounters a use case that we didn't think about in our application. The question was how to bring such discipline to the project.

Even single-threaded applications contain such use cases; for us the problem was aggravated because we had multiple threads running the business logic.

To our luck, there is such a method: the Finite State Machine.

Finite State Machines are mainly used in rich GUI applications and mostly in games; for these reasons, at the very beginning we didn't consider them for our web application. Rich GUI applications and games need State Machines because they process several events at the same time from several different sources. It was not clear to us at the beginning that our application would confront the same challenges.

There were some implementation efforts for the State Machine concept in web applications, mainly Spring Web Flow. Naturally we chose to look at it as our first solution option.

Spring Web Flow was developed to bring discipline to form based applications. It documents at the XML level which actions (events) cause transitions in the system from one state to another. This helps to identify the number of use cases and their States.
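To give a feeling for this, a minimal Web Flow definition of the kind meant here could look like the following (the flow and state ids are illustrative, not taken from our project):

<flow xmlns="http://www.springframework.org/schema/webflow"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://www.springframework.org/schema/webflow
          http://www.springframework.org/schema/webflow/spring-webflow-2.0.xsd">

    <view-state id="searchForm">
        <!-- The 'search' event moves the flow to the result view -->
        <transition on="search" to="searchResults"/>
    </view-state>

    <view-state id="searchResults">
        <transition on="back" to="searchForm"/>
    </view-state>
</flow>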

The unfortunate fact was that Spring Web Flow was designed before Ajax 1.0 days. The solutions it presents are mainly for form based classical web applications, where any user action produces a complete post back to the web server and Spring Web Flow only controls the navigation from one web form to the other (of course this is an over-simplified view of Spring Web Flow; it does a lot for Action processing and Guard Condition evaluation, and offers some Ajax functionality).

So while we had nearly no form-to-form navigation (our complex application runs inside only one Web Form), was Spring Web Flow completely useless for us? What was its added value? It was clear that we needed to extend the functionality of the State Machine it provides to control the functionality located inside our one huge Web Form, but it was able to give us really good groundwork, especially for controlling the lifecycle of the web application.

After discovering this reality, it was clear that we had to improve the capabilities of the State Machine that Spring Web Flow contains. To do that, the first question we had to answer was how we were going to transfer our world to this new State Machine.

Should we define our own Domain Specific Language for our use cases, or is there anything we can use out of the box? The natural answer to this question is of course to use UML, which some people much more clever than us standardized already.

The problem there: there is no standard State Machine implementation out there that can take that UML input and execute something. So the other alternative we had was to take a product out there which has its own specific DSL and use it.

My personal choice, as a graphical person, was to use one of the many available open source UML tools to define the contents of a State Machine, instead of fighting with megabyte-sized XML files, which in my opinion would kill the acceptance of the solution in the long run. If you have a long running project, the people working on it will change all the time; one person who is an expert in the DSL you created can deal with it, but when the next person arrives, he or she has to spend a considerable amount of time learning this DSL. In comparison, in this age of the IT sector nearly every new employee you will find has some understanding of UML.

The problem we had to solve was to make UML understandable to an existing State Machine Framework, or to implement the State Machine Framework ourselves.

The existing Java State Machine Frameworks are built from a completely different point of view than web applications, which makes the integration quite complex. Also, most of the frameworks implement the full-fledged State Machine spec, which was not necessary for our requirements.

And personally, I didn't want to bring another framework into our project, which was already too complex.

One more argument that prevented us from using an existing framework: none of them has a really good concept covering Nested State Machines. This is a really important point in State Machine theory. A very common problem in State Machine design is State Explosion: if you design a small state machine, everything is fine and dandy, but when the number of use cases increases and transitions from every state to every other state become possible, things get out of control very quickly.

Fortunately there has always been a way to control such a problem, called divide and conquer. It can be explained as follows: let's say you have a system in which you have to search for a customer. You can design one State Machine which keeps an overview of the whole system under control, and several other State Machines with smaller responsibilities, like a State Machine doing the actual search, a State Machine controlling the authentication system and a State Machine controlling the error system, and nest all of these State Machines under the State Machine controlling the global use case.

With this method, the complexity of every single State Machine stays inside its black box, and each one only presents the necessary information beyond its boundaries, which eliminates the phenomenon called State Explosion.

Considering these facts, it was more practical to develop a small framework which understands the UML Model of the State Machine as its DSL and implements only the necessary feature set of a State Machine.

So this is the way we followed: we chose an Open Source UML Modelling Platform, the TOPCASED application, an Eclipse based graphical UML Modelling tool. The biggest plus of this tool is that it uses the Eclipse standard implementation of XMI to save the models, which proved to be very critical in the next step. (There are other, commercial UML tools that save their model information as XMI, like Borland Together, but the problem is that they do not follow the specifications very closely, and this was always causing problems with M2T (Eclipse Model to Text Framework) or openArchitectureWare, which we will use for code generation.)

Now that we had a standard way to define our State Machine, how could we convert this information to executable code for our web application? Since we were planning to use Spring Web Flow to control the lifecycle of our State Machine parallel to the lifecycle of the Web Application, it seemed only natural to define our executable code with the Spring Bean Definition Language (Spring's own DSL).

To achieve this, we have to use one of the principal concepts of Model Driven Architecture and use a Model to Text (M2T) framework to translate our model information to the Spring Bean Definition Language. The natural choice was again the Eclipse hosted M2T Framework (the old openArchitectureWare).

So the plan is to convert every State Machine defined in the UML model to a State Machine Bean, including this State Machine's Events, States, Transitions, Guards and Actions as beans too (if you don't have a good grasp of these concepts, please read the Appendix about State Machines first).

So our State Machine would be programmed as an Abstract State Machine, and these Spring Bean Configuration files will tell it how to execute the Business Case.

Well, some of you might be skeptical at the moment about the choice of UML. UML, and to some level Model Driven Architecture, has gotten a really negative reputation lately. If you propose to your project leader that you have to base your project on UML, you mostly get an eye roll.

When UML first became popular, it really did bring big overhead to projects with little return on investment: when your project starts receiving changes to its initial system design and nobody invests time to update the model, it stops being a living document and a useful tool.

Especially with the popularization of Agile methods, project management started seeing UML and Model Driven Architecture as a burden, because the model is established during the initialization phase of the project and becomes outdated with every change that comes to the project.

In our case, the State Machine UML is going to be a living document, because all our use cases are going to be represented as State Machines and any change to the use cases has to be transferred to the UML documents directly. This guarantees the model will always be up to date and never outdated.

One other great advantage of our methodology is testability. With a State Machine, by design, it is possible to build isolated Unit Tests independent from the GUI layer and test our use cases there, which I analyze deeply in the following Appendix.

    Startup Primers:

In this part I will try to explain the necessary technical background for understanding the detailed explanation of the example project that follows. One thing I want to point out here: in the following pages you will see that there is a lot of groundwork needed to make these concepts work, but don't be mistaken, once the groundwork is done you can reuse 90% of it for other projects, so there is a really big re-usability factor.

First of all, a state machine in a UML Diagram looks like this...

    picture_1
    picture 2

As you see, there is a start point and a final point for the system, plus States and the Transitions that connect them. To relate this to our system, it looks like this.

Let's say our system is started and its main responsibility is a customer search. When the web page is loaded we are in STATE_A, which represents that we are waiting for input from the user to start the search.

When the user gives the necessary input, we receive Event_A and we are ready to transition to STATE_B, which represents the situation that we found the customer (there can be more than one transition from one state to others; if all are triggered by the same event, they most probably contain GUARD conditions to decide which one should be chosen, like checking user rights or whether some quota is exceeded or not).

The search is executed by the ACTION of the transition; this is the place where the business logic should be executed for a chosen transition.

If this is the customer we wanted to find in our search, that would be the final state of our system. But let's say the user is not sure the system delivered the correct information and wants to see more details; then he can trigger Event_B and transition to STATE_C.

If the user now thinks this is not the correct customer, he/she can trigger Event_C to return to the initial search state and start a new search.

As you may see, the system only reacts to the commands that it knows. If you send Event_C while in STATE_A, the system will complain that we didn't teach it what to do for this Event: it will not be able to find a transition and will report it.
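To make this walkthrough concrete, here is a deliberately tiny sketch of the search example, independent of the framework developed later in this article (the real framework is far richer, with Guards, Actions and Control Objects; this only shows the principle):

enum SearchState { STATE_A, STATE_B, STATE_C }
enum SearchEvent { EVENT_A, EVENT_B, EVENT_C }

class TinySearchStateMachine {
    private SearchState current = SearchState.STATE_A;

    void dispatch(SearchEvent event) {
        if (current == SearchState.STATE_A && event == SearchEvent.EVENT_A) {
            current = SearchState.STATE_B; // the transition's Action runs the search
        } else if (current == SearchState.STATE_B && event == SearchEvent.EVENT_B) {
            current = SearchState.STATE_C; // the user wants to see more details
        } else if (current == SearchState.STATE_C && event == SearchEvent.EVENT_C) {
            current = SearchState.STATE_A; // back to a fresh search
        } else {
            // The system only reacts to what it was taught; anything else is reported.
            throw new IllegalStateException(
                    "No transition for " + event + " in state " + current);
        }
    }
}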

This is the biggest weapon we have against the chaos caused by the extremely Ajaxified (or multi-threaded) applications we have in the IT world at the moment: the ability of the system to tell us that something happened that we didn't teach it.

In the Web 1.0 world, the application does whatever it thinks fits the situation best, without telling the user anything. That might or might not fit the use case of the application.

Now I can hear you asking yourself: can we tell, from the design phase on, all the possible use cases of our application? Because if we can't, that means our application will stop in production instead of making its best bet.

Thankfully, the Finite State Machine fits really well with the iterative development approaches of the modern IT world. With each new iteration of your application you discover more, and you enhance your model with the new information until the software goes into production. If something still slips under the radar, there are some techniques, covered at later stages of this article, which can help solve such bad surprises.

After going through these processes, you will at some point discover the final set of States that your application can have (there can be unbelievably many, but I guarantee you they are finite). Each iteration places another piece in the puzzle.

    Tech Demo:

As a proof of concept of the above mentioned topics, I created a prototype project that uses these concepts. The project lives on github.com; everybody can download it, use it and change it to fit their needs.

It is basically a small scale version of our real life project.

The use case starts with the search for a customer. This is a direct search: when we start the search, the partner system directly delivers the response whether the customer is found or not (a synchronous system).

If a customer is found for the search criteria, the user has to authenticate the customer and approve that the customer is correct.

If the customer is authenticated, he might or might not give the right to join his customer information, in case the system found duplicate information about him.

The use case continues with a search for the orders of this customer. This simulates a long running search and is built asynchronously: the partner system receives the search request, acknowledges to our system that the request was received, starts processing and closes the connection. When the results are ready, it calls our system back and delivers the result.

There are several side scenarios, like the customer not being found, not authenticated, not allowed to join, authentication cancellation during the order search, or join cancellation during the order search.

Our state machine will cover and model all these cases.

    We have the following projects forming this Tech Demo.

    picture_2
    picture 3

To explain the picture above:

swf_statemachine is the main project containing the pom (Maven project information) for the whole project. I had to use the flat Maven project model because M2T can't figure out Maven's nested project structure.
swf_statemachine_domain is where our own State Machine implementation is modelled and where its API is generated from the model.
swf_statemachine_fornax_extension: Fornax is the creator of some Maven plugins, archetypes and cartridges (out-of-the-box code generation templates for Java, Spring, Hibernate) for M2T. We had to modify some templates for our purposes.
swf_statemachine_impl is the project responsible for the implementation of the API created by the swf_statemachine_domain project.
swf_statemachine_sm_model is the project in which we model the State Machines for our use cases. From the UML Model contained in this project, M2T will create the necessary artifacts (Spring configurations and Java classes to run these State Machines).
swf_statemachine_sm_model_impl: the previous project creates skeletons of the classes for our live state machine (Guards and Actions can be modelled in UML but can only be filled with business logic in the Java layer); this is the project in which we write the live Java code for the State Machine.
swf_statemachine_techdemo is the place where all parts of the puzzle are put together. It is the GUI layer containing JSF, Business Logic and State Machines.
swf_statemachine_techdemo_domain: since we are following Model Driven Software Development principles, this project creates domain objects from the UML Model via M2T, which are used in swf_statemachine_sm_model_impl and swf_statemachine_techdemo.

    Detailed Functionality of the Projects:

    swf_statemachine_domain:

The State Machine Framework that we are going to develop ourselves is modelled in this project. You can see the details in the following UML model.

    picture_3
    picture 4

Following Model Driven Software Development, this project models the fundamental elements of the State Machine Framework (like State Machine, State, Transition, Guard, Action) which we will develop ourselves.

    It uses Fornax Cartridges for M2T to create Java Code.

The M2T template 'Root.xpt' for interpreting the UML Model looks like the following.

    «DEFINE Root FOR uml::Model»
    	«EXPAND Root FOREACH (List[uml::Package])ownedElement»
    «ENDDEFINE»
    
/**
* Creates all packages
*/
    «DEFINE Root FOR uml::Package»
    	«EXPAND Root FOREACH ownedType.typeSelect(uml::Interface).select(e|e.getAppliedStereotypes().isEmpty)»
    	«EXPAND Root FOREACH ownedType.typeSelect(uml::Class).select(e|e.getAppliedStereotypes().isEmpty)»
    	«EXPAND Root FOREACH nestedPackage»
    «ENDDEFINE»
    
    /**
    * Creates all interfaces
    */
    «DEFINE Root FOR uml::Interface»
    	«EXPAND org::fornax::cartridges::uml2::javabasic::m2t::Interface::interface»
    «ENDDEFINE»
    
    /**
    * Creates all classes
    */
    «DEFINE Root FOR uml::Class»
    	«EXPAND org::fornax::cartridges::uml2::javabasic::m2t::Class::class»
    «ENDDEFINE»
    
    «DEFINE Root FOR Object»
    «ENDDEFINE»
    
    «DEFINE Root FOR PackageImport»
    «ENDDEFINE»
    

    snippet 2a

With this template, with the help of the Fornax templates '«EXPAND org::fornax::cartridges::uml2::javabasic::m2t::Interface::interface»' and '«EXPAND org::fornax::cartridges::uml2::javabasic::m2t::Class::class»', we create the necessary Java elements.

The M2T workflow for this project looks like the following.

            <bean id="uml" class="org.eclipse.xtend.typesystem.uml2.UML2MetaModel"/>
    
    	<bean id="datatype" class="org.eclipse.xtend.typesystem.uml2.profile.ProfileMetaModel">
    		<profile value="platform:/resource/swf_statemachine_domain/src/main/resources/model/Datatype.profile.uml"/>
    	</bean>
    
    	<bean id="java" class="org.eclipse.xtend.typesystem.uml2.profile.ProfileMetaModel">
    		<profile value="platform:/resource/swf_statemachine_domain/src/main/resources/model/Java.profile.uml"/>
    	</bean>
            <component class="org.eclipse.xtend.typesystem.emf.XmiReader">
                    <modelFile value="model/domain.uml" />
                    <outputSlot value="model" />
            </component>
            <component id="generator" class="org.eclipse.xpand2.Generator"
    		skipOnErrors="true">
    		<fileEncoding value="ISO-8859-1" />
    		<!--metaModel idRef="EmfMM" /-->
    		<metaModel idRef="uml" />
    		<metaModel idRef="datatype" />
    		<metaModel idRef="java" />
    		<expand value="template::Root::Root FOR model" />
    		<outlet path="src/generated/java" >
    			<postprocessor class="org.eclipse.xpand2.output.JavaBeautifier" />
    		</outlet>
    	</component>
    

    snippet 2b

Here we define the UML profiles (some custom UML Types for our workflow), the model the workflow should expand, and the model elements it should expand: "template::Root::Root FOR model".

    swf_statemachine_impl:

This is the implementation of the State Machine framework (scoped to our demands; it is not a full-fledged State Machine implementation).

The main magic happens inside the handleEvent method of the StateMachineImpl class. Considering that the State Machine concept is there to control event processing, that makes sense.

It looks like the following.

private boolean handleEventInternal(Event event) {
    boolean eventHandled = false;

    // First give a possible Nested (Sub) State Machine of the current
    // state the chance to handle the event ('Ultimate Hook' pattern).
    if (this.getActualState().getSubMachine() != null) {
        AbstractStateMachine subStateMachine =
                (AbstractStateMachine) this.getActualState().getSubMachine();

        eventHandled = subStateMachine.dispatch(event);
        if (eventHandled) {
            return true;
        }
    }

    // Search the outgoing transitions of the current state for one that
    // matches the event type and whose guard condition allows it.
    for (Transition transition : getActualState().getOutgoingTransitions()) {
        if (event.getEventType().equals(transition.getEventType())) {
            if (transition.evaluateGuard(event, this)) {
                transition.processAction(event, this);

                State previousState = this.getActualState();
                boolean stateChanges =
                        !previousState.equals(transition.getTargetState());

                // Run the exit action only if we are really leaving the state.
                if (stateChanges && previousState.getExitAction() != null) {
                    processExitAction(previousState, event);
                }

                setCurrentState(transition.getTargetState());

                // Run the entry action only if we really entered a new state.
                if (stateChanges
                        && transition.getTargetState().getEntryAction() != null) {
                    processAction(transition.getTargetState().getEntryAction(),
                            event);
                }

                return true;
            } else if (log.isDebugEnabled()) {
                log.debug("Guard condition "
                        + transition.getGuard().getClass().toString()
                        + " for the transition: "
                        + transition.getName()
                        + " is not letting us execute the transition for State Machine: "
                        + this.getName().getStateMachineName() + "!");
            }
        }
    }
    log.error(getName()
            + ": We cannot find any transition for this event type in this state! State: "
            + getActualState().getName()
            + " Event Type: "
            + event.getEventType().getEventName()
            + " the event is for the state machine: "
            + event.getEventType().getStatemachineName()
            + " and we are at the state machine: "
            + this.getName().getStateMachineName()
            + " or the guard conditions are not letting us execute the transition!");

    return false;
}
    

    snippet 3

A State Machine should process events on a first in, first out principle.

That means if it receives an event while it is processing one, it can't process both in parallel. The reason: while it is processing an event, the State Machine is in an undefined state. You will shortly see that, to know which transition to execute next, we must know which State we are currently in; if the State is about to change when the second event arrives, we can't say exactly which Transitions are valid for that second event.

For this reason, if the handleEvent method is currently processing an event, it queues any further event it receives until the first one is completely processed.
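A minimal sketch of that queueing discipline, shown as members of a state machine class, might look like the following (the field names and bodies are my illustration, not the project's actual code):

// Illustrative run-to-completion discipline: while one event is being
// processed, further events (for example, fired from inside an Action)
// are queued and handled afterwards, one by one.
private final java.util.Queue<Event> eventQueue = new java.util.LinkedList<Event>();
private boolean processing = false;

public boolean handleEvent(Event event) {
    eventQueue.offer(event);
    if (processing) {
        // We are already inside handleEvent further up the call stack
        // (e.g. an Action fired a new event): let it wait in the queue.
        return true;
    }
    processing = true;
    try {
        boolean handled = true;
        Event next;
        while ((next = eventQueue.poll()) != null) {
            handled = handleEventInternal(next);
        }
        return handled;
    } finally {
        processing = false;
    }
}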

After the state machine decides it can process an Event, it looks at the current state and searches for a valid Transition for the event (the handleEventInternal method). To realise that, it gets all the Transitions from the current State and starts evaluating their guard conditions; if a Guard Condition returns a positive response, it executes the Action defined on that Transition.

At this point, I guess I have to give a brief example of what these Guard Conditions can be. Let's say we are at the search form of our application. The user must give a customer name as input so we can search our database for it. To be allowed to do this, the user must have RIGHT_A. Our application is initially in the WAITING_INPUT state; when the user enters the search criteria and clicks the search button, the State Machine checks what the current State is, in our case WAITING_INPUT, and then checks whether this State has a transition for this Event or not.

Let's say there is a TRANSITION_A, and this Transition has a GUARD_A which checks that the user starting the search has RIGHT_A. If he does, the State Machine is allowed to take the Transition and execute TRANSITION_A's Action, which starts the customer search.

    It is as simple as that.
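For illustration, such a GUARD_A could be implemented roughly like this (a sketch: the Guard interface and its method signature are assumptions modeled after the Action interface shown later in snippet 15, and UserRightService is a hypothetical collaborator):

public class UserHasRightAGuard implements Guard {
    // Hypothetical service that knows the rights of the current user
    private UserRightService userRightService;

    public boolean evaluateGuard(Event event, AbstractStateMachine stateMachine) {
        // TRANSITION_A may only fire if the current user owns RIGHT_A
        return userRightService.currentUserHasRight("RIGHT_A");
    }

    public void setUserRightService(UserRightService userRightService) {
        this.userRightService = userRightService;
    }
}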

Careful eyes will catch a detail at the beginning of this method: before starting to search for a possible transition for the Event, we always check first whether the current State has a Nested Sub State Machine. This is a very important notion, mainly related to controlling the State Explosion problem, which I will explain later in this Appendix.

One really interesting object in the UML diagram that I want to point out is the Control Object. This is the heart of the state machine: it represents the meaning of a state in the eyes of our real life application. Do you remember the discussion at the beginning, that our web form consisted of 20 or more JSF elements and every one of them can be turned on and off depending on the state of our application?

This is the place where we control this behavior. Let's say we are in the state WAITING_FOR_INPUT; in this state the only things we can display are the input fields and the submit button.

In this state our control object will look like this:


    ControlObject
    displayUsernameInput = true
    displaySubmitButton = true

    displaySearchRunningLabel = false
    displaySearchResultTable = false

    snippet 4

Now we click the submit button and our application switches to the SEARCH_RUNNING state, in which the control object will look as follows.


    ControlObject
    displayUsernameInput = false
    displaySubmitButton = false
    displaySearchRunningLabel = true
    displaySearchResultTable = false

    snippet 5

And when the search is complete, we switch to the CUSTOMER_FOUND state.


    ControlObject
    displayUsernameInput = false
    displaySubmitButton = false
    displaySearchRunningLabel = false
    displaySearchResultTable = true

    snippet 6

Considering that the values of the Control Object can only be changed from the State Machine, this is a really powerful mechanism. We can protocol at any time which values the variables have, and if a value changed, why it changed.

Developers who fight all the time with global data containers in a multi-threaded environment can appreciate the power of such a mechanism much better.
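A control object itself can stay a trivial class. A sketch along the lines of snippet 4 could look like this (the class and the logging are my illustration, not the project's actual code):

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class CustomerSearchControlObject {
    private static final Log log =
            LogFactory.getLog(CustomerSearchControlObject.class);

    private boolean displayUsernameInput = true;
    private boolean displaySubmitButton = true;
    private boolean displaySearchRunningLabel;
    private boolean displaySearchResultTable;

    public boolean isDisplaySearchRunningLabel() {
        return displaySearchRunningLabel;
    }

    // Setters are only called from State Machine Actions, so every change
    // is attributable to a concrete Transition and can be protocolled.
    public void setDisplaySearchRunningLabel(boolean value) {
        log.debug("displaySearchRunningLabel: " + displaySearchRunningLabel
                + " -> " + value);
        this.displaySearchRunningLabel = value;
    }

    // ... analogous getters and setters for the other flags
}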

Another beauty of the solution is visible in the following .xhtml snippet.

    <p:outputPanel id="customerSearchAuthentication-empty-panel">
    		<p:outputPanel id="customerSearchAuthentication-panel" ajaxRendered="true"
    			layout="block" style="border:1px solid" rendered="#{customerSearchAuthenticationBB.customerSearchAuthenticationPanelRendered}">
    			<h:form id="customerAuthenticationForm" >
    				<h:panelGrid columns="1">
    					<h:outputText value="If you authenticate the customer please click below!" />
    					<p:selectBooleanCheckbox id="customerSearchAutheticationCheckBox" title="Authenticated" value="#{customerSearchAuthenticationBB.customerAuthenticated}" onchange="customerAuthenticated();" />
    					<p:remoteCommand name="customerAuthenticated" action="onCustomerAuthenticated" update=":customerSearchAuthentication-empty-panel,:customerSearchJoin-empty-panel,:customerSearchOrderLoading-empty-panel,:customerSearchOrder-empty-panel" />
    				</h:panelGrid>
    			</h:form>
    	    </p:outputPanel>
        </p:outputPanel>
    

    snippet 7

The interesting part is here: 'rendered="#{customerSearchAuthenticationBB.customerSearchAuthenticationPanelRendered}"'. This is how the .xhtml code accesses the information that dictates whether it should be rendered or not.

In most of today's JSF applications, the render methods look like the following.

IF (A TRUE) THEN
    THIS
ELSE IF (B TRUE) THEN
    THIS
ELSE IF (C TRUE) THEN
    THIS

    snippet 8

And most of the time A, B and C are global variables, which makes testing the whole business logic extremely difficult.

But now look at how our Backing Bean implements the isCustomerSearchAuthenticationPanelRendered() method.

public boolean isCustomerAuthenticated() {
    CustomerSearchAuthenticationCO customerSearchAuthenticationCO =
            (CustomerSearchAuthenticationCO) stateMachine.getControlObject();
    return customerSearchAuthenticationCO.getCustomerAuthenticated();
}
    

    snippet 9

Yes, you are seeing it correctly: there is not a single if statement. The correct value representing whether this part of the GUI should be rendered or not is placed into the variable in the Control Object when an Event is received by the State Machine.

So you don't have to test every single part of the .xhtml nor every single part of the Backing Bean; the only thing you have to test is the State Machine and whether or not the Control Object has the correct values.
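For illustration, such a test could look roughly like this (a sketch: JUnit 4 and plain Spring context loading are assumed, the control object accessor names are my invention, framework imports are omitted, and a real test would also have to provide the 'flow' scope used in the bean definitions):

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class CustomerSearchSMTest {
    @Test
    public void startSearchFlipsControlObjectFlags() {
        ApplicationContext context = new ClassPathXmlApplicationContext(
                "applicationContext-statemachine-customersearch.xml");
        AbstractStateMachine stateMachine =
                (AbstractStateMachine) context.getBean("CustomerSearchSM");

        // Fire the event the search button would normally produce
        Event startSearch = new Event();
        startSearch.setEventType(
                CustomerSearchSM_EventEnumerationImpl.onStartSearch);
        stateMachine.handleEvent(startSearch);

        // The whole business case is verified via the Control Object
        CustomerSearchCO controlObject =
                (CustomerSearchCO) stateMachine.getControlObject();
        assertFalse(controlObject.isDisplaySubmitButton());
        assertTrue(controlObject.isDisplaySearchRunningLabel());
    }
}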

    swf_statemachine_sm_model:
This is the project in which we model our business cases as UML State Machines.

From this model, M2T will create executable State Machines via Spring and Java.

The following screenshot shows the State Machine Diagram of our technical showcase.

    picture_4
    picture 5

As our use case defines, when the application starts, it waits for the user input for the customer search (WAITING_CUSTOMERSEARCH_START State). After the user enters enough information for the customer search, he/she clicks the customer search button (START_SEARCH Event) and we switch to the next state (CUSTOMERSEARCH_RUNNING).

After the partner system that is searching for the customer reports that the customer is found, it signals our State Machine (CUSTOMER_FOUND Event); the State Machine switches to the CUSTOMER_FOUND State and waits for the further events that are valid for a found customer.

In this case, the only acceptable event is CUSTOMER_AUTHENTICATED_CLICKED (if, through a programming mistake or a use case that was not discovered until now, it receives another event, the State Machine will report this and complain about it; this is the power of the State Machine concept, it provides us an iterative way to complete our Use Case with each discovery).

A Customer Authentication event takes us to the next possible State in our application (CUSTOMER_AUTHENTICATED State). From this point on, the number of events we have to respond to increases. It is possible that our authentication is revoked (CUSTOMER_AUTHENTICATION_CLICKED Event), or our use case continues with Customer Joined (CUSTOMER_JOINED_CLICKED Event) (I just made up this use case; it practically says that if the customer previously exists in the system, he can be merged with the previous information).

In this state, an asynchronous search for the orders of the customer is also implicitly started. The Partner System is contacted and a request is sent. The partner system acknowledges that the request was received, then closes the connection. When the results are ready, the partner system calls us back with the results.

Now this is a further complication of AJAX applications: if the user triggers the CUSTOMER_JOINED_CLICKED Event and the Partner System has delivered the results, we have to display the Orders (CUSTOMER_JOINED State), but if the Partner System has not delivered the results and the search for the customer orders continues, we have to display an order loading screen.

This is decided with the available transitions and their guard conditions. In our state machine model there are 2 Transitions on the CUSTOMER_AUTHENTICATED State which are ready to accept the CUSTOMER_JOINED_CLICKED Event. The State Machine goes over these Transitions and asks their Guard Conditions which one of them should execute. The Guard Condition checks whether an order search is running or not. If it is still running, we display the 'Order Search Still Running' message (ORDERS_LOADING State). If the Order Search is complete, the second Transition's Guard Condition takes us to the display orders screen (CUSTOMER_JOINED State).

At this point this Technology Demo reaches its end State (which can be further expanded if needed). Of course there are some additional use cases that show how useful a State Machine can be when reacting to events, like what happens when the Customer revokes the authentication or un-joins the customer. These sorts of side scenarios are the real power of State Machines.

Probably you can look at the main use case and say 'I would never develop this with a State Machine as described above', but bear in mind that you get the real benefits of a State Machine when the user starts doing unexpected things like removing the authentication, when the Partner System starts sending unexpected error messages, when programming mistakes happen and so on. As it gets more and more complex, you will need the State Machine more and more.

    Model to Code:
Now at this point we have to clarify a little how we get from this UML model to an executable Java State Machine. You can find a more detailed discussion about M2T in the following Appendix.

The M2T (Model To Text) mechanism will create the fundamental structures that know our use cases, and in a further step we will fill these structures with live code.

If you check the following directory, you will see that our UML Model is stored in XMI format.

    picture_5
    picture 6

This is the format that M2T is able to interpret to create Java and Spring code. A small sample of the UML file looks like the following.

For example, the Customer Search State Machine looks like this.

    <packagedElement xmi:type="uml:StateMachine" xmi:id="_vsIb0H1REeCgupXlFDV_aQ" name="CustomerSearchSM" submachineState="_BWlu8k3XEeOE05lJ4YwKXQ">
      <region xmi:id="_vsIb0X1REeCgupXlFDV_aQ" name="Region">
        <subvertex xmi:type="uml:Pseudostate" xmi:id="_2m3ZUH1REeCgupXlFDV_aQ" name="start"/>
        <subvertex xmi:type="uml:State" xmi:id="_41VcAn1REeCgupXlFDV_aQ" name="WAITING_CUSTOMERSEARCH_START"/>
        <subvertex xmi:type="uml:State" xmi:id="_NNY6Qn1SEeCgupXlFDV_aQ" name="CUSTOMERSEARCH_RUNNING"/>
        <subvertex xmi:type="uml:State" xmi:id="_gvoDIoEpEeC0au1QwVWf1Q" name="CUSTOMER_FOUND"/>
        <subvertex xmi:type="uml:State" xmi:id="_X3DXgobdEeCgzO61Ewoybw" name="CUSTOMER_AUTHENTICATED"/>
        <subvertex xmi:type="uml:State" xmi:id="_qRRQ0odxEeCgzO61Ewoybw" name="CUSTOMER_JOINED"/>
        <subvertex xmi:type="uml:State" xmi:id="_9kBRUMQhEeCYBJYY66CQSA" name="ORDERS_LOADING"/>
        <transition xmi:id="_CqMBRn1SEeCgupXlFDV_aQ" name="InitialTransition" kind="local" source="_2m3ZUH1REeCgupXlFDV_aQ" target="_41VcAn1REeCgupXlFDV_aQ"/>
        <transition xmi:id="_WYYDJX1SEeCgupXlFDV_aQ" name="SearchRunningTransition" kind="local" source="_41VcAn1REeCgupXlFDV_aQ" target="_NNY6Qn1SEeCgupXlFDV_aQ">
          <trigger xmi:id="_cOjzYH1SEeCgupXlFDV_aQ" name="onStartSearch" event="_kvhH0H1SEeCgupXlFDV_aQ"/>
         </transition>
         <transition xmi:id="_lJ_8tYEpEeC0au1QwVWf1Q" name="CustomerFoundTransition" source="_NNY6Qn1SEeCgupXlFDV_aQ" target="_gvoDIoEpEeC0au1QwVWf1Q">
           <trigger xmi:id="_l_gBUIEpEeC0au1QwVWf1Q" name="onCustomerFound" event="_qTnh4IEpEeC0au1QwVWf1Q"/>
          </transition>
          <transition xmi:id="_cQF0VYbdEeCgzO61Ewoybw" name="CustomerAuthenticatedTransition" source="_gvoDIoEpEeC0au1QwVWf1Q" target="_X3DXgobdEeCgzO61Ewoybw">
            <trigger xmi:id="_dXwnAIbdEeCgzO61Ewoybw" name="onCustomerAuthenticatedClicked" event="_PaRzsIeGEeCgzO61Ewoybw"/>
          </transition>
          ............
     </region>
    </packagedElement>
    

    snippet 10

These are logical XML elements that are easy for the M2T framework to interpret, and they are actually human readable too.

Of course, we need the M2T DSL to create the artifacts we need from this UML model; a small snippet looks like the following (in the Appendix, I give a much clearer explanation of how these templates work).

These DSL files are located in the following directory.

    picture_6
    picture 7

The M2T DSL looks like the following.

    «DEFINE StateMachines(uml::Model model) FOR uml::StateMachine»
    «IF this.getAppliedStereotype("swf_statemachine::SwfStateMachine")!= null»
    «IF !this.getValue(this.getAppliedStereotype("swf_statemachine::SwfStateMachine"),"submachine")»
    <!-- «name.toFirstUpper()» State Machine -->
    
    «LET name AS localName»
    «EXPAND StateMachinesCommon(model, this, localName)»
    «ENDLET»
    «ENDIF»
    
    «ENDIF»
    «ENDDEFINE»
    

    snippet 11

The above snippet will create a State Machine in the Spring configuration files for each UML State Machine that we have in the model.

With the implementation of the Guard Conditions, Actions and Control Objects, we will then fill these structures with functionality.

For now, when M2T executes against this UML model, for the above mentioned use cases, it will create the following artifacts.

– First, the State Machine definitions expressed as Spring Beans (that is the beauty of this solution: we are not inventing yet another DSL to express this information but are using the Spring notation, which is well known to nearly every above-average developer).

– Second, an abstract Java class representing the control object of every State Machine that exists in the system.

– Third, a Java Enumeration representing all the State Machines in our model.

– Fourth, a Java Enumeration of all States that exist in the model, per State Machine.

– Fifth, a Java Enumeration of all Events that exist in the model, per State Machine.

    picture_6B
    picture 8

M2T produces 3 XML files containing the bean definitions, each named after its State Machine: one contains the definition of the State Machine, States and Transitions, another is defined for the Control Object, and one for the Guard Conditions and Actions.

Now you may ask: why 3 files, isn't one enough? Actually it would be enough, but one of the biggest strengths of the State Machine is the testability of the whole State Machine and Business Logic in a test rig (as you can see in the following Appendix). If we put the definitions of the Control Objects, Guards and Actions into one single file, we could only make an integration test with the real life implementations. That can be acceptable for some projects, but for most scenarios out there unit testing is much more viable; with this structure, we can exchange the implementations of our Control Objects, Guards and Actions for test ones and test the whole State Machine in a test rig.

Now think about the power of this, if it is not clear to you yet: in how many web applications can you test the whole business logic through all the layers of GUI, DAO, Facades and all sorts of technological barriers that you have? With the above mentioned methodology you can test all the Business Logic that you will have in production in a test rig.

Other than State Machines and the methods I explained above, I do not know any other way to do it that extensively.
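Concretely, a hypothetical test context could reuse the generated State Machine definitions and only swap the Action bean for a mock (the mock class name is illustrative; the bean id is the one you will see in snippet 14 below):

<!-- Hypothetical test context: reuse the generated definitions but
     override the Action with a mock implementation for the test rig. -->
<import resource="applicationContext-statemachine-customersearch.xml"/>

<bean id="CustomerSearchSM.WAITING_CUSTOMERSEARCH_START.CUSTOMERSEARCH_RUNNING.SearchRunningTransition.action"
      class="org.salgar.swf_statemachine.test.MockCustomerSearchRunningAction"/>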

Now let's look at the first file that is created for us, applicationContext-statemachine-customersearch.xml.

The Spring Bean XML representing a State Machine will look something like this...

      <bean id="CustomerSearchSM" class="org.salgar.swf_statemachine.impl.StateMachineImpl"
          lazy-init="true" scope="flow">
        <!-- aop:scoped-proxy proxy-target-class="false" / -->
        <property name="name">
          <bean class="org.salgar.swf_statemachine.enumeration.StateMachineEnumerationImpl"
              factory-method="valueOf">
            <constructor-arg>
              <value>org.salgar.swf_statemachine.enumeration.StateMachineEnumerationImpl</value>
            </constructor-arg>
            <constructor-arg>
              <value>CustomerSearchSM</value>
            </constructor-arg>
          </bean>
        </property>
        <property name="controlObjects">
          <map>
            <entry key="CustomerSearchSMControlObject" value-ref="CustomerSearchSMControlObject"/>
          </map>
        </property>
        <property name="startState" ref="CustomerSearchSM.WAITING_CUSTOMERSEARCH_START"/>
        <property name="existingStates">
          <list>
            <ref bean="CustomerSearchSM.WAITING_CUSTOMERSEARCH_START"/>
            <ref bean="CustomerSearchSM.CUSTOMERSEARCH_RUNNING"/>
            <ref bean="CustomerSearchSM.CUSTOMER_FOUND"/>
            <ref bean="CustomerSearchSM.CUSTOMER_AUTHENTICATED"/>
            <ref bean="CustomerSearchSM.CUSTOMER_JOINED"/>
            <ref bean="CustomerSearchSM.ORDERS_LOADING"/>
          </list>
        </property>
      </bean>
    

    snippet 12

For example, above you see the Customer Search State Machine; its name is taken from the model and initialized in the Java object via the Enumeration.
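As a side note, the generated enumeration is presumably a plain Java enum; a sketch of what it might look like is below (the StateMachineName interface and its method are assumptions based on the getStateMachineName() calls in the later snippets). The two-argument valueOf in the XML above then presumably resolves to the inherited java.lang.Enum.valueOf(Class, String) factory, with Spring converting the class-name string into a Class.

// Sketch of a generated enumeration (illustrative; the StateMachineName
// interface is an assumption based on getStateMachineName() usage).
public enum StateMachineEnumerationImpl implements StateMachineName {
    CustomerSearchSM,
    FindCustomerSM,
    TechDemoSM;

    public String getStateMachineName() {
        return name();
    }
}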

Then a Control Object is defined in the Spring configuration (the actual implementation of this control object lives in the next project).

And the States that a State Machine has are defined in the Spring configuration.

And an Initial State is defined, the start-up state that the State Machine has when it is initiated via Spring.

As you see, Spring is totally sufficient to represent the information contained in the UML Model.

The next part contains the information about the States defined above. I am only placing one piece here, so as not to clutter too much.

    <!-- STATES - CustomerSearchSM -->
      <bean name="CustomerSearchSM.WAITING_CUSTOMERSEARCH_START" class="org.salgar.statemachine.domain.State">
        <property name="name">
          <bean class="org.salgar.swf_statemachine.enumeration.state.CustomerSearchSM_StateEnumerationImpl"
              factory-method="valueOf">
            <constructor-arg>
              <value>
                org.salgar.swf_statemachine.enumeration.state.CustomerSearchSM_StateEnumerationImpl
              </value>
            </constructor-arg>
            <constructor-arg>
              <value>WAITING_CUSTOMERSEARCH_START</value>
            </constructor-arg>
          </bean>
        </property>
        <property name="outgoingTransitions">
          <list>
            <!-- SearchRunningTransition -->
            <ref bean="CustomerSearchSM.transition_WAITING_CUSTOMERSEARCH_START_CUSTOMERSEARCH_RUNNING"/>
          </list>
        </property>
        <property name="incomingTransitions">
          <list> </list>
        </property>
      </bean>
    

    snippet 13

So the bean name "CustomerSearchSM.WAITING_CUSTOMERSEARCH_START" defines to which State Machine this State belongs and the actual name of the State as defined in the UML Model, which is represented by the Java Enumeration created by M2T.

The next pieces of information show which transitions can lead into this State, via the property 'incomingTransitions', and which transitions can occur from this State, via the property 'outgoingTransitions'.

The next part defines the transitions that we defined in the UML Model.

    <!-- TRANSITIONS -->
      <!-- TRANSITION - SearchRunningTransition -->
      <bean name="CustomerSearchSM.transition_WAITING_CUSTOMERSEARCH_START_CUSTOMERSEARCH_RUNNING"
          class="org.salgar.swf_statemachine.impl.transition.TransitionImpl">
        <property name="name" value="CustomerSearchSM.SearchRunningTransition"/>
        <property name="sourceState" ref="CustomerSearchSM.WAITING_CUSTOMERSEARCH_START"/>
        <property name="targetState" ref="CustomerSearchSM.CUSTOMERSEARCH_RUNNING"/>
        <property name="eventType">
          <bean class="org.salgar.swf_statemachine.enumeration.event.customersearchsm.CustomerSearchSM_EventEnumerationImpl"
              factory-method="valueOf">
            <constructor-arg>
              <value>
                org.salgar.swf_statemachine.enumeration.event.customersearchsm.CustomerSearchSM_EventEnumerationImpl
              </value>
            </constructor-arg>
            <constructor-arg>
              <value>onStartSearch</value>
            </constructor-arg>
          </bean>
        </property>
        <property name="guard" ref="defaultGuard"/>
        <property name="action" ref="CustomerSearchSM.WAITING_CUSTOMERSEARCH_START.CUSTOMERSEARCH_RUNNING.SearchRunningTransition.action"/>
      </bean>
    

    snippet 14

The name field of the Transition bean defines to which State Machine the Transition belongs and from which State to which State it leads. In this case CustomerSearchSM.transition_WAITING_CUSTOMERSEARCH_START_CUSTOMERSEARCH_RUNNING belongs to CustomerSearchSM and leads from WAITING_CUSTOMERSEARCH_START to CUSTOMERSEARCH_RUNNING.

Additionally, it also defines which Event can trigger this transition, via a Java Enumeration created by M2T which contains all the Events in the UML Model for this State Machine.

The property 'guard' represents the object that runs to decide whether this Transition should execute for this event or not. In the example above it is 'defaultGuard', a pass-through Guard Condition that always returns true; it is used only when no specific Guard Condition is annotated on the Transition in the UML Model.
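Such a pass-through guard is trivial; a sketch (with the same assumed interface as in the earlier guard sketch) would be:

// Approves every transition; used when the UML model names no specific Guard.
public class DefaultGuard implements Guard {
    public boolean evaluateGuard(Event event, AbstractStateMachine stateMachine) {
        return true;
    }
}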

A Transition can also have an Action, which is most of the time some logic that should occur during the transition.

In our case, it is defined on the action property. The notation 'CustomerSearchSM.WAITING_CUSTOMERSEARCH_START.CUSTOMERSEARCH_RUNNING.SearchRunningTransition.action' defines to which State Machine and which Transition this Action belongs; it also includes the information from which State to which State this Transition occurs.

This whole information is stored in the UML Model as follows: if you select the transition from WAITING_CUSTOMERSEARCH_START to CUSTOMERSEARCH_RUNNING in the UML Model,

    picture_7
    picture 9

you will see the following picture in TOPCASED.

    picture_8
    picture 10

In this picture, you see that a custom Stereotype (swf_statemachine::SwfTransition) that we defined before is assigned to this Transition (I will explain in the Appendix how to create and use custom Stereotypes).

This custom Stereotype contains the name of the Java class which implements the Action of the Transition (and the Guard, if one exists). When M2T reads this information, it instantiates the named Java class in Spring but leaves the responsibility of implementing the concrete class to the user (it is expected that this class lives in the swf_statemachine_sm_model_impl project, for testability purposes).

    picture_9
    picture 11

In this case, a CustomerSearchRunningAction is defined, and M2T places this information in the Spring configuration. The CustomerSearchRunningAction.java class has to be created by us in swf_statemachine_sm_model_impl and is responsible for initiating the search Action in the partner system.

Such a feature brings us to the theme of being able to test-rig our complete State Machine: instead of the real implementation of CustomerSearchRunningAction (which would require an integration test environment), in a test project we can implement a Mock version of the Action and unit test all of our Use Cases in a test rig with the help of the State Machine.

No other methodology or framework offers such a possibility at the moment; the way GUI and business logic are usually coupled nearly never allows such a complete test (even with MVC you do not get such a clean separation; think about how much logic is always attached to your backing beans, and the difficulties that creates when building test rigs).

As you may see from the above picture, you also have the possibility to provide a Guard implementation; if no input is given, M2T places the name of the DefaultGuard class, which always returns true.

    swf_statemachine_sm_model_impl:
This project contains the implementation of our control objects, actions and guards necessary to implement the business cases of our application.

Everything is organized under Java packages reflecting the name of the state machine.

In the test directory, you can also place the mock implementations that we can use in the previously mentioned Test Rig.

The following is a typical implementation of an Action in our project, which you can identify in the UML Diagram in picture 5. This Action occurs for the START_SEARCH event.

public class CustomerSearchRunningAction implements Action, Serializable {
    private static final long serialVersionUID = -181796739393959337L;

    public void processAction(Event event, AbstractStateMachine stateMachine) {
        // Control object of the master (GUI) state machine
        CustomerSearchCO customerSearchCO = (CustomerSearchCO) stateMachine
                .getControlObject();

        CustomerSearchStartEventPayload customerSearchStartEventPayload =
                (CustomerSearchStartEventPayload) event.getPayload();

        // Build the event that starts the slave FindCustomerSM state machine
        Event findCustomerSmEvent = new Event();
        findCustomerSmEvent
                .setEventType(FindCustomerSM_EventEnumerationImpl.onStartSearch);
        findCustomerSmEvent.setPayload(customerSearchCO.getCustomerNumber());
        findCustomerSmEvent.setSource(stateMachine);

        // Look up the slave state machine, reset it and dispatch the event
        AbstractStateMachine findCustomerStateMachine = (AbstractStateMachine) stateMachine
                .findObjects(StateMachineEnumerationImpl.FindCustomerSM
                        .getStateMachineName());
        findCustomerStateMachine.resetStateMachine();
        findCustomerStateMachine.dispatch(findCustomerSmEvent);

        // Update the control object so the GUI reflects the running search
        CustomerSearchSMControlObjectAccessor
                .processCustomerSearchRunningAction(
                        (CustomerSearchSMControlObject) customerSearchCO,
                        customerSearchStartEventPayload.getCustomerNumber());
    }
}
    

    snippet 15

When the State Machine finds this Action, it calls the processAction method with two parameters: the Event that triggered the Action (the Event object populated from the GUI Layer) and the State Machine that the Action runs for.

By the way, since we are doing Model Driven Software Development, we have another project, which I will explain after this one, called 'swf_statemachine_techdemo_domain'; it models the GUI Layer objects which organize the communication between the State Machine and the GUI Layer.

The payload of our event object, 'CustomerSearchStartEventPayload', is modelled in that project.

Now we are coming to another important subject, the concept of Master and Slave State Machines.

    Master/Slave State Machines


One problem I previously mentioned is State Explosion (which you can see in this Appendix), which occurs when our use cases get more and more complex. The number of states inside the State Machine increases, the possibility of these States interacting with each other increases too, and after a certain number of States the situation becomes uncontrollable; this is called State Explosion.

Theoretically it is possible to define all States in one single State Machine, but in practical programming it is better not to.

We should divide our use cases and use smaller State Machines to represent them.

For our case here, it manifests itself as follows.

To realise a customer search, we will have one State Machine modelling the GUI Layer and one State Machine that models and controls the customer search functionality provided by an external system.

In this case, the State Machine controlling the GUI layer is the Master State Machine and the State Machine controlling the customer search functionality is the slave.

The FindCustomerSM State Machine is responsible for encapsulating the functionality that belongs to searching a customer.
    picture_10
    picture 12

This State Machine is small enough to be self contained; its only responsibility is to control the partner system which executes the findCustomer process. Any other State Machine that wants to interact with this partner system should only communicate with this State Machine and doesn't have to know anything about the partner system.

AbstractStateMachine findCustomerStateMachine = (AbstractStateMachine) stateMachine
        .findObjects(StateMachineEnumerationImpl.FindCustomerSM
                .getStateMachineName());
findCustomerStateMachine.resetStateMachine();
findCustomerStateMachine.dispatch(findCustomerSmEvent);
    

    snippet 16

From the above code snippet, we see that the Master State Machine gets the Slave and sends it the event to initialize the customer search.

In case the customer search is successful, the CUSTOMER_FOUND event is triggered and the Action in the following snippet is executed.

    public class FindCustomerCustomerFoundAction implements Action {
    	public void processAction(Event event, AbstractStateMachine stateMachine) {
    		Event customerFoundEvent = new Event();
    		customerFoundEvent
    				.setEventType(CustomerSearchSM_EventEnumerationImpl.onCustomerFound);
    		customerFoundEvent.setPayload(event.getPayload());
    
    		((FindCustomerSMControlObject) stateMachine.getControlObject())
    				.getMasterStateMachine().handleEvent(customerFoundEvent);
    	}
    }
    

    snippet 17

The Slave State Machine knows by configuration (via the Spring Application Context) that it is a Slave State Machine and that it should report its End States to the Master State Machine.

In this case, the Action processed with CUSTOMER_FOUND sends an Event to the Master State Machine, which changes the Master State Machine from the CUSTOMER_SEARCH_RUNNING State to the CUSTOMER_FOUND State.

Now this is the ideal case: we searched for a customer and found one. What should happen if we get a timeout? How should the system behave?

Now this is a fine point; there is a really useful tip here that you can use against State Explosion.

We can naturally define as many States as necessary in the Customer Search State Machine, which means we can add Error States.

But then lots of the existing States would have to be connected to these new States via transitions, and depending on the nature of the error there must be more than one extra State (Timeout Error, Connection Error, etc.), which makes things worse. This would cause a State Explosion. In the previous case we identified that State Explosion can be caused by the communication with Partner Systems, and we solved that problem with Master/Slave State Machines, but we can't apply that solution here, because errors are an inherent part of the Customer Search use case.

There is another concept, called Nested State Machines, based on the 'Ultimate Hook' pattern: if an Event is not handled in the most specialized State Machine, it is propagated back up the Nested State Machine chain.

We can explain this with a very well known example: everybody knows how the Windows Operating System's event mechanism works, for example for a click event.

In general, the Windows OS defines a general click event; if any application code provided by the developer is interested in this event, it registers for the event in the application code.

If there is no application code handling the event, it is propagated back to the Windows OS so it can apply its default behaviour. The same pattern is valid for Nested State Machines.

Our GUI has standard error messages, one for Timeouts, one for Connection problems. Instead of handling and representing these States in the Customer Search State Machine, if we have an Error State Machine which nests the Customer Search State Machine, then every time the Customer Search State Machine receives an event it cannot handle itself, the event is propagated to the Error State Machine to be handled there.

This prevents the State Explosion: instead of every State in the nested State Machine having transitions to these error states, only the Host State Machine builds these transitions; when the nested State Machine can't handle an event and finds no Transition, it delegates to the Host State Machine, which handles it.
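In code terms, the 'Ultimate Hook' flow boils down to the following sketch (handleOwnTransitions is a hypothetical name for the transition search you saw in snippet 3):

// Host state machine: first offer the event to the nested machine; only if
// it has no transition for the event does the host try its own transitions
// (for example, to a SERVICE_NOT_AVAILABLE error state).
boolean handled = nestedStateMachine.dispatch(event);
if (!handled) {
    handled = handleOwnTransitions(event);
}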

    picture_11
    picture 13

    You can see above in a State Machine Diagram what we mean. TechDemoSM is our top State Machine containing all of our use cases. You can see in the diagram that we have a CUSTOMERSEARCH_PROCESSING State containing the Nested State Machine, the Customer Search State Machine.

    When we are processing a Customer Search use case, we are in the CUSTOMERSEARCH_PROCESSING State, and all the Events that TechDemoSM receives are delegated to this State. Now, this State doesn't have a Transition for such an Event, but it has a Nested State Machine, CustomerSearchSM; the next thing to do is ask this State Machine whether it has a Transition for the Event or not.

    If the Customer Search State Machine has a Transition for the Event, it will process it. If it doesn't have a Transition for the Event, it will re-delegate to the upper State Machine, in this case TechDemoSM.

    picture_4
    picture 14

    Now let's assume CustomerSearchSM is in the CUSTOMERSEARCH_RUNNING State and we receive the CUSTOMER_FOUND Event. The Event is first delivered to TechDemoSM; since we are in the CUSTOMERSEARCH_PROCESSING State in TechDemoSM, the Event is transferred to CustomerSearchSM, which knows how to handle the CUSTOMER_FOUND Event.

    Think about the following scenario: we are again in the CUSTOMERSEARCH_RUNNING State, but we receive a SERVICE_NOT_AVAILABLE Event. Now, we could naturally model a State in CustomerSearchSM to handle this event, but most probably other States will need a Transition to this State as well; ORDER_LOADING, for example, will also need this Transition when it experiences an error case.

    This is where the cluttering of the State Machine and the State Explosion starts. The solution our Nested State Machine provides: the moment it discovers it can't handle this Event, it propagates it back to TechDemoSM, which luckily has a Transition to the SERVICE_NOT_AVAILABLE State to handle it.

    I guess with this example you can see the power of Nested State Machines.

    Test Rigs:

    We have stated all along that one of the biggest advantages of this methodology is the ability to test the whole business case in a Test Rig.

    Our completed Techdemo application GUI looks like the following per State (compare with the State Machine Diagram).


    WAITING_FOR_CUSTOMERSEARCH_START
    picture_12
    picture 15

    If you enter '123456789' as the search parameter, you can proceed with the customer search.

    CUSTOMERSEARCH_RUNNING
    picture_13
    picture 16

    CUSTOMER_FOUND
    picture_14
    picture 17

    CUSTOMER_AUTHENTICATED
    picture_15
    picture 18

    ORDERS_LOADING
    picture_16
    picture 19

    CUSTOMER_JOINED
    picture_17
    picture 20

    These views are reached when the following URL is called:

    http://localhost:8080/swf_statemachine_techdemo-1.0-SNAPSHOT/spring/customersearch

    The application is sensitive to only one customer number, '123456789' 🙂 so if you search for this customer number you will be able to reach the further States.
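
    Behind the scenes this implies a stubbed partner-system call along these lines. This is a hypothetical sketch; the real manager lives in the domain project and its method and setter names may differ.

    public class StubCustomerManager {
    	// Only the magic customer number yields a result in the demo.
    	public Customer findCustomer(String customerNumber) {
    		if ("123456789".equals(customerNumber)) {
    			Customer customer = new Customer();
    			customer.setCustomerNumber(customerNumber); // setter name is an assumption
    			return customer; // leads to the CUSTOMER_FOUND event
    		}
    		return null; // the search never completes for other numbers
    	}
    }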

    As you can see in the pictures, every step has well-defined view states: which GUI elements are visible and which are not. With this information we can write our Unit Tests.

    The following is a unit test driving the State Machine from WAITING_FOR_CUSTOMERSEARCH_START to CUSTOMER_FOUND.
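
    A minimal sketch of what such a test could look like, assuming the generated Spring context files shown later in this post; the accessors getCurrentState()/getStateName() are assumptions about the generated API, and the real test setup is discussed in the Appendix.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;
    import org.springframework.context.support.ClassPathXmlApplicationContext;

    public class CustomerSearchSMTest {

    	@Test
    	public void customerSearchRunsFromWaitingToCustomerFound() {
    		// Load only the generated State Machine definitions, no GUI layer needed.
    		// Depending on the guards/actions, further context files (e.g. managers) may be needed.
    		ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext(
    				"META-INF/customersearch/applicationContext-statemachine-customersearch.xml",
    				"META-INF/customersearch/applicationContext-statemachine-customersearch-guards.xml",
    				"META-INF/customersearch/applicationContext-statemachine-customersearch-controlobjects.xml");
    		StateMachine stateMachine = (StateMachine) context.getBean("CustomerSearchSM");
    		stateMachine.resetStateMachine(); // starts in WAITING_FOR_CUSTOMERSEARCH_START

    		// Trigger the search exactly like the backing bean does (see snippet 34).
    		Event startSearch = new Event();
    		startSearch.setEventType(CustomerSearchSM_EventEnumerationImpl.onStartSearch);
    		CustomerSearchStartEventPayload payload = new CustomerSearchStartEventPayload();
    		payload.setCustomerNumber("123456789");
    		startSearch.setPayload(payload);
    		stateMachine.handleEvent(startSearch);

    		// Simulate the Slave State Machine reporting success (see snippet 17).
    		Event customerFound = new Event();
    		customerFound.setEventType(CustomerSearchSM_EventEnumerationImpl.onCustomerFound);
    		stateMachine.handleEvent(customerFound);

    		// getCurrentState()/getStateName() are assumed accessors of the generated API.
    		assertEquals("CUSTOMER_FOUND", stateMachine.getCurrentState().getStateName().toString());
    		context.close();
    	}
    }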

    If you examine the JSF pages for the GUI elements displayed above, you will see that their render methods consist of checking the values of the State Machine Control Objects.

    For ex. from customerSearchInput.xhtml

    <p:outputPanel id="customerSearchInputLayout" ajaxRendered="true"
    			layout="block" style="border:1px solid" rendered="#{customerSearchInputBB.customerSearchInputLayoutRendered}">
    

    snippet 18

    or customerSearchRunning.xhtml

    <p:outputPanel id="customerSearchRunning-panel_layout" ajaxRendered="true"
    			layout="block" style="border:1px solid" rendered="#{customerSearchRunningBB.customerSearchRunningPanelRendered}">
    

    snippet 19

    and customerSearchFound.xhtml

    <p:outputPanel id="customerSearchFound-panel" ajaxRendered="true"
    			layout="block" style="border:1px solid" rendered="#{customerSearchFoundBB.customerSearchFoundPanelRendered}">
    

    snippet 20

    The backing bean methods do nothing other than relay the values defined in the Control Objects of the State Machines.

    public boolean isCustomerSearchInputLayoutRendered() {
    	CustomerSearchInputCO customerSearchInputCO = (CustomerSearchInputCO) stateMachine
    			.getControlObject();
    	return customerSearchInputCO.getRenderCustomerSearchInput();
    }
    

    snippet 21

    A detailed explanation of how exactly the Unit Tests should look can be found in the Appendix.

    swf_statemachine_techdemo_domain:
    This project contains the artifacts necessary for the communication between the objects in our GUI Layer (swf_statemachine_techdemo) and the State Machine level (swf_statemachine_sm_model_impl), like the Customer and Order objects, which our GUI Layer needs to display the information and the State Machine Layer needs to populate it.

    Since we are practicing Model Driven Architecture (MDA) here, it also contains the objects used in the GUI Layer (which have nothing to do with the State Machine Layer and are modelled here instead of in the Tech Demo project), like the Managers (CustomerManager, OrderManager), event payloads (CustomerSearchStateEventPayload) and asynchronous Event Listeners for the Comet functionality (the way the GUI Layer/browser receives events from the State Machines informing it that there is a State change and the information is ready).

    The class diagrams look like the following.

    For Domain Object’s
    picture_18
    picture 21

    For Manager’s
    picture_19
    picture 22

    For Event Payload’s
    picture_20
    picture 23

    The necessary Java code is created via the Fornax M2T Java Templates for these UML objects.

    swf_statemachine_techdemo:
    This is the GUI layer of our project; it is a JSF project with Primefaces as the JSF implementation.

    The most interesting part of this project is how Spring Web Flow helps integrate our State Machine with the JSF lifecycle and events.

    I think it is best to start analyzing the structure of the application from web.xml, which is a statement of the technologies we use in the project.

    First, naturally, we have to initialize our Spring Application Context for Spring MVC, Spring Web Flow and our State Machines.

    <servlet>
    	<servlet-name>spring_mvc</servlet-name>
    	<servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    	<init-param>
    		<param-name>contextConfigLocation</param-name>
    		<param-value>/WEB-INF/config/webmvc-config.xml,
    		/WEB-INF/config/webflow-config.xml,
    		/WEB-INF/config/applicationContext-customersearch.xml,
    		classpath:/META-INF/customersearch/applicationContext-statemachine-customersearch.xml,
    		classpath:/META-INF/customersearch/applicationContext-statemachine-customersearch-guards.xml,
    		classpath:/META-INF/customersearch/applicationContext-statemachine-customersearch-controlobjects.xml,
    		classpath:/META-INF/findcustomer/applicationContext-statemachine-findcustomer.xml,
    		classpath:/META-INF/findcustomer/applicationContext-statemachine-findcustomer-guards.xml,
    		classpath:/META-INF/findcustomer/applicationContext-statemachine-findcustomer-controlobjects.xml,
    		classpath:/META-INF/findorders/applicationContext-statemachine-findorders.xml,
    		classpath:/META-INF/findorders/applicationContext-statemachine-findorders-guards.xml,
    		classpath:/META-INF/findorders/applicationContext-statemachine-findorders-controlobjects.xml,
    		/WEB-INF/config/applicationContext-manager.xml</param-value>
    	</init-param>
    	<load-on-startup>1</load-on-startup>
    </servlet>
    

    snippet 22

    A second Servlet, besides the ordinary ones like the FacesServlet and the Primefaces ResourceServlet, is the following:

            <servlet>
    		<servlet-name>Push Servlet</servlet-name>
    		<servlet-class>org.primefaces.push.PushServlet</servlet-class>
    	</servlet>
    

    snippet 23

    The PushServlet is responsible for establishing the asynchronous communication between the client browser and our application server, based on Comet technologies.

    As you see, the configuration in web.xml is quite simple.

    Next we have to look at the setup of Spring Web Flow, starting with webmvc-config.xml.

    <bean class="org.springframework.webflow.mvc.servlet.FlowHandlerMapping">
    	<property name="flowRegistry" ref="flowRegistry" />
    	<property name="defaultHandler">
    		<!-- If no flow match, map path to a view to render; e.g. the "/intro"  path would map to the view named "intro" -->
    		<bean class="org.springframework.web.servlet.mvc.UrlFilenameViewController" />
    	</property>
    </bean>
    

    snippet 23a

    Since we have to redirect incoming requests to Spring Web Flow, we configure the above handler mapping in the Spring MVC configuration.

    We set the flowRegistry (which keeps account of which flows exist for us) of Spring Web Flow; the flows themselves are configured in webflow-config.xml.

    Then we configure where our JSF views/files are placed.

    <bean id="faceletsViewResolver"
    	class="org.springframework.web.servlet.view.UrlBasedViewResolver">
    	<property name="viewClass" value="org.springframework.faces.mvc.JsfView" />
    	<property name="prefix" value="/WEB-INF/" />
    	<property name="suffix" value=".xhtml" />
    </bean>
    

    snippet 24

    And we have to configure how JSF navigation events are transferred to Spring Web Flow.

    <bean class="org.springframework.faces.webflow.JsfFlowHandlerAdapter">
                <property name="flowExecutor" ref="flowExecutor" />
    </bean>
    

    snippet 25

    And let's look at how the Web Flow configuration works out (webflow-config.xml).

    <flow:flow-executor id="flowExecutor">
    	<flow:flow-execution-listeners>
    		<flow:listener ref="facesContextListener" />
    	</flow:flow-execution-listeners>
    </flow:flow-executor>
    

    snippet 26

    This configuration ensures that Spring Web Flow receives the JSF lifecycle events and is able to execute Spring Web Flow flows.

    Our Flow Registry tells Spring Web Flow where to look to initialize our Spring Web Flows.

    <flow:flow-registry id="flowRegistry"
    	flow-builder-services="facesFlowBuilderServices">
    	<flow:flow-location path="/WEB-INF/flows/customersearch/customersearch.xml" />
    </flow:flow-registry>
    

    snippet 27

    In this case we are loading our Spring Web Flow configuration from the WEB-INF/flows directory.

    Configuration of the Spring Web Flow and JSF integration.

    <faces:flow-builder-services id="facesFlowBuilderServices" />
    

    snippet 28

    Now let's look at our Flow configuration.

    <?xml version="1.0" encoding="UTF-8"?>
    <flow xmlns="http://www.springframework.org/schema/webflow"
    	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    	xsi:schemaLocation="http://www.springframework.org/schema/webflow
    		http://www.springframework.org/schema/webflow/spring-webflow-2.0.xsd">

    	<on-start>
    		<evaluate expression="CustomerSearchSM.resetStateMachine()" />
    	</on-start>
    	<view-state id="customerSearch">
    		<transition on="onStartCustomerSearch" history="invalidate">
    			<evaluate expression="customerSearchInputBB.searchCustomer()" />
    		</transition>
    		<transition on="onCustomerAuthenticated" history="invalidate">
    			<evaluate expression="customerSearchAuthenticationBB.customerGivesAuthentication()" />
    		</transition>
    		<transition on="onCustomerJoined" history="invalidate">
    			<evaluate expression="customerSearchJoinBB.customerJoined()" />
    		</transition>
    	</view-state>
    </flow>
    

    snippet 29

    You can see that the Web Flow configuration is quite small. At this point you might wonder why I said Spring Web Flow has such a big role in our project.

    If you look at this configuration file, you will see that Spring Web Flow handles two important aspects for our project.

    First, it initializes the State Machine every time our business case starts from the beginning, under the control of the 'on-start' tag (this relates to the concept called 'conversations' in Spring Web Flow: inside your web session, every business case is started inside a separate memory compartment, so if you run several instances of your business case in the same web session, they will work without their state variables corrupting each other). This can be quite a big headache if we have to figure it out ourselves with JSF.

    <on-start>
    <evaluate expression="CustomerSearchSM.resetStateMachine()"/>
    </on-start>
    

    snippet 30

    Secondly, it transfers user events from the GUI (the user clicking buttons and so on) to our business logic/State Machine. Instead of fighting with JSF to receive these events, we receive them over Spring Web Flow.

    Another major advantage: in our JSF pages, only the names of the Events defined in the State Machine UML diagrams are mentioned, for example 'onStartCustomerSearch', 'onCustomerAuthenticated', 'onCustomerJoined'. The JSF code doesn't reference any backing bean or any State Machine. This means that by changing the Spring Web Flow definition file, you can completely change the implementation of your business logic without changing a single line in your JSF files. It brings quite an impressive level of re-usability.

    If you look at 'customerSearchInput.xhtml', you see the following JSF commandButton element there.

    <p:commandButton id="customerSearchStart" value="Search"
    action="onStartCustomerSearch" ajax="true" update="customerSearch-form,customerSearchRunning-panel_empty_layout,handlePuplishRemoteCommand">
    </p:commandButton>
    

    snippet 31

    As you see, there is no direct reference to any Backing Bean or Spring Bean. Spring Web Flow intercepts the JSF event, interprets it with the help of the Web Flow definitions, and initiates the correct Event for our State Machine.

    Spring Web Flow sees from the request which JSF page is requested (in this case customerSearch.xhtml) and maps it to the following Spring Web Flow configuration.

    <view-state id="customerSearch">
    

    snippet 32

    and transitions defined for this view state.

    <transition on="onStartCustomerSearch" history="invalidate">
    <evaluate expression="customerSearchInputBB.searchCustomer()"/>
    </transition>
    

    snippet 33

    In this snippet, it is configured to call the searchCustomer method on the customerSearchInputBB Backing Bean, but it can easily be configured to call another method on another Backing Bean. If we change our Web Flow file, we can make the GUI reference a completely different State Machine/business logic implementation without changing a single line of code in the .xhtml files.

    This method will send the ‘onStartSearch’ event to the State Machine.

    The method looks like the following.

    public void searchCustomer() {
    	log.info("We are searching customer!");
    	Event event = new Event();
    	event.setEventType(CustomerSearchSM_EventEnumerationImpl.onStartSearch);

    	CustomerSearchStartEventPayload customerSearchStartEventPayload = new CustomerSearchStartEventPayload();
    	customerSearchStartEventPayload.setCustomerNumber(customerNumber);

    	event.setPayload(customerSearchStartEventPayload);

    	stateMachine.handleEvent(event);
    }
    

    snippet 34

    The application GUI is built with small .xhtml fragments, put together in customerSearch.xhtml with facelet include statements for re-usability purposes. If you value re-usability (like using the order table in several other places in your application), you can include a fragment in other pages/use cases. Re-usability is also guaranteed by accessing the Control Objects of the State Machine only through interfaces, so the implementation can be changed quite easily. The fragment looks like this for 'customerSearchOrder.xhtml'.

    <?xml version="1.0" encoding="ISO-8859-1"?>
    <ui:composition xmlns="http://www.w3.org/1999/xhtml"
        xmlns:ui="http://java.sun.com/jsf/facelets"
    	xmlns:h="http://java.sun.com/jsf/html"
    	xmlns:f="http://java.sun.com/jsf/core"
    	xmlns:p="http://primefaces.prime.com.tr/ui">
    
    	<p:outputPanel id="customerSearchOrderLoading-empty-panel">
    		<p:outputPanel id="customerSearchOrderLoading-panel" ajaxRendered="true"
    			layout="block" style="border:1px solid" rendered="#{customerSearchOrderBB.customerSearchOrderLoadingPanelRendered}">
    			<p>Please wait for the orders to load!</p>
    		</p:outputPanel>
    	</p:outputPanel>			
    
    	<p:outputPanel id="customerSearchOrder-empty-panel">
    		<p:outputPanel id="customerSearchOrder-panel" ajaxRendered="true"
    			layout="block" style="border:1px solid" rendered="#{customerSearchOrderBB.customerSearchOrderPanelRendered}">
    			<h:form id="ordersForm">
    			<p:dataTable id="ordersTable" var="order" value="#{customerSearchOrderBB.orders}">
    				<p:column>
    					<f:facet name="header">
    						Order Id:
    					</f:facet>
    					<h:outputText value="#{order.id}"/>
    				</p:column>
    				<p:column>
    					<f:facet name="header">
    						Order Date:
    					</f:facet>
    					<h:outputText value="#{order.date}"/>
    				</p:column>
    				<p:column>
    					<f:facet name="header">
    						Order Description:
    					</f:facet>
    					<h:outputText value="#{order.description}"/>
    				</p:column>
    			</p:dataTable>
    			</h:form>
    		</p:outputPanel>
    	</p:outputPanel>
    </ui:composition>
    

    snippet 35

    The critical point in the code is the following.

    <p:dataTable id="ordersTable" var="order" value="#{customerSearchOrderBB.orders}">
    

    snippet 36

    This looks like the following in the Backing Bean.

    public class CustomerSearchOrderBB {
    	private StateMachine stateMachine;
    
    	@SuppressWarnings("unchecked")
    	public List<Order> getOrders() {
    		return ((CustomerSearchOrderCO) stateMachine.getControlObject())
    				.getCustomerOrders();
    	}
    
    	public boolean isCustomerSearchOrderPanelRendered() {
    		return ((CustomerSearchOrderCO) stateMachine.getControlObject())
    				.getRenderCustomerOrders();
    	}
    
    	public boolean isCustomerSearchOrderLoadingPanelRendered() {
    		return ((CustomerSearchOrderCO) stateMachine.getControlObject())
    				.getRenderCustomerOrderLoading();
    	}
    
    	public void setStateMachine(StateMachine stateMachine) {
    		this.stateMachine = stateMachine;
    	}
    }
    

    snippet 37

    We only access the State Machine Control Objects through interfaces, so that we can always change the implementation but use the same .xhtml fragment (let's say you want to show the Orders of the last 3 months and re-use this .xhtml fragment; that is totally possible).
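
    The interface in question, reconstructed from the accessors used in snippet 37 (a sketch; the generated source may differ in detail), is simply:

    import java.util.List;

    public interface CustomerSearchOrderCO {
    	// Data the .xhtml fragment displays.
    	List<Order> getCustomerOrders();

    	// Visibility flags the render attributes bind to.
    	boolean getRenderCustomerOrders();

    	boolean getRenderCustomerOrderLoading();
    }

    Any Control Object implementation satisfying this contract can back the same fragment, which is what makes the re-use possible.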

    With all fragments included into 'customerSearch.xhtml', it looks like the following.

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml"
    	xmlns:ui="http://java.sun.com/jsf/facelets"
    	xmlns:h="http://java.sun.com/jsf/html"
    	xmlns:f="http://java.sun.com/jsf/core"
    	xmlns:fn="http://java.sun.com/jsp/jstl/functions"
    	xmlns:p="http://primefaces.org/ui">
    	
    	<f:view contentType="text/html" encoding="UTF-8">
    		<h:head>
    			<title>Customer Search - Spring WebFlow - Primefaces - State Machine Demo</title>
    			<meta content="text/html; charset=utf-8" http-equiv="Content-Type" />
    		</h:head>
    		<body>
    			<p:outputPanel ajaxRendered="true" id="handlePuplishRemoteCommand">				
    				<script id="sc1" type="text/javascript" language="Javascript">			
    					function handlePublishInternal() {
    						PrimeFaces.ajax.AjaxRequest('/swf_statemachine_techdemo-1.0-SNAPSHOT/spring/customersearch?jsessionId=#{externalContext.nativeRequest.session.id}&amp;execution=#{flowExecutionContext.key}',{formId:'customerSearchCustomerDetail_form',async:true,global:true,source:'customerSearchCustomerDetail_form:customerSearchViewCustomerDetail',process:'@all',update:'#{customerSearchBB.customerSearchJavaScriptRenderPanels}'});return false;												
    					};
    				</script>
    			</p:outputPanel>
    			<ui:include src="customerSearchInput.xhtml" />
    			<ui:include src="customerSearchRunning.xhtml" />
    			<ui:include src="customerSearchFound.xhtml" />
    			<ui:include src="customerSearchAuthentication.xhtml" />
    			<ui:include src="customerSearchJoin.xhtml" />
    			<ui:include src="customerSearchOrder.xhtml" />
    			<h:form id="customerSearchCustomerDetail_form">
    				<p:socket id="cometPush" channel="/customer_search_result_#{externalContext.nativeRequest.session.id}_#{flowExecutionContext.key}" >
    					<p:ajax event="message" update="#{customerSearchBB.customerSearchJavaScriptRenderPanels}" />
    				</p:socket>							
    				<p:commandButton id="customerSearchViewCustomerDetail" value="Continue"
    		    				ajax="true" update=":customerSearchRunning-panel_empty_layout,:customerSearchFound-empty-panel,:customerSearchAuthentication-empty-panel">
    		    	</p:commandButton>
    		    </h:form>
    		</body>
    	</f:view>
    </html>
    

    snippet 38

    An interesting element here is the 'p:socket' tag, which provides asynchronous AJAX (Comet/Atmosphere) communication between the browser and the server. When an interesting Event occurs for the GUI and the GUI has to be refreshed, the server notifies the GUI over this component.

    In our case, we search for the customer on the Partner System; when the customer is found, the server notifies us. The p:socket tag implements exactly this: it listens to the channel defined in its channel attribute and reacts to messages arriving on that channel. When a message is received, the nested p:ajax listener triggers an AJAX event and renders the areas defined by our State Machine.

    Here, one word of warning: a p:socket channel acts like a topic, so several web clients can subscribe to it; it is web-application wide. So if you don't want all the clients of your web application to receive the message, you should configure the channels to be unique. Here I did this by adding the sessionId to the channel name, '#{externalContext.nativeRequest.session.id}', plus, since we are using Spring Web Flow, the flowId to support the multi-tab behaviour, '#{flowExecutionContext.key}'.
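
    On the server side, publishing to such a unique channel could look like the following sketch. It uses the PrimeFaces push API of that era; the class itself and the way sessionId/flowExecutionKey reach it are assumptions, only the channel naming scheme comes from the page above.

    import org.primefaces.push.PushContext;
    import org.primefaces.push.PushContextFactory;

    public class CustomerSearchNotifier {

    	public void notifyStateChange(String sessionId, String flowExecutionKey) {
    		PushContext pushContext = PushContextFactory.getDefault().getPushContext();
    		// Must match exactly the channel the p:socket tag subscribed to.
    		String channel = "/customer_search_result_" + sessionId + "_" + flowExecutionKey;
    		pushContext.push(channel, "stateChanged");
    	}
    }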

    You can find more details about Comet/Atmosphere in the following Appendix.

    swf_statemachine_xpand:
    In this project we have the files containing the M2T DSL that interprets the UML file and creates the specific output we want; in our case that is Java and Spring code.

    For the creation of the Java code we are using some prepared libraries from Fornax.

    Look at the template for the Java code (stateMachineEnumeration.xpt); it simply scans the model and creates Enumerations for all the State Machine, State and Event names.

    «DEFINE Root FOR uml::Model»
    	«FILE "org/salgar/swf_statemachine/enumeration/StateMachineEnumerationImpl.java"»
    		package org.salgar.swf_statemachine.enumeration;
    
    		import org.salgar.statemachine.domain.StateMachineEnumeration;
    
    		public enum StateMachineEnumerationImpl implements
    		StateMachineEnumeration {
    		«FOREACH allOwnedElements().typeSelect(uml::StateMachine) AS stateMachine SEPARATOR ','»
    			«stateMachine.name»("«stateMachine.name»")
    			«EXPAND StateEnumeration(stateMachine)»
    			«EXPAND EventEnumeration(stateMachine)»
    		«ENDFOREACH»;
    
    		private String name;
    
    		StateMachineEnumerationImpl(String name) {
    			this.name = name;
    		}
    
    		public String getStateMachineName() {
    			return this.name;
    		}
    
    		@Override
    		public String toString() {
    			return this.name;
    		}
    	 }
        «ENDFILE»
    «ENDDEFINE»
    

    snippet 39

    If we analyze the M2T template,

    «DEFINE Root FOR uml::Model»
    defines the entry template, which is executed for the uml::Model.

    «FOREACH allOwnedElements().typeSelect(uml::StateMachine) AS stateMachine SEPARATOR ','»
    says: for all elements in the model with the type 'uml::StateMachine', run a loop and apply the templates
    «EXPAND StateEnumeration(stateMachine)» and
    «EXPAND EventEnumeration(stateMachine)»
    which you can find in the same file.

    To give a feeling for them, I will quickly go over one of them.
    «DEFINE StateEnumeration(uml::StateMachine sm) FOR uml::Model»
    defines that this template processes a uml::StateMachine within our uml::Model.
    «FOREACH sm.allOwnedElements().typeSelect(uml::State) AS state SEPARATOR ','»
    Here we search the whole uml::StateMachine to find all of its States.
    «IF state.submachine != null»
    ,«EXPAND SubStateMachineStateEnumeration(sm, state.submachine)»
    «ENDIF»
    

    And here, if we find a Sub State Machine for the State, we expand that as well.

    This is the basic idea of Xpand/M2T templates: find specific elements in the UML models, loop over them and expand them.

    We can now look at what happens in the Xpand template that creates the Spring files (which is quite complex compared to the templates for the Enumerations).

    «DEFINE Spring FOR uml::Model»
    	«EXPAND Root(this) FOREACH (List[uml::Package])ownedElement»
    	«EXPAND SteorotypeGuardsActions(this) FOREACH     (List[uml::Package])ownedElement»
    	«EXPAND ControlObjects(this) FOREACH (List[uml::Package])ownedElement»
    «ENDDEFINE»
    

    snippet 40

    The above template searches for the uml::Packages we have in the uml::Model (since we organized our State Machines in uml::Packages), and then it creates the Spring XML configuration files.

    «DEFINE Root(uml::Model model) FOR uml::Package»
    	«IF ownedType.typeSelect(uml::StateMachine).isEmpty==false»
    		«FILE "META-INF/" + name + "/applicationContext-statemachine-"+name+".xml"»
    		<?xml version="1.0" encoding="UTF-8"?>
    		<beans xmlns="http://www.springframework.org/schema/beans"
    			xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd ">
    			«EXPAND StateMachines(model) FOREACH ownedType.typeSelect(uml::StateMachine)»
    			<bean name="defaultGuard"  class="org.salgar.swf_statemachine.impl.guard.DefaultGuardImpl" />
    			<bean name="defaultAction" class="org.salgar.swf_statemachine.impl.action.DefaultAction" />
    		</beans>
    		«EXPAND Root(model) FOREACH nestedPackage»
    		«ENDFILE»
    	«ELSE»
    		«EXPAND Root(model) FOREACH nestedPackage»
    	«ENDIF»
    «ENDDEFINE»
    

    snippet 41

    This part of the template searches inside every uml::Package for instances of uml::StateMachine so it can further process them; if a package does not contain a uml::StateMachine, it searches the nestedPackages.

    If it finds a uml::StateMachine, it will execute the following template:

    «DEFINE StateMachines(uml::Model model) FOR uml::StateMachine»
    	«IF this.getAppliedStereotype("swf_statemachine::SwfStateMachine")!= null»
    		«IF !this.getValue(this.getAppliedStereotype("swf_statemachine::SwfStateMachine"),"submachine")»
    			<!-- «name.toFirstUpper()» State Machine -->
    			<bean id="«name»" class="org.salgar.swf_statemachine.impl.StateMachineImpl" lazy-init="true"
    			«IF this.getAppliedStereotype("swf_statemachine::SwfStateMachine")!=null»
    				scope="«this.getValue(this.getAppliedStereotype("swf_statemachine::SwfStateMachine"),"Scope").name»"
    			«ENDIF»
    			>
    			«LET name AS localName»
    				«EXPAND StateMachinesCommon(model, this, localName)»
    			«ENDLET»
    		«ENDIF»
    
    	«ENDIF»
    «ENDDEFINE»
    

    snippet 42

    The above template creates the necessary XML configuration for the State Machine and its further elements (States, Transitions, Actions, Guards, etc.). One interesting thing is how we interpret the Stereotypes and their properties. Stereotypes, as you may remember, are the elements we use for information that we can't represent with standard UML elements (you can find more information about UML Stereotypes in the Appendix); in this case our custom Actions, Guards and markers for Sub State Machines (Nested State Machines).

    «IF this.getAppliedStereotype("swf_statemachine::SwfStateMachine")!= null» «IF !this.getValue(this.getAppliedStereotype("swf_statemachine::SwfStateMachine"),"submachine")»
    

    snippet 43

    I will not analyze these Xpand templates further here, because it would clutter the discussion, and I think the rest is self-explanatory enough. If there are further questions, I can expand this part in an Appendix.

    swf_statemachine_fornax_extensions:
    Fornax is a project providing out-of-the-box M2T templates for Java code creation (if you are interested in Model Driven Software Development, you can also get nice Xpand templates for Spring and Hibernate).

    We are using the Fornax Java template for creating the Java code for the Domain objects; in some places we had to modify the code creation, for example:

    Xpand uses the concept of Aspects, as in Java programming: you can pick an Xpand template method, create an Aspect, and modify its behaviour.

    «AROUND org::fornax::cartridges::uml2::javabasic::m2t::Association::attribute FOR uml::Classifier»
    	«FOREACH AllAssociations().typeSelect(uml::Association) AS ass»
    		«LET ass.visibility AS assocationVisiblity»
    			«FOREACH ass.ownedEnd.select(e|e.type.name!=name) AS a»
    				«IF getVisiblityName(assocationVisiblity) != 'public'»
    					«EXPAND field(assocationVisiblity) FOR a»
    				«ELSE»
    					«EXPAND org::fornax::cartridges::uml2::javabasic::m2t::Association::field FOR a»
    				«ENDIF»
    			«ENDFOREACH»
    		«ENDLET»
    	«ENDFOREACH»
    «ENDAROUND»
    

    snippet 44

    The Around Advice in javaBasicAssociationAdvices.xpt modifies the behaviour of the org::fornax::cartridges::uml2::javabasic::m2t::Association::attribute method in the Fornax templates and implements our custom code creation.

    In this case, Fornax did not generate the code for Association types (Collections) other than for the 'public' classifier, so we had to override the Fornax behaviour with our custom one.

    Another really powerful mechanism: we can define custom utility functions in Java code to make the templating much easier.

    We defined the following utility method in an .ext file (the M2T file type for extensions).

    String getVisiblityName(uml::VisibilityKind visibilityKind) :
    	JAVA org.salgar.swf_statemachine.extensions.SwfStatemachineExtensions.getVisiblityName(org.eclipse.uml2.uml.VisibilityKind);
    

    snippet 45

    This references a Java method defined in the class SwfStatemachineExtensions.

    public class SwfStatemachineExtensions {
    	public static String getVisiblityName(VisibilityKind visibilityKind) {
    		return visibilityKind.getLiteral();
    	}
    }
    

    snippet 46

    The only thing you have to pay attention to with this mechanism: the referenced method must be a static method.

    Conclusion:

    I hope I reached my goal of showing that there is a practical way to use UML and Model Driven Software Development for web GUI projects, something many would not have found practicable or reasonable before this blog.

    The challenges of web application programming are quite different than the ones we had 5 years ago. The technologies of those eras forced us to build simple monolithic applications.

    Today's Ajax world loads more and more responsibility onto web applications. Combined with the connected nature of today's businesses, thanks to SOA and Cloud technologies, methods that were seen as obsolete, like Spring Web Flow, State Machines, UML and Model Driven Software Development, can be quite relevant and helpful to conquer the challenges of Web 2.0, the Ajaxified Web.

    Appendix:

    -State Machine
    State Machines, better said Finite State Machines, are containers that save the status of something at a given point in time. An FSM has a countable number of States; that is where the word 'Finite' comes from. In a given FSM, all States for all use cases are known.

    An FSM can be in only one State at a time; the State it is in at any given moment is called the current State. It can change from one State to another when initiated by a triggering event and a condition; this is called a Transition. A particular FSM is defined by the list of its States and the triggering condition for each Transition.

    A State holds the persistent information of the object it is dedicated to. For example, take a water cooker: if the water is cooler than a defined value, the water cooker is in the ON State; if the water is hotter than the predefined temperature, it is in the OFF State.

    A Transition is the passage from one State to another, in this case going from the ON State to the OFF State, triggered by an event; a good example would be the sensor reporting that the water is too hot or too cold.

    A Transition can have a guard condition when one State can lead to several other States. For example, if the water is very cold, the machine must go from OFF to the EXTRA POWER ON State to heat the water more quickly; if the water is not that cold, it can just go to the ON State. This decision mechanism is called a Guard, and it helps decide which Transition to take.

    When the State Machine gets a signal that it has to switch from one State to another and the Transition is decided, an Action is executed to realize the necessary operations. For example, if we are going to the ON State in our water cooker, maybe a servo has to push the magnet coil.
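
    To tie States, Events, Guards and Actions together, here is a minimal, self-contained sketch of the water cooker as an FSM. It is illustrative only; the temperature thresholds and names are made up.

    public class WaterCookerStateMachine {
    	enum State { OFF, ON, EXTRA_POWER_ON }

    	private State currentState = State.OFF;

    	// Event: a new temperature reading arrives from the sensor.
    	public void onTemperatureReading(double celsius) {
    		switch (currentState) {
    		case OFF:
    			// Guard conditions decide which of the possible Transitions to take.
    			if (celsius < 30.0) {
    				transitionTo(State.EXTRA_POWER_ON); // very cold -> heat quickly
    			} else if (celsius < 60.0) {
    				transitionTo(State.ON);
    			}
    			break;
    		case ON:
    		case EXTRA_POWER_ON:
    			if (celsius >= 60.0) {
    				transitionTo(State.OFF); // hot enough -> stop heating
    			}
    			break;
    		}
    	}

    	private void transitionTo(State next) {
    		currentState = next;
    		// Action: realize the necessary operations for the new State,
    		// e.g. the servo pushing the magnet coil.
    		System.out.println("Heater switched to " + next);
    	}
    }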

    -State Explosion
    If you read everything above, you might think it is easy to design and use a State Machine, but there are some traps along the way.

    As long as things stay small in scope, you will not have much of a problem. But when the number of use cases starts to increase, you will start experiencing the following problem: theoretically, every State in your State Machine can have a Transition to every other State, which makes the number of Transitions grow quadratically (n States can require up to n × (n − 1) Transitions).

    This point has really limited the popularity of State Machines in current software development methods. The standard way to prevent it is a 'divide and conquer' strategy: we should distribute our use cases so that we get smaller State Machine models.

    This combines with the previously explained Sub State Machine concept: when a State Machine receives an event, it first checks whether a Sub State Machine exists in its current State. If yes, it delegates the Event to this Sub State Machine, expecting it to handle it, and only reacts to the Event itself if the Sub State Machine could not handle it.

    This way, the Sub State Machine only has Transitions for the States under its control and does not know anything about the outside world, while the containing State Machine only has Transitions for its own States. This drastically reduces the number of Transitions in the system and prevents the State Explosion.

    If your State Diagrams start looking like the following, you are in real danger of running into the State Explosion problem.

    State Explosion
    StateExplosionStateMachine
    picture 24

    It is time to divide and conquer the problem.

    A better solution would be the following.

    picture_22
    picture 25

    picture_23
    picture 26

    You can see here what we mean by Nested State Machines. If we try to model everything in one single State Machine, we get the spaghetti shown in picture 24.

    If we divide the problem into two different State Machines, everything looks much better, doesn't it? This is also a good opportunity to explain the 'Ultimate Hook' pattern.

    With the Nested State Machine, the following happens: the State Machine you see in the picture gets an event, and in case it can't find a Transition for the Event it received, it checks whether the current State contains a Nested State Machine. In our case STATE_WORKING has a Nested State Machine, so the event is transferred there, and this State Machine handles the transition from STATE_A to STATE_B.

    Now assume an Event occurs in the State Explosion State Machine that signals an error condition, but STATE_B has no Transition navigating to an error State. When the Nested State Machine is not able to find any Transition, it delegates the event back to the containing State Machine, which luckily has a Transition to the ERROR_1 State to handle the event.

    – Multiple Browser Tabs and Spring Flows
    One point where Spring Web Flow supports our concept is the data scope 'flow'.

    The nightmare of Ajax web applications is the multi-tab behaviour of modern web browsers. For web applications reacting to multiple events, global data containers are a huge problem: if the user opens another tab in the browser and starts triggering additional events in the same session, global data containers will cause huge chaos.

    Think about it: you are logged in to your internet banking web site, you are doing operations in one tab, and you decide to open a new tab in the browser and trigger other operations that would change your account balance. How should the web application react: should it let you modify things simultaneously, or should it try to isolate them? I can tell you, for the sake of your bank account, it had better isolate them.

    For this purpose, Spring Web Flow presents the 'flow' scope. When you define your Web Flow, you define the entry points of your flow; when these entry points are passed, Spring Web Flow creates containers that hold only the values of the data containers marked with 'flow' scope in the Spring definition files.

    This practically means that if you open two tabs in your browser and trigger two entry events, these two tabs will each see only their own version of the data containers in the flow.

    Now, this is a really nice feature of Spring Web Flow, but it has one annoyance: if you have lots of data to hold in your application, you have to start tagging everything with this 'flow' scope, which can be quite tedious.

    Our approach here brings us one step further than what Spring Web Flow can offer us. Our State Machine is our data container representing the whole state of our application, so marking it as 'flow' scope is enough to make our whole application flow sensitive.

    The only thing we have to do is decorate our State Machine in the UML Model with a Stereotype that includes the flow property, as you can see in the pictures below.

    picture_24
    picture 27

    picture_25
    picture 28

    If you apply this option you will get the behaviour explained above.

    There is one implementation point that I have to mention here. Normally, if something is tagged as 'flow' scope, Spring Web Flow restores and persists the Flow with every Web Request (with the default Spring Web Flow configuration). Before sending the answer to the Web Request, it uses Java Serialisation to persist the state of the objects marked as flow scope in the Spring configuration; when a new request comes for the same flow, it restores the objects from this serialised form. Spring Web Flow does that to remember several historical snapshots of these objects.

    Unfortunately, this model does not fit us. Our State Machine continues to live and respond to events even after the Web Request from the client has been answered (we call a partner system during the web request, and the answer from the partner system arrives after the Web Request has completed). With this scheme, our serialized State Machine cannot receive such an event, and when it is restored as the next Web Request arrives, the restored State Machine knows nothing about it.

    Fortunately, there is a possibility in Spring Web Flow to configure this behaviour so that, instead of serializing our State Machine, it keeps it in memory.

    The setting looks like the following.

    <flow:flow-executor id="flowExecutor">
    	<flow:flow-execution-listeners>
    		<flow:listener ref="facesContextListener" />
    	</flow:flow-execution-listeners>
    	<flow:flow-execution-repository max-execution-snapshots="0" />
    </flow:flow-executor>
    

    snippet 47

    This goes into webflow-config.xml in the techdemo project.

    If the 'max-execution-snapshots' property is set to 0, Spring Web Flow no longer serializes the objects but keeps them in memory, so that our State Machine can still process events it receives after the web request has ended.

    The end effect looks like the following:

    picture_26
    picture 29

    In the above picture you see the application with flowId e1s1 in a certain state, and in the following picture with flowId e1s2.

    picture_27
    picture 30

    As you can see, browsing with 2 tabs has no negative effect on the functionality of the application.

    Primefaces and Spring
    Integration of Primefaces and Spring is surprisingly easy; we only have to make some configurations in web.xml.

    <web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee"
    	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    	xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd">
    	.......
    	<!-- /WEB-INF/config/applicationContext.xml -->
    	<servlet>
    		<servlet-name>spring_mvc</servlet-name>
    		<servlet-class>org.springframework.web.servlet.DispatcherServlet
    		</servlet-class>
    		<init-param>
    			<param-name>contextConfigLocation</param-name>
    			<param-value>/WEB-INF/config/webmvc-config.xml,
    				/WEB-INF/config/webflow-config.xml,
    				/WEB-INF/config/applicationContext-customersearch.xml,
    				classpath:/META-INF/customersearch/applicationContext-statemachine-customersearch.xml,
    				classpath:/META-INF/customersearch/applicationContext-statemachine-customersearch-guards.xml,
    				classpath:/META-INF/customersearch/applicationContext-statemachine-customersearch-controlobjects.xml,
    				classpath:/META-INF/findcustomer/applicationContext-statemachine-findcustomer.xml,
    				classpath:/META-INF/findcustomer/applicationContext-statemachine-findcustomer-guards.xml,
    				classpath:/META-INF/findcustomer/applicationContext-statemachine-findcustomer-controlobjects.xml,
    				classpath:/META-INF/findorders/applicationContext-statemachine-findorders.xml,
    				classpath:/META-INF/findorders/applicationContext-statemachine-findorders-guards.xml,
    				classpath:/META-INF/findorders/applicationContext-statemachine-findorders-controlobjects.xml,
    				/WEB-INF/config/applicationContext-manager.xml</param-value>
    		</init-param>
    		<load-on-startup>1</load-on-startup>
    	</servlet>
    	<servlet>
    		<servlet-name>SpringResourceServlet</servlet-name>
    		<servlet-class>org.springframework.js.resource.ResourceServlet</servlet-class>
    		<load-on-startup>0</load-on-startup>
    	</servlet>
    	<servlet>
    		<servlet-name>PrimefacesResourceServlet</servlet-name>
    		<servlet-class>org.primefaces.resource.ResourceServlet</servlet-class>
    	</servlet>
            <servlet>
    		<servlet-name>FacesServlet</servlet-name>
    		<servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
    		<load-on-startup>1</load-on-startup>
    	</servlet>
            .....
           <servlet-mapping>
    		<servlet-name>PrimefacesResourceServlet</servlet-name>
    		<url-pattern>/primefaces_resource/*</url-pattern>
    	</servlet-mapping>
    	<servlet-mapping>
    		<servlet-name>SpringResourceServlet</servlet-name>
    		<url-pattern>/resources/*</url-pattern>
    	</servlet-mapping>
    	<servlet-mapping>
    		<servlet-name>spring_mvc</servlet-name>
    		<url-pattern>/spring/*</url-pattern>
    	</servlet-mapping>
            <servlet-mapping>
    		<servlet-name>FacesServlet</servlet-name>
    		<url-pattern>/faces/*</url-pattern>
    	</servlet-mapping>
    </web-app>
    

    snippet 47a

    It actually consists of defining the Spring DispatcherServlet, which is responsible for initializing the Spring Framework with the stated application context files like '/WEB-INF/config/webmvc-config.xml, /WEB-INF/config/webflow-config.xml, /WEB-INF/config/applicationContext-customersearch.xml', and of course the FacesServlet.

    The rest is only defining Primefaces dependencies in the Maven pom file.

             <dependencies>
                    .........
    		<!-- PRIMEFACES -->
    		<dependency>
    			<groupId>org.primefaces</groupId>
    			<artifactId>primefaces</artifactId>
    		</dependency>
                    ........
                </dependencies>
    

    snippet 47b

    And of course we use Primefaces tags in the .xhtml files.

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml"
    	xmlns:ui="http://java.sun.com/jsf/facelets"
    	xmlns:h="http://java.sun.com/jsf/html"
    	xmlns:f="http://java.sun.com/jsf/core"
    	xmlns:fn="http://java.sun.com/jsp/jstl/functions"
    	xmlns:p="http://primefaces.prime.com.tr/ui">
    
    	<f:view contentType="text/html" encoding="UTF-8">
    		<h:head>
    			<title>Customer Search - Spring WebFlow - Primefaces - State Machine Demo</title>
    			<meta content="text/html; charset=utf-8" http-equiv="Content-Type" />
    		</h:head>
    		<body>
    			<script>
    				function handlePublish(pushed) {
    					handlePublishInternal();
    				}
    			</script>
    			<p:outputPanel ajaxRendered="true" id="handlePuplishRemoteCommand">
    				........
    			</p:outputPanel>
                            .......
                            <p:commandButton id="customerSearchViewCustomerDetail" value="Continue"
    		    				ajax="true" update="customerSearchRunning-panel_empty_layout,customerSearchFound-empty-panel,customerSearchAuthentication-empty-panel">
    		    	</p:commandButton>
    		    </h:form>
    		</body>
    	</f:view>
    </html>
    

    snippet 47c

    One point you have to pay attention to here: Primefaces needs some JavaScript to operate; to include it successfully in the rendered page, the following tag should be present in the root .xhtml file.

    <h:head>
    	.......
    </h:head>
    

    snippet 47d

    After that, you can use Spring and Primefaces together.

    You can even take this one step further: if you define the following element in your faces-config.xml, you can reference your Spring beans directly from the .xhtml files.

    <?xml version="1.0" encoding="UTF-8"?>
    <faces-config xmlns="http://java.sun.com/xml/ns/javaee"
    	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    	xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
    	http://java.sun.com/xml/ns/javaee/web-facesconfig_2_1.xsd"
    	version="2.1">
    
    	<application>
    		<el-resolver>
        		    org.springframework.web.jsf.el.SpringBeanFacesELResolver
    		</el-resolver>
      	</application>
    </faces-config>
    

    snippet 47f

    -Comet/Atmosphere
    If you have followed our discussion until now, you know one really critical feature of our application is its ability to notify the web browser asynchronously that there is a State change in our State Machine and that it should re-render.

    The general concept of forcing a browser to re-render is called Comet, which can be summarized as keeping a channel to the Web Server open to receive asynchronous notifications.

    Typical implementations of this technology are Jetty Continuation, Tomcat CometProcessor, Grizzly CometHandler or Netty WebSocket, but as you see these are tightly coupled to the mentioned platforms; there are also Open Source Frameworks like Atmosphere, cometD or JWebSocket.

    For our application we chose Primefaces as the JSF implementation, and its way of dealing with Comet is Atmosphere, so we will analyze that.

    Atmosphere is based on WebSocket technology, and all major containers like Jetty, Tomcat, Grizzly and Netty are WebSocket compatible.

    All the magic of Atmosphere happens inside the AtmosphereServlet, which we configure in our web.xml (we are using Primefaces, so we use its subclass, the Primefaces 'PushServlet').

    Depending on whether your application server is WebSocket compatible or not, Atmosphere has two ways of dealing with the problem. If it is WebSocket compatible, it will use native IO and open a non-blocking communication channel, which means the channel between the web browser and the server stays open without blocking any of the application server's threads. If it is not WebSocket compatible, it will block one of the application server's threads per connection. You should consider this when doing your capacity planning.

    When a client connects to Atmosphere, it opens a communication channel and subscribes to the topics that the push components in the browser listen to; when an interesting event happens for such a topic, a broadcaster publishes it.

    So the lifecycle is as follows: the client sends a request, which is suspended until an interesting event for the GUI occurs; the broadcaster publishes the event, and when the connection is no longer necessary it is shut down.

    -Creation of Custom Stereotypes
    We are using the UML language, but there are cases where the information we can represent in our Model is not enough for our implementation.

    This is the case for us, for example, when we try to store the name of the actual implementation class for our Transition Actions and Guards.

    If you remember the following pictures

    picture_28
    picture 31

    picture_29
    picture 32

    We used a new element here to store the information we need. This is a standard UML concept called Stereotypes, which is designed exactly for our purpose.

    As you see, I created a stereotype called swf_statemachine::SwfTransition, which is a stereotype that can be applied to a Transition (in the screenshot I applied it to the SearchRunningTransition) and contains 2 attributes called Action Implementation and Guard Implementation.

    So how can you create stereotypes and introduce them to the UML Model? I will explain the process for Topcased.

    First of all, we have to create a new model in Topcased with the following settings.

    picture_30
    picture 33

    The key point here is that we selected a profile element as a template and swf_test as the profile name.

    In the next step, we created our Stereotype, StatemachineTestStereotype

    picture_31
    picture 34

    based on the MetaClass StateMachine, which means that the moment we define the MetaClass, this Stereotype can only be applied to objects of this type, in this case uml::StateMachine objects.

    Now we can use these Stereotypes to transfer more information to the UML Model, or we can just use them to tag objects that we want to have a special behaviour. For example, when the M2T sees this Stereotype, it applies a special template to the tagged element.

    Now let's see how we can define properties on a Stereotype; these can have simple types like strings or complex types like enumerations.

    picture_32
    picture 35

    Now our Stereotype is complete, but how do we introduce it to our model? First we have to define our Stereotype the UML way, which can be done with the following command.

    picture_33
    picture 36

    Now save, and your new Stereotype is ready to use; the only thing left to do is include it into the model.

    picture_34
    picture 37

    Load your new Stereotype/Profile as a resource into the model you want to use it in, and also choose 'apply profile' after it (one thing to point out here: when you load the Resource, it will present the option to use the absolute path or the path relative to your workspace; use the relative option, otherwise you might get portability problems from environment to environment).

    picture_35
    picture 38

    This will present you the menu where we should select StateMachineTest (in this case I am selecting the SwfStateMachine profile I actually use in the model).

    picture_36
    picture 39

    Groovy – Runtime Modification of the State Machine in Production
    One thing that might be criticized about the ideas cited above is that you have to know all of your use cases before going to production. A system that is not using these ideas might somehow keep working when it encounters a use case it doesn't know how to handle.

    It might do the correct thing or not, but it will continue to work; of course it will not report that something occurred that it did not know how to handle (so that you could take precautions next time), but it will not block the user either.

    With a State Machine the situation is the opposite: if the State Machine encounters a Use Case that it does not know, it will stop the world and the user will not be able to proceed; of course it will report the use case so you can resolve it for the next software delivery.

    This can be quite problematic, especially at the start-up phase of an application, which will contain lots of unknowns and bugs that will be fixed in future iterations; but that does not help the first users discovering these problems.

    We should have a mechanism to deliver a solution to the user who doesn't want to wait 3 months for the next release. This mechanism is what I call dynamic configuration of a State Machine at runtime.

    As you read in the previous sections, all the information influencing what is displayed in our web application GUI is held in the Control Objects of our State Machines, so if we can modify this data at runtime, we can change the behaviour of our application at runtime.

    So how are we going to do this?

    We will encounter most of the missing Use Cases when we are in a certain State in our State Machine and the user triggers an event that we were not expecting, either because we didn't know the Use Case or because of a plain bug.

    Of course, this idea should not be abused; it is a solution for critical production software problems and should not be used as a standard software development method. After the emergency solution has been implemented, the dynamically defined State Machine definitions have to be transferred to the UML diagrams for the next release.

    So what will happen is that our application will have a database table containing the information telling us for which State Machine, Event, Transition and Guard condition a dynamic configuration should occur, and which Action this dynamic configuration should execute.

    The critical point here is the definition of the Guard condition and Action: these are saved as Groovy scripts in the database, so you can place them into a running production system and change the behaviour of the Control Objects of the State Machine.
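
    A hypothetical sketch of what one row of that table could carry; the real schema is not shown in this blog, so all names here are illustrative only.

    public class DynamicTransitionDefinition {
    	private String stateMachineName; // e.g. "CustomerSearchSM"
    	private String sourceStateName;  // State in which the unexpected Event arrives
    	private String eventName;        // Event that should trigger the dynamic Transition
    	private String guardScript;      // Groovy source deciding whether the Transition fires
    	private String actionScript;     // Groovy source injected into the GroovyAction below

    	// Getters and setters omitted for brevity.
    }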

    To reach this goal, we have a special Action type called GroovyAction, which is used for the Transitions that we configured in the database.

    import java.util.HashMap;
    import java.util.Map;

    import groovy.lang.Binding;
    import groovy.lang.GroovyShell;

    public class GroovyAction implements IAction {
        private String groovyScript;

        public void processAction(TvppEvent tvppEvent, IStateMachineContext stateMachineContext,
                IStateMachine stateMachine) {
            // Expose the State Machine context to the Groovy script.
            Map<String, Object> binderContainer = new HashMap<String, Object>();
            binderContainer.put("stateMachineContext", stateMachineContext);

            Binding binding = new Binding();
            binding.setVariable("bindingContainer", binderContainer);

            GroovyShell groovyShell = new GroovyShell(binding);

            // The script itself is loaded from the database and injected via the setter.
            groovyShell.evaluate(groovyScript);
        }

        public void setGroovyScript(String groovyScript) {
            this.groovyScript = groovyScript;
        }
    }
    

    snippet 48

    As you see, this binds the State Machine context configured for this Transition into a binding container for Groovy, so the Groovy script can access it. The Groovy script configured in the database is injected into this Action, and the 'groovyShell.evaluate(groovyScript)' call executes the Groovy code, which would look like the following.

    def stateMachineContext = (StateMachineContext) bindingContainer['stateMachineContext']
    def myTest = (CustomersearchSMControlledObject) stateMachineContext.controlObjects['customersearchSMControlledObject']

    myTest.setRenderSearchEnabled(false)
    

    snippet 49

    which obtains a reference to the Control Object and sets the variable controlling the visibility of the Search button to 'false', so it will not be rendered anymore.
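
    If you want to experiment with this binding mechanism in isolation, the following minimal, self-contained sketch shows the principle; the map stands in for the real IStateMachineContext, and the script text is inlined here instead of being loaded from the database.

    import java.util.HashMap;
    import java.util.Map;

    import groovy.lang.Binding;
    import groovy.lang.GroovyShell;

    public class GroovyBindingDemo {
        public static void main(String[] args) {
            // stand-in for the Control Object data held in the State Machine context
            Map<String, Object> controlObject = new HashMap<String, Object>();
            controlObject.put("renderSearchEnabled", Boolean.TRUE);

            Map<String, Object> binderContainer = new HashMap<String, Object>();
            binderContainer.put("stateMachineContext", controlObject);

            Binding binding = new Binding();
            binding.setVariable("bindingContainer", binderContainer);

            // in the real application this script text comes from the database
            String groovyScript =
                    "def ctx = bindingContainer['stateMachineContext']\n"
                    + "ctx['renderSearchEnabled'] = false";

            new GroovyShell(binding).evaluate(groovyScript);

            System.out.println(controlObject.get("renderSearchEnabled")); // prints false
        }
    }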

    One thing to consider here: what we offer with this feature is only an emergency solution. If your application misbehaves in production because of a Use Case you didn't discover during the development and test phases, this will enable you to solve the problem quickly. It should not be used as a regular development method, because it would take us away from the Model Driven Software Development principles.

    After the emergency is resolved, the discovered Use Cases must be transferred to the UML Model, and the emergency solution should be removed from the database with the next deployment to production.

    Preparing Project Environment
    I think we have reached the point where you want to get your hands dirty.

    The project is controlled with a version control system called Git, which you can download from the following URL (http://git-scm.com/).

    Personally I am using cygwin (a Unix shell emulator for Windows) and its git client, but you can also use a graphical user interface like TortoiseGit (http://code.google.com/p/tortoisegit/).

    If you use cygwin or any git shell, you can get the project with the following command:

    git clone -b MWE2 git@github.com:mehmetsalgar/swf_statemachine.git

    This will load the project files from GitHub and create the directory structure that I showed in the previous screenshots.

    A successful git clone will look like the following.

    git clone -b spring4_update git@github.com:mehmetsalgar/swf_statemachine.git

    Cloning into ‘swf_statemachine’…
    remote: Reusing existing pack: 926, done.
    remote: Counting objects: 35, done.
    remote: Compressing objects: 100% (29/29), done.
    remote: Total 961 (delta 7), reused 0 (delta 0)
    Receiving objects: 100% (961/961), 205.38 KiB | 309 KiB/s, done.
    Resolving deltas: 100% (270/270), done.

    If you see a message like the one above, you have successfully got the project into your environment.

    The project is a Maven project, so if you don't have Maven in your environment, you have to download it from (http://maven.apache.org/download.cgi).

    If you are not experienced with Maven, one point you have to be careful about: if you are working behind a company firewall, you have to configure the following lines in settings.xml in the conf directory of your Maven installation.

    <proxies>
        <!-- proxy
         | Specification for one proxy, to be used in connecting to the network.
         | -->
        <proxy>
          <id>optional</id>
          <active>true</active>
          <protocol>http</protocol>
          <username>xxxxx</username>
          <password>xxxxxx</password>
          <host>some.proxy.com</host>
          <port>3128</port>
          <nonProxyHosts>127.0.0.1</nonProxyHosts>
        </proxy>
      </proxies>
    

    snippet 50

    If you are working with Windows, it might be wise to change your Maven repository location from MyDocuments to a simpler path, because some classpaths become too deep and run into the 255 character path limit of Windows. So it is better to select a shallow directory for your repository, for example c:/repo_mvn.

    <localRepository>c:/repo_mvn</localRepository>
    

    snippet 51

    Now you can build the project for the first time; execute the following command in a DOS prompt or in cygwin, in the directory '/swf_statemachine/swf_statemachine'.

    mvn clean install

    If you see output like the following, your build was successful.

    [INFO] ————————————————————————
    [INFO] Reactor Summary:
    [INFO]
    [INFO] Spring WebFlow – State Machine ……………….. SUCCESS [3.422s]
    [INFO] Spring WebFlow – State Machine – Comet ………… SUCCESS [37.095s]
    [INFO] Spring WebFlow – State Machine – Fornax Extensions SUCCESS [10.609s]
    [INFO] Spring WebFlow – State Machine – Domain Model ….. SUCCESS [16.313s]
    [INFO] Spring WebFlow – State Machine – XPand(M2T) Templates SUCCESS [0.812s]
    [INFO] Spring WebFlow – State Machine – Implementation … SUCCESS [2.766s]
    [INFO] Spring WebFlow – State Machine – Techdemo Domain Model SUCCESS [7.844s]
    [INFO] Spring WebFlow – State Machine – Statemachine Model SUCCESS [6:34.974s]
    [INFO] Spring WebFlow – State Machine – Statemachine Model Implementation SUCCESS [3.093s]
    [INFO] Spring WebFlow – State Machine – TechDemo ……… SUCCESS [22.751s]
    [INFO] ————————————————————————
    [INFO] BUILD SUCCESS
    [INFO] ————————————————————————
    [INFO] Total time: 8:20.303s
    [INFO] Finished at: Fri Mar 21 10:18:23 CET 2014
    [INFO] Final Memory: 76M/495M
    [INFO] ————————————————————————

    As I mentioned before, I used TOPCASED for modeling purposes on this project (you can download it from here) (Correction: it seems another company took over the project, but you can download the binaries from here: TOPCASED 5.3.1). TOPCASED is also a Java Development Environment.

    If you decide to work with the Eclipse environment, you have two options to bring the projects into Eclipse. All modern Eclipse releases automatically include the M2Eclipse plugin, so you can simply import the project as a Maven Project.

    picture 40

    Or, if you don't want to use M2Eclipse, you can use the following command

    mvn eclipse:eclipse

    and import the created projects as Eclipse Projects.

    picture 41

    Personally I developed the techdemo application on Tomcat, but there is nothing preventing it from being deployed to JBoss, Glassfish, Jetty or other containers.


    For Tomcat there is one critical modification needed in the container to get Techdemo running. The Techdemo application uses Primefaces and the Atmosphere/Comet framework extensively for Ajax push functionality. Tomcat needs a special configuration to activate WebSocket support, which is a crucial part of the Atmosphere framework.

    The following change has to be made in the server.xml of the Tomcat installation.

    <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
                   connectionTimeout="20000"
                   redirectPort="8443" />
    

    snippet 52

    org.apache.coyote.http11.Http11NioProtocol is the connector implementation that supports the WebSocket protocol.

    After that, copy swf_statemachine_techdemo-1.0-SNAPSHOT.war to the webapps directory of the Tomcat installation, fire up Tomcat with startup.bat, and you are ready to go.

    To reach the techdemo application, call the following URL (http://localhost:8080/swf_statemachine_techdemo-1.0-SNAPSHOT/spring/customersearch).

    Test Rig

    One of the biggest advantages of the methodology we explained up to here is testability. With the information you have so far, you can test your whole GUI with unit tests. Let's see how we can do that.

    We can actually verify in a test rig which GUI elements will be displayed in the GUI.

    The unit test is constructed as follows in the test class org.salgar.swf_statemachine.customersearch.CustomerSearchStateMachineTest.

    First of all we have to initialize the Spring Application Context, which consists of the following elements…

    @ContextConfiguration(locations = {
    		"/META-INF/scope.xml",
    		"/META-INF/customersearch/applicationContext-statemachine-customersearch.xml",
    		"/META-INF/customersearch/applicationContext-statemachine-customersearch-guards.xml",
    		"/META-INF/customersearch/applicationContext-statemachine-customersearch-controlobjects.xml",
    		"/META-INF/findcustomer/applicationContext-statemachine-findcustomer.xml",
    		"/META-INF/findcustomer/applicationContext-statemachine-findcustomer-guards.xml",
    		"/META-INF/findcustomer/applicationContext-statemachine-findcustomer-controlobjects.xml",
    		"/META-INF/findorders/applicationContext-statemachine-findorders.xml",
    		"/META-INF/findorders/applicationContext-statemachine-findorders-guards.xml",
    		"/META-INF/findorders/applicationContext-statemachine-findorders-controlobjects.xml",
    		"/META-INF/applicationContext-customerManager-mock.xml" })
    public class CustomerSearchStateMachineTest extends
    		AbstractTestNGSpringContextTests {
    

    snippet 53

    which are, in this case, the State Machine definitions of the Customer Search SM, Find Customer SM and Find Orders SM, plus their control objects and guards.

    First we have to mock our Comet system (which is responsible for the asynchronous communication with the web browser; I will explain the principles in the Appendix).

    @Test(dependsOnMethods = { "initialisation" })
    public void startSearch() {
        CometServiceLocatorMocker.mock();
        Broadcaster broadCasterMock = CometServiceLocatorMocker.getMockObject();
    

    snippet 54

    After that we will initialize our test for this StateMachine.

    StateMachine customerSearchSM = (StateMachine) this.applicationContext.getBean("CustomerSearchSM");

    customerSearchSM.resetStateMachine();

    Assert.assertNotNull(customerSearchSM);
    Assert.assertEquals(StateMachineEnumerationImpl.CustomerSearchSM
            .getStateMachineName(), customerSearchSM.giveActualState()
            .getName().getStateMachineName().getStateMachineName());
    

    snippet 55

    In the initial State of our GUI, nothing should be rendered other than the CustomerSearchInput (which can be seen in the screenshot above for the State WAITING_CUSTOMERSEARCH_START).

    Assert.assertEquals(CustomerSearchSM_StateEnumerationImpl.WAITING_CUSTOMERSEARCH_START,
            customerSearchSM.giveActualState().getName());
    Assert.assertEquals(Boolean.TRUE,
            ((CustomerSearchInputCO) customerSearchSM.getControlObject())
                    .getRenderCustomerSearchInput());
    Assert.assertEquals(Boolean.FALSE,
            ((CustomerSearchRunningCO) customerSearchSM.getControlObject())
                    .getRenderCustomerSearchRunning());
    Assert.assertEquals(Boolean.FALSE,
            ((CustomerSearchAuthenticationCO) customerSearchSM.getControlObject())
                    .getRenderCustomerSearchAuthentication());
    Assert.assertEquals(Boolean.FALSE,
            ((CustomerSearchFoundCO) customerSearchSM.getControlObject())
                    .getRenderCustomerSearchFound());
    Assert.assertEquals(Boolean.FALSE,
            ((CustomerSearchJoinCO) customerSearchSM.getControlObject())
                    .getRenderCustomerJoin());
    Assert.assertEquals(Boolean.FALSE,
            ((CustomerSearchOrderCO) customerSearchSM.getControlObject())
                    .getRenderCustomerOrders());
    Assert.assertEquals(Boolean.FALSE,
            ((CustomerSearchOrderCO) customerSearchSM.getControlObject())
                    .getRenderCustomerOrderLoading());
    
    

    snippet 56

    As you can see, I can check in the test rig exactly how the GUI should display when my real Use Case executes. It checks that in this State only the input form is displayed and nothing else.

    As the next step, we trigger the event simulating the user clicking the search start button.

    CustomerSearchStartEventPayload customerSearchStartEventPayload = new CustomerSearchStartEventPayload();
    Event onSearchStartEvent = new Event();
    onSearchStartEvent
            .setEventType(CustomerSearchSM_EventEnumerationImpl.onStartSearch);
    customerSearchStartEventPayload.setCustomerNumber(customerNumber);
    onSearchStartEvent.setPayload(customerSearchStartEventPayload);
    

    snippet 57

    The next steps initialize some mock objects simulating the partner systems; this is not interesting technology for our test rig, so I will not show the details here.

    Dispatching the event looks like the following…

    CustomerSearchStartEventPayload customerSearchStartEventPayload = new CustomerSearchStartEventPayload();
    Event onSearchStartEvent = new Event();
    onSearchStartEvent
            .setEventType(CustomerSearchSM_EventEnumerationImpl.onStartSearch);
    customerSearchStartEventPayload.setCustomerNumber(customerNumber);
    onSearchStartEvent.setPayload(customerSearchStartEventPayload);
    customerSearchSM.handleEvent(onSearchStartEvent);
    

    snippet 58

    After the State Machine makes the necessary changes, we check the results again. The GUI will be in the State CUSTOMERSEARCH_RUNNING, and the GUI elements that should be displayed are visible in the screenshot above.

    Assert.assertEquals(CustomerSearchSM_StateEnumerationImpl.CUSTOMERSEARCH_RUNNING,
            customerSearchSM.giveActualState().getName());
    Assert.assertEquals(customerNumber,
            ((CustomerSearchCO) customerSearchSM.getControlObject())
                    .getCustomerNumber());
    Assert.assertEquals(Boolean.FALSE,
            ((CustomerSearchInputCO) customerSearchSM.getControlObject())
                    .getRenderCustomerSearchInput());
    Assert.assertEquals(Boolean.TRUE,
            ((CustomerSearchRunningCO) customerSearchSM.getControlObject())
                    .getRenderCustomerSearchRunning());
    Assert.assertEquals(Boolean.FALSE,
            ((CustomerSearchAuthenticationCO) customerSearchSM.getControlObject())
                    .getRenderCustomerSearchAuthentication());
    Assert.assertEquals(Boolean.FALSE,
            ((CustomerSearchFoundCO) customerSearchSM.getControlObject())
                    .getRenderCustomerSearchFound());
    Assert.assertEquals(Boolean.FALSE,
            ((CustomerSearchJoinCO) customerSearchSM.getControlObject())
                    .getRenderCustomerJoin());
    Assert.assertEquals(Boolean.FALSE,
            ((CustomerSearchOrderCO) customerSearchSM.getControlObject())
                    .getRenderCustomerOrders());
    Assert.assertEquals(Boolean.FALSE,
            ((CustomerSearchOrderCO) customerSearchSM.getControlObject())
                    .getRenderCustomerOrderLoading());
    

    snippet 59

    You can see here that we check whether the State Machine changed to the correct State and whether all the correct GUI elements are displayed. In this case, the search input form should be invisible, the GUI must inform the user that the search is ongoing, and all other GUI elements must be invisible.

    We can again verify precisely in our test rig that the application is working correctly.
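
    Since the same render flags are asserted after every transition, you could extract a small helper in your own test class; the following is only a hypothetical sketch, not part of the project.

    // Hypothetical helper: asserts all seven render flags of the Control Object
    // in one call, reducing the repetition in the test methods above.
    private void assertRenderFlags(StateMachine sm, boolean input, boolean running,
            boolean authentication, boolean found, boolean join, boolean orders,
            boolean orderLoading) {
        Object co = sm.getControlObject();
        Assert.assertEquals(Boolean.valueOf(input),
                ((CustomerSearchInputCO) co).getRenderCustomerSearchInput());
        Assert.assertEquals(Boolean.valueOf(running),
                ((CustomerSearchRunningCO) co).getRenderCustomerSearchRunning());
        Assert.assertEquals(Boolean.valueOf(authentication),
                ((CustomerSearchAuthenticationCO) co).getRenderCustomerSearchAuthentication());
        Assert.assertEquals(Boolean.valueOf(found),
                ((CustomerSearchFoundCO) co).getRenderCustomerSearchFound());
        Assert.assertEquals(Boolean.valueOf(join),
                ((CustomerSearchJoinCO) co).getRenderCustomerJoin());
        Assert.assertEquals(Boolean.valueOf(orders),
                ((CustomerSearchOrderCO) co).getRenderCustomerOrders());
        Assert.assertEquals(Boolean.valueOf(orderLoading),
                ((CustomerSearchOrderCO) co).getRenderCustomerOrderLoading());
    }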

    After this we prepare some more mock services simulating the partner systems and finally send our State Machine the event signaling that the customer is found.

    Event customerFoundEvent = new Event();
    customerFoundEvent.setPayload(customer);
    customerFoundEvent.setSource(this);
    customerFoundEvent.setEventType(FindCustomerSM_EventEnumerationImpl.onCustomerFound);
    ((CustomerSearchSMControlObject) customerSearchSM.getControlObject())
            .getFindCustomerSlaveSM().handleEvent(customerFoundEvent);
    

    snippet 60

    Finally, when we reach the State CUSTOMER_FOUND, the following elements are checked (as can be seen in the screenshot).

    Assert.assertEquals(CustomerSearchSM_StateEnumerationImpl.CUSTOMER_FOUND,
          customerSearchSM.giveActualState().getName());
    Assert.assertEquals(customerNumber,
          ((CustomerSearchCO) customerSearchSM.getControlObject()).getCustomerNumber());
    Assert.assertEquals(Boolean.FALSE,
          ((CustomerSearchInputCO) customerSearchSM.getControlObject()).getRenderCustomerSearchInput());
    Assert.assertEquals(Boolean.FALSE,
          ((CustomerSearchRunningCO) customerSearchSM.getControlObject()).getRenderCustomerSearchRunning());
    Assert.assertEquals(Boolean.FALSE,
          ((CustomerSearchAuthenticationCO) customerSearchSM.getControlObject())
          .getRenderCustomerSearchAuthentication());
    Assert.assertEquals(Boolean.TRUE,
          ((CustomerSearchFoundCO) customerSearchSM.getControlObject())
          .getRenderCustomerSearchFound());
    Assert.assertEquals(Boolean.FALSE,
          ((CustomerSearchJoinCO) customerSearchSM.getControlObject()).getRenderCustomerJoin());
    Assert.assertEquals(Boolean.FALSE,
          ((CustomerSearchOrderCO) customerSearchSM.getControlObject())
          .getRenderCustomerOrders());
    Assert.assertEquals(Boolean.FALSE,
          ((CustomerSearchOrderCO) customerSearchSM.getControlObject())
          .getRenderCustomerOrderLoading());
    

    snippet 61

    M2T
    Xpand is a template language for generating text output from EMF-based meta models. It can be used with many different DSLs, like UML, if the correct meta model is defined.

    For Xpand to convert a model to text, it needs a workflow defining how and from where to load the models, how to check them, and how to generate the text.

    If we look at the workflow we use in swf_statemachine_sm_model/src/main/resources/workflow.mwe, here are the elements involved.

    <bean class="org.salgar.swf_statemachine.uml2.Setup" standardUML2Setup="true" />
    

    snippet 62

    which initializes the UML namespaces for Xpand.

    The following snippet tells the workflow that the meta model Xpand must interpret is UML.

    <bean id="uml" class="org.eclipse.xtend.typesystem.uml2.UML2MetaModel"/>
    

    snippet 63

    This makes all standard UML elements known to Xpand, but if you remember the previous topics, there is one concept called Stereotypes, which allows us to store information that is normally not possible to express in plain UML.

    We also have to make this meta model known to Xpand if these Stereotypes are to be interpreted by Xpand.

    <bean id="swf_statemachine" class="org.eclipse.xtend.typesystem.uml2.profile.ProfileMetaModel">
    		<profile value="platform:/resource/swf_statemachine_sm_model/src/main/resources/swf_statemachine.profile.uml"/>
    </bean>
    

    snippet 64

    Now, with this information, an Xpand generator can interpret the information contained in a UML model and generate text from it.

    Following is a definition of such a generator.

    <component id="springGenerator" class="org.eclipse.xpand2.Generator"
    		skipOnErrors="true">
    		<fileEncoding value="ISO-8859-1" />
    		<!--metaModel idRef="EmfMM" /-->
    		<metaModel idRef="uml" />
    		<metaModel idRef="swf_statemachine" />		
    
    		<expand value="template::stateMachineSpring::Spring FOR model" />
    		<outlet path="src/generated/resources" >
    			<postprocessor class="org.salgar.m2t.xml.XmlBeautifier" />
    		</outlet>
    </component>
    

    snippet 65

    As you see, we introduce the previously defined meta models, uml and swf_statemachine, to the generator, then we instruct the generator to expand 'template::stateMachineSpring::Spring FOR model'.

    Now, if you look at the 'swf_statemachine_xpand' project, there is a 'template' directory containing the 'stateMachineSpring.xpt' template file. With the expression 'template::stateMachineSpring::Spring FOR model', Xpand is able to locate this file in the classpath.

    If you open this file and examine it, you will see that

    «DEFINE Spring FOR uml::Model»
    	«EXPAND Root(this) FOREACH (List[uml::Package])ownedElement»
    	«EXPAND SteorotypeGuardsActions(this) FOREACH (List[uml::Package])ownedElement»
    	«EXPAND ControlObjects(this) FOREACH (List[uml::Package])ownedElement»
    «ENDDEFINE»
    

    snippet 66

    it contains a definition for 'Spring FOR uml::Model', which means the 'Spring' template is applied to the UML Model that this workflow passes to the template file, and with the statement

    «EXPAND Root(this) FOREACH (List[uml::Package])ownedElement»
    

    snippet 67

    it calls the 'Root' definition for every 'uml::Package' inside the 'uml::Model'.

    This is the general principle of how Xpand works: we define structures that tell Xpand how to handle specific element types in the model.

    For example,

    «DEFINE Root(uml::Model model) FOR uml::Package»
    	«IF ownedType.typeSelect(uml::StateMachine).isEmpty==false»
    		«FILE "META-INF/" + name + "/applicationContext-statemachine-"+name+".xml"»
    		<?xml version="1.0" encoding="UTF-8"?>
    		<beans xmlns="http://www.springframework.org/schema/beans"
    			xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context"
    			xmlns:util="http://www.springframework.org/schema/util" xmlns:aop="http://www.springframework.org/schema/aop"
    			xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
    				http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
    				http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd
    				http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd">
    			«EXPAND StateMachines(model) FOREACH ownedType.typeSelect(uml::StateMachine)»
    
    			<bean name="defaultGuard" class="org.salgar.swf_statemachine.impl.guard.DefaultGuardImpl" />
    			<bean name="defaultAction" class="org.salgar.swf_statemachine.impl.action.DefaultAction" />
    		</beans>
    		«EXPAND Root(model) FOREACH nestedPackage»
    		«ENDFILE»
    	«ELSE»
    		«EXPAND Root(model) FOREACH nestedPackage»
    	«ENDIF»
    «ENDDEFINE»
    

    snippet 68

    tells Xpand to create a Spring configuration file for every UML Package containing State Machines and to expand those State Machines

    «EXPAND StateMachines(model) FOREACH ownedType.typeSelect(uml::StateMachine)»
    

    snippet 69

    and to repeat the same for every nested UML Package that the package contains.

    «EXPAND Root(model) FOREACH nestedPackage»
    

    snippet 70

    This is the working principle of Xpand: we define structures that tell Xpand how to handle a certain type and let Xpand loop over the model.


    A working sample with JBoss Portal – Spring Webflow – Primefaces

    *****UPDATE***** The technology stack in this blog is now quite outdated. There is a new blog entry at the following link, JBoss GateIn Portal, Spring Webflow and Primefaces, covering the same themes with an updated technology stack. *****UPDATE*****

    Introduction

    In this blog you will find a feasibility study project (available on github) about making JBoss Portal, Spring Webflow and Primefaces work together.

    Naturally I will explain my reasons for producing this project, and along the way you will experience the adventures that were necessary to realize it. I will also provide information about how to get the project from github, build it and deploy it in JBoss Portal, so you can use it for your research or maybe even use it in production.

    Motivations: The idea for writing this blog came out of our struggle in one project with JBoss Portal.

    Our team was full of people with extensive J2EE web development experience but practically no Portal development experience. Like every other good programmer, :), we were in favor of using proven frameworks like Spring, JSF, CXF, etc…

    One inevitable fact that we had to deal with in the days of Web 2.0 was the existence of lots of Ajax functionality (under the hype of Ajax, the business side of the project had decided that the functionality that would normally span 10 web pages in a forms-based web application should be presented in 2 web pages…).

    This created the necessity of turning the JSF components in these 2 web pages on and off with Ajax, which made some things in the application a much bigger challenge. The natural way to proceed in this situation is to first look at what JBoss offers as a solution.

    Those were the times when JBoss had bought Exadel and was working hard to develop their JSF Portlet Bridge and Ajax functionality on top of Richfaces. As we didn't have much experience with the Ajax/JSF combination in the Portal world, we decided to trust JBoss and follow their technology stack… with some initial pain the results were actually OK, but then we started to feel the hidden problems presented by the previously mentioned challenging Ajax requirements…

    Our project depended on several other partner systems, and some of them did not fulfill the performance requirements of a modern web application; because of this particular situation, they were designed asynchronously. So when we needed information from a partner system, we would first send the request, get a technical confirmation that the request was successfully received, and then the partner system would call us back with the real results when they were ready.

    The direct consequence of this scenario is that the GUI renders some wait messages and waits until the asynchronous response is there, then renders the delivered results with Ajax push functionality (Comet)… Now think about the following scenario: we have nearly 30 partner systems, and 15 of them are creating events all the time. If you cannot immediately see what sort of problems this can cause, let me summarize quickly.

    Our application was developed with primitive web application concepts, so the decision whether a JSF component should be rendered or not happens inside the render methods of the components. For an application that has one web page and only one form, that might not be too big of a problem, but our pages were overloaded with the functionality of 10 forms, and the JSF components inside them were turning themselves on and off completely dependent on these asynchronous events…

    Well, things started getting out of hand quickly. Everything that we developed looked good in the development and test environments, but when the software was deployed to production, events arrived in a totally different order and at a different speed. Nothing that we developed and that was accepted by the QA department looked the same in the production environment; while we got the OK from QA during the development phase, lots of the problems were only discovered during the production phase, and lots of code had already been produced, so we were inside a trap…

    On top of that, we were having massive problems with basic frameworks because they were not designed to function in a Portal environment; the biggest headache was the dependency injection functionality of Spring… The Session scope was not functioning correctly inside the Portal framework (in the sense of a J2EE web application). In a Portal environment, Session scope resolves to the portlet session scope, which is only accessible from one portlet, so several different portlets cannot see the same object in the session. In our application, there were several portlets trying to share the same model objects to present a consistent view of the system, and that proved to be a big challenge. Naturally Spring has a solution for the mentioned scope problem: we should use a Spring-aware Portlet (DispatcherPortlet).

    Unfortunately, our decision to go with the JBoss Portlet Bridge limited our possibilities here as well. JBoss Portlet Bridge and Richfaces require that any JSF-based application use their Portlet implementation, which does lots of initialisation for the Richfaces Ajax framework…

    Solutions: For the above mentioned problems the solutions existed; the only problem was to convince them to play together…

    Many of you certainly know that Spring Web Flow is the way to go in such a form-based application (wizard type, let's say), where everything happens as a reaction to events passed to the system.

    Spring Web Flow is a basic State Machine (a somewhat lightweight version of State Machine theory); it is a good design for the requirements of most web applications, with lots of useful functionality. It has some weak spots with Ajax that I will mention later, but it was a good starting point to bring the necessary discipline to our application.

    Unfortunately, any attempt to integrate Spring Web Flow into a Portal environment has to overcome some of the hurdles that I mentioned previously, mainly making a JSF framework (with Ajax capabilities) work with the DispatcherPortlet in JBoss Portal (I know that there are some samples with Liferay without Ajax functionality, but the reason I concentrated on JBoss was that most J2EE web application developers have used the JBoss Application Server at some point in their past, and it is useful to have the services provided with JBoss).

    My first try to integrate the DispatcherPortlet and RichFaces failed monumentally. The JBoss Portlet Bridge was doing too much work for the initialisation of Richfaces; my attempts to imitate that functionality were doomed to fail, because the functionality was dispersed over several layers and it was nearly impossible to figure out without knowing the internals of RichFaces, only by reading their code.

    I was about to give up, but then a friend of mine made me aware of Primefaces. I was intrigued by its simplicity and started looking deeper (mainly at the way Primefaces handles Ajax: it delegates the main Ajax functionality to jQuery and some other proven libraries, and with only one JSF Phase Listener provides the necessary functionality for Ajax).

    From now on you will read my adventures integrating JBoss Portal, Spring Web Flow and Primefaces… First of all, I have to talk about my proof of concept test application.

    I am a person who can think and understand much better with samples, and I believe it is the same for most developers out there. So I would like to have a sample application that people can use to see how things run, look into it, and copy parts of it if they are useful to them…

    My other motivation is to provide a code base to discuss if some problems come up… It is a maven project on github, and if everything works correctly it shouldn't be too big of a problem to get it up and running…

    Basically this project represents the requirements of our actual production project: we have a text field into which we can enter random text, which we can add to a list, or reverse and then add to the list. To simulate the asynchronous functionality, we have a function that takes the texts inside the list and counts how many words exist in them in a separate thread, then waits 2 seconds and sends the results to the web GUI with Ajax push (Comet) (the GUI first renders a wait status to signal that a long running operation is in progress, and on the event it re-renders itself with the actual results). Here is a screenshot of the simple GUI…

    A screenshot from proof of concept project

    Project Structure:

    Before I go into some detail about the challenges that I experienced during the implementation of the technology demo, I would like to show the demo project structure so you will not feel lost when I go into the details. The project is built with maven, so the project structure is accordingly a nested maven project, and it looks like this.

    Project structure

    As you may see, there is one jbp_swf_primefaces main maven project and the other nested projects… The names give a good idea about the content, I guess; in the next part, Challenges, you will get a pretty good idea why they are needed.

    Challenges:

    The first challenge was naturally to convince the DispatcherPortlet to work with a JSF framework inside JBoss Portal.

    The Spring developers had an interesting idea to simulate the Portal lifecycle phases. As you may know, a Portlet has two phases, called the Action and Render phases, and request parameters from the action phase should not be transferred to the render phase. To guarantee this, when the DispatcherPortlet receives an HTTP POST from the JSF framework (a JSF form submit), it goes through the lifecycle of the JSF framework and makes the necessary changes to the model (inside of doActionService).

    At the end of this process, the DispatcherPortlet redirects the response to the client with a 302 HTTP status code, so the browser issues an HTTP GET request and the parameters from the action phase are eliminated (that also has the nice side effect of preventing re-submits of the page via refresh or the back button). This nice feature has one consequence for our purposes: Primefaces uses some POST parameters to tell the JSF framework which partial updates the response should contain, and since the GET request doesn't contain any parameters, this information is now lost.

    To prevent this, I subclassed the DispatcherPortlet with the class SwfPrimefacesDispatcherPortlet (which you can find in the dispatcher-portlet module of the github project) and overrode the method doActionService so that at the end it copies the parameters that are relevant for Primefaces (if you have to modify this method, you must pay attention not to pass every parameter to the GET request; that would cause double submissions).
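
    Conceptually, the subclass looks something like the following sketch; the parameter name here is hypothetical, and the real implementation in the dispatcher-portlet module knows the concrete Primefaces parameter names and the exact hand-over mechanism.

    import javax.portlet.ActionRequest;
    import javax.portlet.ActionResponse;

    import org.springframework.web.portlet.DispatcherPortlet;

    public class SwfPrimefacesDispatcherPortlet extends DispatcherPortlet {
        // hypothetical name; the real Primefaces parameters differ
        private static final String PARTIAL_PARAM = "primefacesPartialProcess";

        @Override
        protected void doActionService(ActionRequest request, ActionResponse response)
                throws Exception {
            super.doActionService(request, response);
            String value = request.getParameter(PARTIAL_PARAM);
            if (value != null) {
                // carry only the Primefaces-relevant parameter over to the render
                // phase that follows the 302 redirect; copying every parameter
                // would cause double submissions
                request.getPortletSession().setAttribute(PARTIAL_PARAM, value);
            }
        }
    }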

    One other point: at least Firefox has a tendency to cache GET requests. To prevent this, I also added a request parameter with a random value.

    I have a small request here: if anybody knows in which class this 302 redirect is triggered in Spring Portlet MVC, please let me know; I could not find it.

    The second challenge was that, even though the Spring Web Flow examples for portals were working with the JSF reference implementation (which is not good enough if you need Ajax functionality), the Richfaces (being too dependent on the JBoss Portlet Bridge) and Primefaces implementations were not working.

    For some reason the jsf-portletbridge from Sun does not work with JBoss Portal, so I had to change some things (funnily, JBoss Portal handles the response streams differently than Liferay, because the sample application from Spring Webflow seems to work under Liferay). So below you can find this story…

    The first thing you have to know: the JSF Reference Implementation, the JSF Portlet Bridge and Spring Web Flow all contain ViewHandlers, and for all these technologies to work correctly under the portal, the classloader has to load them in a certain order.

    Unfortunately, I could not find anything in the JSF specification to wire all these handlers in a configuration that dictates the loading order. So for JBoss, it seems that the only factor affecting the order in which the ViewHandlers are initialized is the order in which the classloader loads the jars, which in JBoss happens to be alphabetical…

    So you will see some funny maven artifact names in this blog; the reason is that I am trying to force JBoss to load the artifacts in a certain order. That brings us to the artifact called mridge (actually bridge, but it has to load after the JSF implementation's ViewHandler, hence mridge). The other option is to build a hard-coded ViewHandler chain, but that is an idea I am not too impressed with either…
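
    For readers not familiar with this mechanism: JSF chains ViewHandlers with the decorator pattern, each handler receiving the previously registered one, which is why the discovery order of the jars (and their faces-config.xml files) matters. A minimal sketch of such a decorating handler, assuming JSF 1.2's ViewHandlerWrapper, could look like this.

    import javax.faces.application.ViewHandler;
    import javax.faces.application.ViewHandlerWrapper;

    // Sketch of the decorator pattern used for ViewHandler chaining; JSF hands
    // the previously registered handler to the constructor, so the chain order
    // follows the order in which the faces-config.xml files were discovered.
    public class ExampleViewHandler extends ViewHandlerWrapper {
        private final ViewHandler wrapped;

        public ExampleViewHandler(ViewHandler wrapped) {
            this.wrapped = wrapped;
        }

        @Override
        public ViewHandler getWrapped() {
            return wrapped;
        }
    }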

    In the mridge project, you will see that I had to implement my own version of a ViewHandler, JBPViewHandler. The main motivation behind it: somehow JBoss Portal handles streams differently than the jsf-portletbridge implementation from Sun expects. For this reason we use a null ViewHandler, so that when the JSF framework tries to close/flush/etc… a stream, it has no effect on the real portal response, and we make the necessary replace operations on the writers depending on what happens in the JSF framework.

    This ViewHandler also has to provide a ViewRoot for the JSF framework to support the namespace issues that you will see in the next paragraph. Another piece of functionality you will see in this project is implementations of ExternalContext, one for the Action phase of the portlet lifecycle, one for the Render phase, and a Factory to be able to configure them. Why do we need that? Well, a portlet can be included in a portal page multiple times, and the portal framework (at least JBoss Portal) has a unique namespace for every inclusion of a portlet (and also a ViewRoot for the same purpose), so there are no name collisions between the JSF components. Somehow the original implementation of Primefaces cannot handle that out of the box and requires some support from the PortletBridge (actually there are some other changes needed in Primefaces, but we will come to that later).

    The functionality to read the namespace exists only in the render phase of the portlet lifecycle, so we have to get this value during the render phase and pass it to the action phase (the initial render phase happens for every portlet before any action phase). We get the value in the ExternalContext built for the render phase (at the moment I am using the session to pass this information to the action phase; I am actually not too happy about it, so if you have a better idea please let me know). On the other side, the ExternalContext built for the action phase takes this value and passes it to the JSF framework. The factory object looks at which portlet phase we are in and passes the corresponding instance to the JSF framework; naturally we have to configure this ContextFactory inside the faces-config of the mridge project.
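
    The hand-over described above boils down to something like this sketch; the attribute name is hypothetical, and the real code lives in the ExternalContext implementations of the mridge project.

    import javax.portlet.ActionRequest;
    import javax.portlet.RenderRequest;
    import javax.portlet.RenderResponse;

    public final class NamespaceHandOver {
        // hypothetical attribute name
        private static final String KEY = "jsf.portlet.namespace";

        // called while building the ExternalContext for the render phase,
        // where the portal namespace is available
        public static void remember(RenderRequest request, RenderResponse response) {
            request.getPortletSession().setAttribute(KEY, response.getNamespace());
        }

        // called while building the ExternalContext for the action phase,
        // where the namespace is otherwise not available
        public static String recall(ActionRequest request) {
            return (String) request.getPortletSession().getAttribute(KEY);
        }
    }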

    The third challenge was related to Primefaces. After the mentioned changes, Primefaces and Spring Web Flow functioned correctly, but only for web 1.0 requests, not for Ajax requests. After some research, I figured out that the problem was the Primefaces Ajax framework (jQuery, btw) making its submits to the portal page.

    Now what I just wrote might seem weird, but here is the problem: when jQuery makes an Ajax request, it expects a partial response, but because the request is submitted to the portal page, the response is the whole portal page and not a partial response, and jQuery is not able to parse it.

    JSF frameworks like Richfaces have a different approach: they communicate directly with the web application lying behind the portal framework. Primefaces, out of the box, doesn't have such functionality. So it seems there are two potential solutions to the problem. The first is a filter that alters the response coming from the portal framework, extracts the partial response from it, and sends that to jQuery on the client side.

    I tested this and it works; you can find the necessary artifact in the maven project portal-filter. If you build the project with the maven profile server_side (mvn clean install -Pserver_side), the deployment artifacts that are produced provide this functionality.
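
    To illustrate the server-side idea, here is a hedged sketch of such a filter; the marker strings are hypothetical, and the real, complete implementation is in the portal-filter project.

    import java.io.CharArrayWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpServletResponseWrapper;

    public class PartialResponseExtractorFilter implements Filter {

        public void doFilter(ServletRequest request, ServletResponse response,
                FilterChain chain) throws IOException, ServletException {
            // buffer the whole portal page instead of streaming it to the client
            final CharArrayWriter buffer = new CharArrayWriter();
            HttpServletResponseWrapper wrapper =
                    new HttpServletResponseWrapper((HttpServletResponse) response) {
                        @Override
                        public PrintWriter getWriter() {
                            return new PrintWriter(buffer);
                        }
                    };
            chain.doFilter(request, wrapper);

            String page = buffer.toString();
            int start = page.indexOf("<partial-response>");   // hypothetical marker
            int end = page.indexOf("</partial-response>");
            String body = (start >= 0 && end > start)
                    ? page.substring(start, end + "</partial-response>".length())
                    : page; // no partial response found: pass the page through
            response.getWriter().write(body);
        }

        public void init(FilterConfig config) {
        }

        public void destroy() {
        }
    }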

    The problem with this: I was not quite happy with the solution, and I was not sure it would work effectively and efficiently in a production environment (creating two response streams, probable memory effects, etc…). The other possibility is to do this modification at the client side (extracting the partial response from the whole portal page response) with minimal consequences. So I modified the jQuery implementation so that it does not throw an error when it gets the whole portal page as a response from the server, but instead tries to extract the partial response with JavaScript and updates the page with jQuery. This way there is no extra load on the server side, and it seems to work well too. You can find this version of jQuery under the directory \primefaces-patch\patch-primefaces\src\main\resources\META-INF\resources\jquery\jquery.js; it is the httpData function, which got the following lines

    if (ct.indexOf("text") >= 0) {
        data = xhr.responseText;
        if (data.indexOf("") >= 0) {
            data = data.substring(data.indexOf(""), data.indexOf("") + "".length);
            // http://groups.google.com/group/jquery-en/browse_frm/thread/95718c9aab2c7483/af37adcb54b816c3?lnk=gst&q=parsexml#af37adcb54b816c3
            data = parseXML(data);
        }
    }

    If any one has a better solution please let me know……

    The fourth challenge was to insert the modified jQuery into Primefaces and also make some minor fixes in the PrimeFacesPhaseListener class, since the version inside Primefaces 1.1 was not programmed with portals in mind: it expects a Servlet request/response and has problems with a Portlet request/response. You can find these changes inside the patch-primefaces maven project; you will also notice that a maven assembly project exists there to create the artifacts containing the patches for Primefaces. The technology demo project references this project as a maven dependency instead of the original Primefaces dependencies.

    The fifth challenge was some inconsistent behaviour in the system: the following line in PrimeFacesPhaseListener.getIdsToUpdate was throwing NullPointerExceptions…

    List dynamicUpdateIds = requestContext.getPartialUpdateTargets();

    requestContext must actually be initialized by the PostRestoreViewHandler class, but for some reason that I still cannot totally understand, it was not invoked by Spring Web Flow. Sometimes it worked, sometimes not. Whenever I tried to debug the system it always worked; most of the time this is a hint that there is some threading problem and a race condition occurring somewhere, but I was not able to figure out the reason. The only thing I can say is that when FlowActionListener.processAction is called in Spring Web Flow and this part of the method is executed

    context.renderResponse();

    the mechanism forcing PostRestoreViewHandler to run is turned off here. The call to the mentioned method places the 'flowRenderResponse' flag into the FlashScope, which is later read in the JsfViewFactory.getView method by the following functionality

    if (!facesContext.getRenderResponse()) {
        JsfUtils.notifyAfterListeners(PhaseId.RESTORE_VIEW, lifecycle, facesContext);
    }

    and then the notifiers are not called… If you read the comment in the method FlowActionListener.processAction about context.renderResponse(): "tells JSF lifecycle that rendering should now happen and any subsequent phases should be skipped required in the case of this action listener firing immediately (immediate=true) before validation". This functionality is intended for the case where the JSF element controlling the submit has its immediate property set, so that the validation phase of the JSF lifecycle is bypassed. The actual code was not checking the existence of the immediate property but still disabled the notifiers, so I modified the code to the following, and then it started working correctly. I still cannot say what the real problem was, but at least I am not getting the NullPointerException anymore.

    if (source.isImmediate()) {
        context.renderResponse();
    }

    I didn't have this problem when I was debugging, which is an indication of a threading problem, but with this solution I can't see the thread relevance… Anyway, if anybody from the Spring Web Flow development team reads this blog and wants to look at what is going wrong, they just have to change the patched FlowActionListener class in the patch-webflow-faces project and deploy it; that would reproduce the problem…

    The sixth and final challenge was a little bit of a fun one. As I wrote at the beginning, our project depended on so many other partner systems and their performance that we had to build some asynchronous structures. To simulate this in the tech_demo project, I built a service that performs a word count over the sentences entered into the list in another thread and waits 2 seconds before delivering the results to the GUI layer.

    Most of you have probably heard the term Comet in the Ajax and Web 2.0 world. Primefaces has a feasibility study version of it (they openly state this and promise a more production-ready version in the future; for more details you can look at their user manual "http://primefaces.googlecode.com/files/primefaces_users_guide_260710.pdf", Chapter 6). With the existing implementation of the Comet functionality in Primefaces, events can only be triggered from the thread that is rendering the JSF components. That was not compatible with the scenario I explained before, so I had to modify some things in the Comet mechanism in Primefaces (which is based on the Atmosphere implementation).

    I found some good ideas in a thread in the Primefaces forum ("http://primefaces.prime.com.tr/forum/viewtopic.php?f=3&t=1480") and adapted them to my needs. The idea is basically to create another version of the AtmosphereServlet that places the Comet broadcaster in a container that we can access outside of the JSF render thread. One thing I had to change: since I need access to the http request and I can't get that from a background thread, I created a service locator that can retrieve the broadcaster instance from the background thread. So a call like this becomes possible from the background thread…

    broadcaster.broadcast("sentences are refreshed");

    You can find the CometServletLocator and the CometServlet in the Comet maven project.
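
    Put together, a background worker could then push to the browser roughly like this; the accessor name on CometServletLocator is an assumption of mine, the real signature is in the Comet project.

    import org.atmosphere.cpr.Broadcaster;

    public class WordCountWorker implements Runnable {

        public void run() {
            try {
                Thread.sleep(2000); // simulate the slow partner system
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            // hypothetical accessor; the real locator lives in the Comet project
            Broadcaster broadcaster = CometServletLocator.getBroadcaster();
            broadcaster.broadcast("sentences are refreshed");
        }

        public static void main(String[] args) {
            new Thread(new WordCountWorker()).start();
        }
    }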

    Here I also modified the JBoss/Tomcat connector for port 8080 to use the NIO protocol implementation, because it is mentioned so in the Atmosphere documentation. We should make the following change in server.xml in deploy/jboss-web.deployer, so it looks like this.

    <Connector port="8080" address="${jboss.bind.address}"
               maxThreads="250" maxHttpHeaderSize="8192"
               emptySessionPath="true" protocol="org.apache.coyote.http11.Http11NioProtocol"
               enableLookups="false" redirectPort="8443" acceptCount="100"
               connectionTimeout="20000" disableUploadTimeout="true" />

    Project build process and deployment:

    As I previously mentioned, the project source code is on github.com. I want to give some pointers here so people can get the project from github.com, build it and deploy it in JBoss Portal. Personally, I am working on Windows at the moment; my best advice is to install cygwin to get the Git functionality.

    Cygwin is a Unix emulation for Windows (you get a Unix shell and you can execute Unix commands). There are some other Git tools, but Git and Maven work perfectly with Cygwin.

    You can get the Cygwin from the following URL.

    Cygwin

    During the installation of Cygwin, it is critical to install the git packages; if you are not familiar with cygwin, the setup screen should look like this…

    Cygwin git setup

    Now we are at the point where we have to get the project from github. If you are not experienced with github, github itself has a really good help page and tutorials under the following url.

    Git Help

    The command that we have to execute in cygwin to get the project from github read-only (which only means you can't commit your changes to my repository) looks like the following. Please first go to the directory in which you would like to place the project, and execute the following command from there.

    git clone git://github.com/mehmetsalgar/jbp_swf_primefaces.git

    This should create the project structure in the directory…

    Now we need Maven to build the project. If you don't have Maven on your machine, you can get it from the following url (I prefer Maven 2.2.1; Maven 3.0 is a little bit picky at the moment)…

    Maven download Url

    Calling the following command in the directory into which you cloned the git repository should build the project.

    mvn clean install -Pclient_side

    If the end result looks like this, it worked…


    [INFO]
    [INFO] ------------------------------------------------------------------------
    [INFO] Reactor Summary:
    [INFO] ------------------------------------------------------------------------
    [INFO] JBoss Portal - Spring Web Flow - Primefaces - Technology Demo SUCCESS [4.844s]
    [INFO] JBoss Portal - Spring Web Flow - Primefaces - Dispatcher Portlet SUCCESS [3.203s]
    [INFO] JBoss Portal - Spring Web Flow - Primefaces - Portlet Bridge SUCCESS [1.593s]
    [INFO] JBoss Portal - Spring Web Flow - Primefaces - Portlet Filter SUCCESS [1.016s]
    [INFO] JBoss Portal - Spring Web Flow - Primefaces - Primefaces Patch SUCCESS [0.031s]
    [INFO] JBoss Portal - Spring Web Flow - Primefaces - Patch Primefaces SUCCESS [1.344s]
    [INFO] JBoss Portal - Spring Web Flow - Primefaces - Patch Primefaces Assembly SUCCESS [3.516s]
    [INFO] JBoss Portal - Spring Web Flow - Primefaces - Spring Webflow Patch SUCCESS [0.015s]
    [INFO] JBoss Portal - Spring Web Flow - Primefaces - Patch Webflow Faces SUCCESS [2.078s]
    [INFO] JBoss Portal - Spring Web Flow - Primefaces - Patch Webflow Faces Assembly SUCCESS [0.610s]
    [INFO] JBoss Portal - Spring Web Flow - Primefaces - Comet ... SUCCESS [1.203s]
    [INFO] JBoss Portal - Spring Web Flow - Primefaces - Technology Demo Portlet SUCCESS [7.578s]
    [INFO] JBoss Portal - Spring Web Flow - Primefaces - Package . SUCCESS [6.391s]
    [INFO] ------------------------------------------------------------------------
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESSFUL
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 34 seconds
    [INFO] Finished at: Sat Nov 27 17:09:33 CET 2010
    [INFO] Final Memory: 40M/63M
    [INFO] ------------------------------------------------------------------------

    I also strongly advise executing the following statement, which downloads the source code of all the frameworks that the project references, so that later on, when we debug things inside the Eclipse IDE, we can see the source code of these frameworks (Spring Web Flow, etc…)

    mvn dependency:sources

    The maven build also prepares the necessary artifacts for deploying the project to JBoss Portal. The package project is a maven assembly project; it groups the necessary artifacts into the server layout. Which artifacts are prepared depends on the maven profile with which we build the project.

    The client_side profile is responsible for making the necessary changes to jQuery. It is patched so that the browser can take the partial responses out of the portal response.

    The server_side profile is responsible for configuring the filter on the server side, so that partial responses are extracted on the server side and transferred directly to the client browser.

    After the build is complete, we should see under the package project's target directory the artifact package-1.0-SNAPSHOT.dir, which is a directory and now contains the server layout for the deployment.

    You can drag & drop (or copy manually) all the files under package-1.0-SNAPSHOT.dir to the server/default directory of JBoss Portal 2.6.4.

    This brings us to the server that we are going to use for this test application. In my production project we were dependent on JBoss Portal 2.6.4, so the instructions here are valid for JSR 168 / Portlet Spec 1.0 and the JBoss 4.2.2 based JBoss Application Server. If I have time in the future, I will try this solution with a JSR 286 / Portlet 2.0 spec container (JBoss Portal 2.7 or GateIn) and probably update this blog.

    You can get the bundled version of the JBoss Portal 2.6.4 from the following URL.

    JBoss Portal 2.6.4 Bundled

    Now we can make some adjustments to the JBoss startup script that will make our life easier. The changes are necessary to be able to debug the project with Eclipse and also because of some memory requirements for hot deployment.

    Please add the following line to the run.bat file (or shell script, depending on the environment in which you are running JBoss; please remember that the following snippet will look different on Unix derivatives), which lies under the bin directory of the JBoss Portal installation directory.

    set JAVA_OPTS=%JAVA_OPTS% -Xms512m -Xmx1024m -XX:+CMSPermGenSweepingEnabled -XX:MaxPermSize=256m -Dsun.rmi.dgc.client.gcInterval=3600000

    Before we start the application, I can give you some pointers on how to prepare Eclipse, in case you have no maven/eclipse experience.

    Please install the m2eclipse plugin into Eclipse from the following URL (instructions can be found there as well).

    M2Eclipse installation URL

    M2Eclipse will find, via the maven poms, the necessary dependencies for the projects, which is critical for debugging. As you remember, we downloaded the sources of all the dependencies known to the project. I can tell you that when you are hunting a problem in an existing framework, it is extremely valuable to have this functionality.

    Now it seems all the preparation steps are complete and we can start with the real testing. Please go to the bin directory of JBoss Portal and call run.bat. The server is up and running when you see the following output in the console.

    18:04:34,690 INFO [Http11NioProtocol] Starting Coyote HTTP/1.1 on http-127.0.0.1-8080
    18:04:34,721 INFO [AjpProtocol] Starting Coyote AJP/1.3 on ajp-127.0.0.1-8009
    18:04:34,737 INFO [Server] JBoss (MX MicroKernel) [4.2.2.GA (build: SVNTag=JBoss_4_2_2_GA date=200710221139)] Started in 1m:5s:936ms

    You may see some warnings and errors because of the NIO connector implementation (they come from the WSRP module), but they are not critical…

    Now we are ready to call the application for the first time. Please use the following URL to see the default JBoss Portal page.

    http://localhost:8080/portal

    On this page you will see several tabs, and one of them is our TechDemo application; when we click the tab, we navigate to our actual application.

    A screenshot from proof of concept project

    It looks like the above: in the text entry field I first entered the text 'test' and added it to the list, then I entered the text 'test demo', reversed it and added it to the list, and finally I called the word count, which asynchronously performed a word count and refreshed the GUI via Comet.

    Final thoughts:

    Now that we have seen the contents of the project and how to build and deploy it, I want to explain quickly what I did with Spring Web Flow, which was the actual starting point of everything.

    I would like to show you the configuration file of the Spring Web Flow and discuss what is happening there.

    <?xml version="1.0" encoding="UTF-8"?>

    <flow xmlns="http://www.springframework.org/schema/webflow"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://www.springframework.org/schema/webflow
              http://www.springframework.org/schema/webflow/spring-webflow-2.0.xsd">

        <view-state id="techDemo-portlet">
            <on-render>
                <evaluate expression="textEntryRVC.setConsumer(textManagedBean)" />
                <evaluate expression="textListRVC.setConsumer(textManagedBean)" />
                <evaluate expression="textListResultRVC.setConsumer(wordCountManagedBean)" />
            </on-render>
            <transition on="add">
                <evaluate expression="textManagedBean.addToSentences(textEntryRVC.textEntered)" />
                <evaluate expression="textEntryRVC.setTextEntered(null)" />
            </transition>
            <transition on="reverse">
                <evaluate expression="textManagedBean.reverse(textEntryRVC)" />
            </transition>
            <transition on="wordCount">
                <evaluate expression="wordCountController.executeWordCount(textManagedBean.sentences, wordCountManagedBean)" />
                <evaluate expression="wordCountManagedBean.setWordCountRunning(true)" />
            </transition>
            <transition on="refresh">
                <evaluate expression="wordCountManagedBean.setWordCountRunning(false)" />
            </transition>
        </view-state>
    </flow>

    Our system is event based, and it needs some discipline to handle these events. Spring Web Flow is exactly the right technology to bring this discipline. If you look at the above configuration file, you will see that the main element in the configuration is the view-state. It defines the state of the GUI when a certain view is rendered (in this case techDemo-portlet, as you may see from the techDemo-portlet.xhtml in the project).

    When the GUI receives events in this state, it knows exactly what to do with them, with the help of the transition elements. For example, when the 'add' event comes, it calls the managed bean to add the text to the list; when the 'reverse' event comes, it calls the managed bean to reverse the string; and finally, when the 'wordCount' event arrives, it calls the WordCountController to count the words in the sentences.

    What you have to pay attention to here: if you take the same xhtml and place it under another portlet (or include fragments of the xhtml in another portlet), you get the same events, but you can do completely different things with them. There would be another view-state, and the transitions can have a completely different meaning; for example, when you get the 'add' event, this time it could just calculate 1 + 1 instead of adding to the list. That brings a concept of re-usability as well.

    That is the power of Spring Web Flow and the state machine: an event under different contexts can have totally different meanings, and we don't have to cover 1000 possible scenarios inside our render methods. The state machine tells us exactly in which state (and context) we are, and we can handle the events accordingly.

    Spring Web Flow has some weaknesses; mainly it is built for the Web 1.0 concept, and the Ajax world is not much taken into account. It expects that we receive events and navigate to other pages/views, but that is not the way it happens with Ajax. With Ajax we submit to the same page and stay on the same page. That is the reason you see transition elements with no 'to' attribute: we are not navigating to another page/view-state. Naturally, when we have lots of Ajax functionality, Spring Web Flow reaches its limits, but anyway it is a good starting point…

    In a further blog, I will also try to explain how we can plug state machines generated from UML into Spring Web Flow to gain much bigger control over our business logic.

    Conclusion:

    Well, I hope you liked what you read. I know it became a little bit too long, so I hope you were able to stay with me. This is my first time trying to write a blog, so if you have any advice on things I can do better, please let me know; I am really interested to hear it.

    I hope what you read will be useful somewhere in your professional life…

    PS. English is not my native language; I try to do my best, but I am sure there are still some mistakes, and I apologize for them.
