Microservices – Fan-Out and Transaction Problems and Solutions with Spring Boot/JBoss and Netflix Eureka

Introduction:
We are all aware that 'Microservices' has been one of the most popular buzzwords of recent years. I always found the idea interesting, but outside of some specific business cases not very practicable. Sure, Amazon and Netflix are doing unbelievable things with it, but their business cases are quite different from those of classical businesses like telecommunications, banks and insurance companies. Those branches' business cases are much more glued together: they have to be transactional and depend on transactions. How can microservices, which by design should live on their own islands, be adapted to these highly transactional environments? There are also other technical problems: if microservices present their interfaces as REST and many of them must be orchestrated to fulfill complex business cases, how will they perform under high load, given the network traffic and serialization costs? I know some big enterprises that tried to use microservices in their projects and hit this wall, which I call the fan-out problem (fanout, fan out, whichever you want as a new buzzword :)): too much network communication, even bringing gigabit networks down. These are, in my opinion, the biggest obstacles people will encounter when trying to implement microservices in old-school enterprise environments. Well, if you want to learn how I solved these problems with JBoss WildFly, Spring Boot and Netflix Eureka, you have to read the rest of the article.

Detailed Problem Definition:
If you look on the internet, you will not find any practicable solution for implementing transactions over several microservices; you will only get the advice that you should design your microservices so that transactions are self-contained, following the bounded-context principle from Domain-Driven Design, which states that the business objects related to the solution of a problem must be grouped together and act together. This way these business objects can have their own database and care only about transactions among themselves, which makes them good candidates for microservices. My problem with this: we are living in an Agile world, aren't we? What would be the consequences if I designed two bounded contexts/microservices, ran them for two years, and then a change required them to belong together? That would be quite a big technological and organizational challenge.

Another proposal is Event Sourcing, which states that instead of having databases modeling several business objects, we should have one database and one table: a table recording all the events in the system. For a bank, for example, all the money a customer deposits to or withdraws from an account is saved as events, and the current account balance is calculated by aggregating the values of all those events. This is such a different way of thinking about and designing systems that I am not even sure the classical businesses I mentioned can grasp it, let alone adopt it; I have trouble convincing these organizations of much smaller changes than this.

One final proposed solution is to delegate operations to transactional queues, which guarantee run-to-completion and eventually consistent states; again, I am not even sure this would be accepted by the classical businesses mentioned above.

You can find all these discussions in the following YouTube video: Microservice Design Principles. Also, the traditional transaction mechanisms and configurations would make the services tightly coupled, denying the main advantages of the microservice architecture and bringing very complex configuration to services that actually should not know anything about each other. What if I told you that I have designed a solution that makes self-configuration of transactions possible without any of the services knowing anything about each other? Would that be interesting?

Also, not many people are openly discussing the fan-out problem at the moment, which is caused by the extreme overhead of serialization via REST/JSON or the Java serialization mechanism (you can see benchmarks of these costs here). The problem is so real that the serialization costs and the chattiness of microservices on the network can even bring gigabit networks down.

Either people are in the middle of developing their systems, haven't discovered the problem yet and will hit the wall soon, or they have discovered it but are not speaking loudly about it; you can see some concern about the topic in the following YouTube video: Microservice Performance. If you follow the topic closely on the internet, read the articles of the major microservice evangelists or watch their presentations, you will get clues that when they have to orchestrate a large number of microservices, they do not use REST-to-REST communication for orchestration but binary protocols. Where are these hints, you may ask; for example, watch the following video at this time index: Netflix Way. Even Netflix, the renowned company providing the best open-source libraries for microservice/cloud functionality, uses binary protocols for inter-microservice orchestration. Of course I can't say exactly what Netflix is doing; as far as I know it has not been openly published. I have heard hints that people are using Apache Kafka to stream the communication between microservices, but is this a solution for mere mortals? Do you want to develop software for your business cases, or spend countless hours trying to configure and maintain such a streaming solution? What if I told you I have a solution that solves this problem without high-end space-shuttle technology, using things that have existed for years, and without betraying microservice principles? Would that be interesting?

Solutions:
The following picture gives an overview of how I will approach solving the fan-out and transaction problems for microservices.
scenario_1
picture 1

Basically, we implement the use cases our microservices should cover as JMX components; I will explain the reasons shortly. For every JMX component we will have a REST facade, first to present the functionality implemented by the JMX component to the outside world, and second to register those REST facades with Netflix Eureka so it can provide failover and load-balancing services. In the picture above you see three microservices: Customer, Product and Order. All of them have an API component (containing the interfaces), a JMX component and a REST facade. Additionally you also see a ProductProcess component, which is mainly responsible for orchestration. This is the component that will cause the most headaches for fan-out and transactions: if all its communication with the other microservices occurs over REST, it will cause too much network overhead and serialization cost, and it will not be possible to build a transactional unit if the whole conversation occurs over REST facades.

So how do we solve this? By injecting the same JMX components into the ProductProcess orchestration layer/composite service. If ProductProcess runs in the same Java Virtual Machine as the Customer, Product and Order microservices, the whole communication happens over the binary protocol of JMX: no serialization, no TCP communication, and because all three microservices and the orchestration layer run in the same JVM, the whole conversation can run inside one transactional unit.

The orchestration components will be smart enough to discover other microservices without any configuration: if a microservice's JMX components exist in the same JVM, they will be discovered automatically and used (this happens via Spring and dependency injection). If the orchestration layer can't detect them but needs their functionality, it will discover them over Netflix Eureka and communicate over REST; of course this means serialization costs and no transactional orchestration, but it gives you the full ability to configure your application via deployment. If you want your application to have top performance and transactions, you have to deploy the components necessary for the orchestration in the same JVM; if not, you can deploy them to different JVMs and let the orchestration layer find them via Netflix Eureka. Of course, if something goes wrong and some of the components can't initialize correctly, the orchestration layer will again fall back to the REST facades, provided transactions are not required.
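As a plain-Java sketch of this decision (all class names here are hypothetical, not taken from the project source), the orchestration layer prefers a locally injected JMX component and falls back to the REST facade resolved via Eureka only when no local component exists:

```java
import java.util.Optional;

// Hypothetical sketch of the orchestration layer's discovery decision:
// prefer a same-JVM JMX component, fall back to a REST facade found via Eureka.
interface ProductService {
    String giveProductName(int productId);
}

class LocalJmxProductService implements ProductService {
    public String giveProductName(int productId) { return "local:" + productId; }
}

class RestFacadeProductService implements ProductService {
    public String giveProductName(int productId) { return "rest:" + productId; }
}

class ProductProcess {
    private final ProductService local;   // injected if present in this JVM, else null
    private final ProductService remote;  // resolved via a Eureka lookup

    ProductProcess(ProductService local, ProductService remote) {
        this.local = local;
        this.remote = remote;
    }

    String resolveProduct(int productId) {
        // Same-JVM JMX call: no serialization, can join a transaction.
        // REST fallback: serialization cost, no shared transaction.
        return Optional.ofNullable(local).orElse(remote).giveProductName(productId);
    }
}
```

In the real project this choice is made by Spring autowiring, not hand-written null checks, but the effect on a call is the same.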

The following sequence diagram represents how the system works when it is deployed to one single JVM…

localdeployment
picture 2

and this one represents the case when the microservices are not deployed to the same JVM.

remotedeploymentorjmxnotavailable
picture 3

Now you may ask: doesn't this break microservice principles? Here two important libraries from Netflix come to the rescue: Eureka and Hystrix. By definition, microservices must have service discovery/failover/load-balancing features; by registering the REST facades with Eureka, we let Eureka know about every instance of the services running in the environment. I also implemented functionality so that the JMX components report their health status to Eureka. So if a JMX component goes down, Eureka will mark it and will not send any further requests to the REST facade instance of the microservice running in that JVM.

If a failure occurs during processing at the orchestration layer, then because I used the Hystrix annotations, the Hystrix fallback mechanism redirects the failing call to Eureka/Ribbon to find a healthy instance of the microservice in the cloud. This way the microservice is not accessed over the binary protocol, but at least the user doesn't see an error; the request is completed, and this fulfills the high-availability requirement of the microservice.
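The essence of this fallback can be sketched in a few lines of plain Java (a simplification: the real project relies on Hystrix's annotations rather than hand-written try/catch, and the names here are illustrative):

```java
import java.util.function.Supplier;

// Simplified illustration of the fallback behavior described above: if the
// primary (in-JVM JMX) call fails, the fallback (a REST call to a healthy
// instance found via Eureka/Ribbon) is executed instead.
class FallbackCommand {
    static <T> T execute(Supplier<T> jmxCall, Supplier<T> restFallback) {
        try {
            return jmxCall.get();       // primary path: binary, in-JVM
        } catch (RuntimeException e) {
            return restFallback.get();  // fallback path: REST over the network
        }
    }
}
```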

Another important concept in microservices is the avoidance of tight coupling; in the end, that is what constitutes a monolith. So looking at my solution, you might ask whether installing so many JMX components under one JVM to fulfill the use cases isn't tight coupling. Actually not: first, no JMX component knows anything other than the API of the other JMX components; second, everything is plug-and-play. If you install all the JMX components necessary for your business case in one single JVM, they will auto-discover each other; if not, the solution will find an instance over Netflix Eureka (but then, of course, the communication will be over REST and not over the binary protocol). You have complete freedom in how you design your system: if you need transactional support and performance, install the JMX components in one single JVM; if you need more failover/load balancing, deploy them to different JVMs with REST facades and let them be discovered via Netflix Eureka.

Another big advantage this solution has over monoliths is the ability to scale where it is needed. In a monolith you might have one piece of functionality that is, let's say, called once per month and another service that is called 1000 times per second. When you want to scale your monolith, you start deploying new instances of it; congratulations, your service that is called once a month now lives on and eats resources in two deployments. With the above solution and with microservices you can scale only what is needed, not everything that happens to live in the monolith.

Solution Walkthrough:
First I have to say this will be a JBoss-inclined solution. There are two features that JBoss possesses that make the implementation of this solution quite easy. Can these features be replicated in other application servers like WebLogic, WebSphere, etc.? Most probably, but again, this is a feasibility study and I chose the path of minimum resistance.

OK, the first big advantage is how JBoss functions: it has a micro-kernel architecture, which lets additional services plug into this kernel via JMX, services like the Transaction Service, the Java Message Service, security, data sources, etc. Sound familiar? The way JBoss operates inherently supports microservice ideas. To take advantage of this, we will deploy the functionality of our microservices as JMX components to the JBoss AS (WildFly, AS7, it can be any version); this option will help us a lot in the next steps of the solution.

The second big advantage is the class-loading mechanism of the JBoss Application Server from version AS7 on. Many of you have probably heard of OSGi containers; now don't panic if you burned your fingers with OSGi containers, and stay with me until I finish explaining. If we are going to deploy our microservices as JMX components, and if these components must communicate for orchestration, then they have to know each other's interfaces. We can't include this information in each other's archives; that would cause ClassCastExceptions and class-loading problems. The standard procedure for solving this problem in application servers is to place these libraries in the root library directory, so they are visible to the root class loader and can be loaded by every component. So you may say: everything is nice and dandy, why mention OSGi?

As many of you already know, placing libraries in the root library directory can also cause all sorts of problems. OSGi saw this and built a revolutionary class-loading mechanism that allows components to load exactly the libraries they need into their class loaders and nothing more. The problem with OSGi was that it was too restrictive and the Java world was not ready for it (OSGi libraries need special descriptors, and to this day not all libraries on the internet have them); it caused too many problems and was nearly abandoned in Java enterprise development. But I guess the JBoss developers saw its power, took all the good ideas into the JBoss AS7 class-loading mechanism and left the problematic things out. This allows components in the JBoss Application Server to load, with the help of a descriptor, exactly the libraries they need and nothing more. This is especially important for us when several microservices orchestrate: they will only have the interface information about each other and nothing more. The Java world also thinks this is a good idea and will implement it in Java 9 with JSR 376 (the Java Platform Module System). This will help us a lot when we start discussing versioning for our microservices.

The next point I would like to explain is the integration of the JBoss micro-kernel structure with Spring. Spring contains a really useful library for accessing and publishing JMX functionality. This is extremely important for us because we want plug-and-play functionality for our JMX components: when I deploy those JMX components to JBoss, they should publish themselves to the JMX server and also discover other JMX components via Spring's autowiring functionality, so we don't have to configure anything explicitly.

The next step is to configure those JMX components so they can report their health status to Netflix Eureka. This gives us failover and load-balancing abilities. Netflix provides some interfaces; services that implement them are registered with Netflix Eureka during the discovery phase and are periodically pinged by Eureka. With these pings Netflix Eureka asks for the status of the service, and if it does not get an OK response, it marks the service DOWN and sends no further requests to that instance. How does this work internally in my project? Every REST facade must have a corresponding JMX component injected. If not, the health-check component sends a DOWN signal (this can happen for two reasons: either there is a problem with the JMX component so it can't be injected, or the JMX component is simply not installed in the local JVM, in which case anyone needing this functionality should find an instance over Netflix Eureka and communicate via REST). Alternatively, the JMX component is injected but answers the REST facade's inquiries with 'NOT OK', and for that reason it is set to DOWN in Netflix Eureka.
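The decision described above can be illustrated with a small plain-Java sketch (names are illustrative; the real implementation plugs into Eureka's health-check callbacks rather than returning strings):

```java
// Illustration of the health rule above: a REST facade reports DOWN when its
// backing JMX component is either missing from this JVM or answers 'NOT OK'.
class FacadeHealth {
    interface JmxComponent { boolean isOk(); }

    private final JmxComponent jmx; // null when the component is not deployed locally

    FacadeHealth(JmxComponent jmx) { this.jmx = jmx; }

    String status() {
        return (jmx != null && jmx.isOk()) ? "UP" : "DOWN";
    }
}
```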

There is another interesting detail here. Remember the orchestration component we spoke about a while ago, the one responsible for the transactional behavior? This health check is really important for that feature. I developed a special annotation (you can see here how exactly it is configured and how it works internally); when it is placed on a method of the orchestration service, it signals our transactional intentions. The annotation also contains the names of the services this transaction depends on, and it automatically registers the health checks of those services. If any of these services signals a DOWN state, the orchestration service marks itself DOWN in Netflix Eureka and should not receive any further traffic for this orchestration-service instance. We need this because if a transaction must run over three services, every one of them must be available and installed in the same JVM; if this is not the case, this service must not accept any further traffic.
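In plain Java, the rule the annotation enforces amounts to the following (a sketch with hypothetical names; the real implementation registers Eureka health checks instead of consulting a map):

```java
import java.util.List;
import java.util.Map;

// Sketch of the rule behind the annotation: the orchestration instance may
// only stay UP when every service its transaction depends on is UP in the
// same JVM.
class FanoutHealthRule {
    static boolean orchestrationUp(List<String> requiredServices,
                                   Map<String, Boolean> localServiceUp) {
        return requiredServices.stream()
                .allMatch(name -> localServiceUp.getOrDefault(name, false));
    }
}
```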

The following activity diagram represents the topics we have discussed.

fanoutactivity
picture 4

I guess this ends the overview of the system; we can look at things in more detail with code samples in the next section.

Project Structure Overview and Code Samples
Let's first look at the project structure…

project_structure_scenario_1
picture 5

So let's go over the project structure…

'api' projects are the interface contracts of our microservices; they will be placed under the module class-loading mechanism of the JBoss Application Server so our JMX beans can communicate with each other. We have six 'api' projects: product_api, customer_api, order_api, and versions 1 and 2 of each (we will also observe how this concept works with multiple versions of a microservice, which happens all the time in real projects).

'sar' projects are the actual implementations of our microservices. SAR stands for Service Archive, a special packaging format of the JBoss AS for JMX beans (mainly for the JBoss micro-kernel). We again have six 'sar' projects: product_sar, customer_sar, order_sar, and versions 1 and 2 of each.

'services' projects are the REST facades for our JMX beans; outside clients will communicate with our JMX beans over these REST facades, and these are registered with Netflix Eureka for service discovery.

'processes' projects are the orchestration-level REST facades. In this layer, the JMX beans necessary for the orchestration scenarios are injected if they are present in the same JVM. If not, Netflix Eureka is queried for a REST facade instance of the JMX bean and the business logic is executed via these facades.

Transactional behavior is also implemented in the orchestration services: if a 'ProcessService' class is annotated with '@TransactionalFanout', all the services cited in the annotation must be available as JMX beans for this 'ProcessService' instance to process requests; otherwise it must mark itself DOWN in Netflix Eureka so no additional requests are routed here.
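A possible shape for such an annotation (this is my sketch of the idea; the attribute name 'services' is an assumption, the real definition lives in the 'annotation' project on GitHub):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical sketch of @TransactionalFanout: it lists the services whose
// JMX beans must be present in this JVM before requests may be processed.
@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@interface TransactionalFanout {
    String[] services();
}
```

At startup, an aspect or bean post-processor can read this metadata via reflection and register the corresponding Eureka health checks.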

One additional piece of functionality we use in 'ProcessService' is the Hystrix library from Netflix. Every communication with other microservices is annotated with Hystrix annotations: if a call to another microservice as a JMX bean fails, Hystrix guarantees that, as a fallback, a REST facade instance is discovered from Netflix Eureka and called (a failed call also guarantees that the JMX bean instance is marked DOWN in Netflix Eureka, so the same instance is not hit again).

'sar-utility' contains the utility classes necessary to introduce Spring capabilities to JBoss JMX service archives, which they don't possess out of the box. There are already other libraries on the internet that provide this functionality, but they have one limitation: they start an AS-wide application context, which we don't want for our microservice concept. We want every microservice to start its own application context, isolated from the other microservices.

'annotation' and its underlying projects are responsible for the implementation of the @TransactionalFanout and @TransactionalOrchestration annotations. With the first, any class that has the annotation should have access to all the JMX beans defined in it, and if not, this 'orchestration service' must mark itself DOWN in Netflix Eureka; the second changes the failure behavior of Hystrix.

'eureka_patch': @TransactionalFanout depends on the health-check feature of Netflix Eureka, but unfortunately the '@ConditionalOnClass' annotation in the class 'EurekaDiscoveryClientConfiguration' was causing problems: @TransactionalFanout tries to configure the health check at runtime, and this was too late in the Spring application-context configuration lifecycle. For this reason I had to patch this class and remove the annotation from the static inner class 'EurekaHealthCheckHandlerConfiguration'.

'hystrix_patch': we need this project for the modifications we have to make to the Hystrix framework for transactional orchestration.

'health-check': this project contains the basic interfaces and components necessary for the Netflix Eureka health-check mechanism.

'support/eureka' is a Spring Boot application that initializes and runs the Netflix Eureka framework.

'support/zuul' is also a Spring Boot application, which initializes and runs the Netflix Zuul framework, an edge server (by definition our microservices should not be open to the outside world without security measures; it is also best practice not to expose single microservices directly but only the services doing the orchestration, so we only allow external users to access ProcessService over the Zuul server).

'assembler': this project assembles our microservice artifacts via the Maven Assembly Plugin so as not to create a nightmare for our operations people during delivery (depending on the number of microservices you can have hundreds of web archives (WARs) or JARs to deploy, and since we are not using enterprise archives (EARs), to prevent monolithic behavior, we need such constructs).

Detailed Project Analysis
The source code for the project is available on GitHub under the following link…
Github
but I would like to give some hints here about the magic happening, so people have pointers when searching through the GitHub repository.

– API Projects – Modular Classloading
The basis of the solution is the JBoss modular class-loading system; for this we need the API projects, because they will be the only thing shared between the microservices.

For example, the Product class, which defines one of our model objects, looks like the following in the 'product_api_v1' project.

package org.salgar.product.api.v1.model;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "PRODUCT_TA")
public class Product {
	@Id
	@GeneratedValue(strategy = GenerationType.IDENTITY)
	private Integer productId;
	private String name;

	public Integer getProductId() {
		return productId;
	}

	public void setProductId(Integer productId) {
		this.productId = productId;
	}

	public String getName() {
		return name;
	}

	public void setName(String name) {
		this.name = name;
	}
}

snippet 1

It is a standard POJO, and the only interesting thing about it is the 'javax.persistence' annotations: since our final solution will have database persistence and transactional capabilities, these POJOs must be decorated this way. Purists may say: define these POJOs as interfaces and annotate the classes in another layer; but the JBoss modular class-loading mechanism can deal with this without any problem.

We also have to define our service interfaces, because we are going to communicate with the other microservices via JMX beans.

package org.salgar.product.api.v1;

import org.salgar.healthcheck.HealthCheck;
import org.salgar.product.api.v1.model.Product;

public interface ProductService extends HealthCheck {
	Product giveProduct(Integer productId);
	Product saveProduct(Product product);
}

snippet 2

This is a simple service; the only interesting thing here is the extended HealthCheck interface, which contains a standard method that is useful for the Netflix Eureka health-check mechanism.
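The HealthCheck interface itself lives in the 'health-check' project; conceptually it only needs to expose a liveness method, roughly along these lines (the method name 'isAlive' is my placeholder, check the GitHub source for the real contract):

```java
// Placeholder sketch of the HealthCheck contract: every JMX service
// implements it so its REST facade can relay liveness to Netflix Eureka.
// The method name 'isAlive' is illustrative, not the project's actual one.
interface HealthCheck {
    boolean isAlive();
}
```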

The Maven artifact produced for 'product_api_v1' must be placed in a special directory with a special descriptor in the JBoss AS. The descriptor (called module.xml) looks like the following.

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.3" name="com.salgar.product_api.v1">
    <resources>
        <resource-root path="product_api_v1-1.0-SNAPSHOT.jar" />
    </resources>
    <dependencies>
        <module name="org.salgar.healthcheck.api" export="true"/>
        <module name="javax.persistence.api" export="true" />
        <module name="org.hibernate" />
        <module name="org.javassist" />
    </dependencies>
</module>

snippet 3

This will be placed in the following directory structure in JBoss.

JBoss Module 1
picture 6

and the directory content looks like the following.

JBoss Module 2
picture 7

The path structure in which we place the artifacts reflects the 'name' attribute of module.xml, 'com.salgar.product_api.v1', and the 'resource-root' element indicates which JARs this module provides.
The really interesting part is the 'dependencies' element: the JBoss modular class-loading mechanism gives us the possibility of referencing other modules that are already configured in JBoss. As you can see, because we used javax.persistence annotations in our POJOs, the modules 'javax.persistence.api', 'org.hibernate' and 'org.javassist' are referenced. As I mentioned before, we will also use the health-check functionality for Netflix Eureka, so we reference that module as well.

– Service Archives (SAR)
As discussed previously, we will implement our business logic as JMX beans to be able to use the plug-and-play functionality the JBoss micro-kernel provides, so we can design our microservices independently of each other, in a loosely coupled fashion.

There are two sticking points here. First, JMX definition/configuration files are really verbose, and we don't want to deal with them because they would make the daily development effort quite tedious. Second, to achieve loose coupling we want to use dependency injection: if a JMX bean depends on another JMX bean in the same JVM, it should locate the bean instance and inject it via autowiring. If the bean instance is not there, it should continue without causing any error. This is the auto-discovery feature I promised.

Fortunately, the Spring Framework fulfills both requirements for us: Spring has a special library for registering and discovering JMX beans (and for using them as normal Spring beans after discovery, so they can be autowired).
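On the consumer side, Spring's JMX access support can turn a registered MBean back into an ordinary bean. A hedged example (the object name and interface shown are illustrative, modeled on this project's naming; the full object names in the real configuration carry more key properties):

```xml
<!-- Proxy to a JMX bean exported by another SAR in the same JVM; after this,
     it can be autowired like any ordinary Spring bean. -->
<bean id="productService" class="org.springframework.jmx.access.MBeanProxyFactoryBean">
    <property name="objectName" value="salgar:name=salgar-product-service-v1" />
    <property name="proxyInterface" value="org.salgar.product.api.v1.ProductService" />
</bean>
```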

Unfortunately, out of the box there is no Spring support in the JBoss AS or in the SAR concept (in later versions of JBoss there is support, but it starts an application-server-wide Spring application context, which we don't want for our microservices; we want every microservice to be self-contained).

To provide Spring support, I had to write some code in the 'sar-utility' project.

First of all, there are some conventions in the JMX bean lifecycle, like 'start' and 'stop'. If we follow these conventions, create a class implementing start and stop methods and place it in a SAR, then JBoss will automatically discover this JMX bean, register it, execute the 'start' method when the bean is initialized, and call 'stop' when the container shuts down. One special point here: the JMX bean must implement an interface whose name contains 'MBean'. For us this is 'SpringInitializerMBean', and the JMX bean implementation that configures the Spring application context looks like the following.

package org.salgar.mbean;

import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class SpringInitializer implements SpringInitializerMBean {
	private static final Log LOG = LogFactory.getLog(SpringInitializer.class);
	private static final String SPRING_CTX = "classpath:/META-INF/jboss-spring.xml";
	private Object ctx;
	
	public SpringInitializer() {
		LOG.info("Starting....");
	}
	
	public void start() throws Exception {
		LOG.info("starting");
		installApplicationContext();
	}
	
	public void stop() throws Exception {
		closeApplicationContext();
	}
	
	@SuppressWarnings("rawtypes")
	private void installApplicationContext() {
        try {
            Class contextClass = Class.forName("org.springframework.context.support.ClassPathXmlApplicationContext");

            @SuppressWarnings("unchecked")
			Constructor constructor = contextClass.getConstructor(new Class[] {String.class});
            Object tmpCtx = constructor.newInstance(new Object[] { SPRING_CTX });
            
            if (tmpCtx != null) {
                this.ctx = tmpCtx;
            }
        } catch (Throwable e) {
            LOG.error(" Unable to load Application Context '" + SPRING_CTX + "'. keeping existing context. Reason: " + e.getMessage(), e);
        }
    }

    private void closeApplicationContext() {
        if (ctx != null) {
            try {
                Method close = ctx.getClass().getMethod("close");
                close.invoke(ctx);
                LOG.info("applicationContext closed.");
            } catch (Throwable t) {
                LOG.error("Unable to close applicationContext '" + SPRING_CTX + "'. Reason: " + t.getMessage()
                        + ". Restart JBoss if possible.", t);
            }
        }
    }

	@Override
	public void test() {
		LOG.info("Starting ....");
	}
}

snippet 4

If you look closely at the code, in the 'start' method we look for a Spring application-context configuration file called 'classpath:/META-INF/jboss-spring.xml' (which must exist in every SAR project we implement) and use it to initialize the Spring application context.
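The matching management interface is not shown above; following the JBoss standard-MBean naming convention it mirrors the public methods of the implementation, roughly like this (reconstructed from the implementation, so treat it as a sketch):

```java
// Reconstructed sketch of the standard-MBean interface; JBoss discovers the
// service through the '<ClassName>MBean' naming convention and drives the
// start/stop lifecycle methods.
interface SpringInitializerMBean {
    void start() throws Exception;
    void stop() throws Exception;
    void test();
}
```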

We will now look at the implementation logic for the Order microservice in 'order_sar_v1'.

First of all, we have to tell Maven to package everything as a service archive (SAR); fortunately there is such a Maven plugin for JBoss. The following is an excerpt of the important parts of the pom.xml of 'order_sar_v1'.

 
 ......
 <packaging>jboss-sar</packaging>
 ......
  <plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>jboss-packaging-maven-plugin</artifactId>
    <extensions>true</extensions>
    <version>2.2</version>
  </plugin>
  .......

snippet 5

The important points here are the 'packaging' type 'jboss-sar' and the 'jboss-packaging-maven-plugin'; these will create an 'order_sar_v1.sar'.

Then we can look at the famous '/META-INF/jboss-spring.xml',

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">
        
        <context:component-scan base-package="org.salgar"/>
        
        <import resource="classpath:/META-INF/context-export/context-applicationContext.xml" />
        <import resource="classpath:/META-INF/order-service/applicationContext-orderService.xml" />
        <import resource="classpath:/META-INF/order-service/applicationContext-dao.xml" />
</beans>

snippet 6

This activates ‘component-scan’ for autowiring, then initializes the Spring JMX components that register the beans with the JVM's JMX Server (‘context-applicationContext.xml’), initializes the beans for our business logic (‘applicationContext-orderService.xml’) and finally imports ‘applicationContext-dao.xml’ for the ‘javax.persistence’ layer.

‘context-applicationContext.xml’

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">

    <context:mbean-export registration="failOnExisting" server="mbeanServer" />

    <bean id="mbeanServer" class="org.springframework.jmx.support.MBeanServerFactoryBean">
        <property name="locateExistingServerIfPossible" value="true" />
    </bean>

</beans>

snippet 7

which locates the JMX Server and registers the MBean export mechanism.

applicationContext-orderService.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

	<!-- MBEAN EXPORTER -->
	<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter" lazy-init="false">
		<property name="beans">
                         <map>
				<entry key="salgar:name=salgar-order-service-v1,type=org.salgar.order.v1.imp.OrderServiceImp,artifactId=salgar-order-service-v1" value-ref="order-service" />
			</map>
         	</property>
		<property name="registrationBehaviorName" value="REGISTRATION_REPLACE_EXISTING" />
		<property name="assembler" ref="assembler" />
	</bean>

	<!-- will create management interface using annotation metadata -->
	<bean id="assembler" class="org.springframework.jmx.export.assembler.MetadataMBeanInfoAssembler">
		<property name="attributeSource" ref="jmxAttributeSource" />
	</bean>

	<bean id="jmxAttributeSource" class="org.springframework.jmx.export.annotation.AnnotationJmxAttributeSource" />

	<bean id="order-service" class="org.salgar.order.v1.imp.OrderServiceJmx" />
</beans>

snippet 8

This creates the Order Service and registers it with:

<!-- MBEAN EXPORTER -->
	<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter" lazy-init="false">
		<property name="beans">
                        <map>
				<entry key="salgar:name=salgar-order-service-v1,type=org.salgar.order.v1.imp.OrderServiceImp,artifactId=salgar-order-service-v1" value-ref="order-service" />
			</map>
		</property>
		<property name="registrationBehaviorName" value="REGISTRATION_REPLACE_EXISTING" />
		<property name="assembler" ref="assembler" />
	</bean>

snippet 9

At this point we have to look at ‘org.salgar.order.v1.imp.OrderServiceJmx’ itself:

package org.salgar.order.v1.imp;

import java.util.List;

import org.salgar.order.api.v1.OrderService;
import org.salgar.order.api.v1.model.Order;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jmx.export.annotation.ManagedOperation;
import org.springframework.jmx.export.annotation.ManagedOperationParameter;
import org.springframework.jmx.export.annotation.ManagedOperationParameters;
import org.springframework.jmx.export.annotation.ManagedResource;

@ManagedResource(objectName = "salgar:name=salgar-order-service-v1,type=org.salgar.order.v1.imp.OrderServiceImp,artifactId=salgar-order-service-v1", description = "Order Service V1", log = true, logFile = "jmx.log")
public class OrderServiceJmx implements OrderService {
	@Autowired
	private OrderService orderService;

	@Override
	@ManagedOperation(description = "Gets a parameter as String and delivers an Order")
    @ManagedOperationParameters({
    	@ManagedOperationParameter(name="orderId", description="Id of the order that we want to load.")
    })
	public Order giveOrder(Integer id) {
		return orderService.giveOrder(id);
	}

	@Override
	@ManagedOperation(description = "Saves an order object")
    @ManagedOperationParameters({
    	@ManagedOperationParameter(name="order", description="Order that we want to save.")
    })
	public Order saveOrder(Order order) {
		return orderService.saveOrder(order);
	}

	@Override
	@ManagedOperation(description = "Keep alive test")
    @ManagedOperationParameters()
	public String giveAlive() {
		return orderService.giveAlive();
	}

	@Override
	@ManagedOperation(description = "Gives the orders of the customer")
    @ManagedOperationParameters({
    	@ManagedOperationParameter(name="customerId", description="Id of the customer who owns the orders")
    })
	public List<Order> giveCustomerOrders(Integer customerId) {
		return orderService.giveCustomerOrders(customerId);
	}
}

snippet 10

As you can see, there are special Spring annotations for JMX. The Spring MBeanExporter uses this information to register the MBean with the JMX Server. For example, ‘@ManagedResource(objectName = “salgar:name=salgar-order-service-v1,type=org.salgar.order.v1.imp.OrderServiceImp,artifactId=salgar-order-service-v1”, description = “Order Service V1”, log = true, logFile = “jmx.log”)’ registers the OrderServiceJmx class as the JMX Bean ‘salgar:name=salgar-order-service-v1,type=org.salgar.order.v1.imp.OrderServiceImp,artifactId=salgar-order-service-v1’, ‘@ManagedOperation(description = “Gives the orders of the customer”)’ registers the operation ‘public List<Order> giveCustomerOrders’, and ‘@ManagedOperationParameter(name=”customerId”, description=”Id of the customer who owns the orders”)’ registers its input parameter customerId.
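To make these mechanics more tangible, here is a minimal, self-contained sketch of what happens under the hood, without Spring: a bean is registered under an ObjectName and its operations are invoked by name over the MBeanServer. The ‘Greeter’ bean and its ObjectName are invented for illustration; Spring's MBeanExporter and MBeanProxyFactoryBean do essentially these two steps for us.

```java
import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.StandardMBean;

public class JmxSketch {

    // The management interface (Spring derives this from the @ManagedOperation metadata).
    public interface GreeterMBean {
        String giveAlive();
    }

    public static class Greeter implements GreeterMBean {
        @Override
        public String giveAlive() {
            return "ALIVE";
        }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("salgar:name=greeter-demo");

        // What MBeanExporter does at deployment time: register the bean under an ObjectName.
        server.registerMBean(new StandardMBean(new Greeter(), GreeterMBean.class), name);

        // What a JMX client (or MBeanProxyFactoryBean) does at call time: invoke by name.
        Object result = server.invoke(name, "giveAlive", new Object[0], new String[0]);
        System.out.println(result); // prints ALIVE
    }
}
```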

OrderServiceJmx is just a facade for the JMX registration; the autowired ‘OrderServiceImpl’ class is responsible for the implementation of the Business Logic.

package org.salgar.order.v1.imp;

import java.util.List;

import org.salgar.order.api.v1.OrderService;
import org.salgar.order.api.v1.model.Order;
import org.salgar.order.v1.dao.OrderRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Component
@Transactional
public class OrderServiceImpl implements OrderService {
	// health-check signal; the concrete value is an assumption here, any non-empty string counts as healthy
	private static final String alive_signal = "ALIVE";

	@Autowired
	private OrderRepository orderRepository;
	
	@Override
	@Transactional(readOnly = true, propagation = Propagation.NEVER)
	public String giveAlive() {
		return alive_signal;
	}

	@Override
	@Transactional(readOnly = true)
	public Order giveOrder(Integer id) {
		return orderRepository.findById(id);
	}

	@Override
	@Transactional(readOnly = false, propagation = Propagation.REQUIRED)
	public Order saveOrder(Order order) {
		return orderRepository.saveOrder(order);
	}
	
	@Override
	@Transactional(readOnly = true)
	public List<Order> giveCustomerOrders(Integer customerId) {
		List<Order> results = orderRepository.giveCustomerOrders(customerId);
		
		return results;
	}
}

snippet 12

As you might notice, this class accesses the API classes that we defined before. Now it is time to show how to configure this SAR to use JBoss Modular Classloading. For this we need a special JBoss configuration file, ‘jboss-deployment-structure.xml’, which is placed in the META-INF directory; its content looks like the following….

<?xml version="1.0"?>
<jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.1">
    <deployment>
        <dependencies>
        	<module name="org.salgar.order.api.1_0"/>
        	<module name="org.salgar.product.api.1_0"/>
        	<module name="org.salgar.customer.api.1_0" />
        	<module name="org.hibernate" />
       </dependencies>
        <resources>
        	<resource-root path="lib/sar-utility-1.0-SNAPSHOT.jar" />
        	<resource-root path="lib/spring-context-4.2.6.RELEASE.jar" />
        	<resource-root path="lib/spring-beans-4.2.6.RELEASE.jar" />
        	<resource-root path="lib/spring-core-4.2.6.RELEASE.jar" />
        	<resource-root path="lib/spring-aop-4.2.6.RELEASE.jar" />
        	<resource-root path="lib/spring-expression-4.2.6.RELEASE.jar" />
        </resources>
        <exclusions>
        </exclusions>
    </deployment>
</jboss-deployment-structure>

snippet 13

The modules that we defined before, ‘org.salgar.order.api.1_0’, ‘org.salgar.product.api.1_0’ and ‘org.salgar.customer.api.1_0’, are configured as dependencies here; with these instructions our SAR's classloader will load these libraries via JBoss Modular Classloading. Because we are using ‘javax.persistence’, ‘org.hibernate’ is also loaded from JBoss Modular Classloading, since JBoss already provides this library (this also prevents ClassCastExceptions when we later inject these JMX Beans into the ‘ProcessService’ and orchestrate the transactions).

Finally, we tell the classloader to load the following libraries from the ‘lib’ directory of the SAR: first ‘sar-utility-1.0-SNAPSHOT.jar’, which is responsible for initializing Spring for our SAR, and then the necessary Spring libraries. (This is also really critical: it gives us the possibility of deploying Micro Services with different Spring versions if required, instead of a Monolith's single, fixed version of every library for the whole application.)

Another interesting detail in the ‘OrderServiceImpl’ class is the autowiring of ‘OrderRepository’, which is the ‘javax.persistence’ Repository implementation for Order beans. Here is the concrete implementation….

import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.Query;

import org.salgar.order.api.v1.model.Order;
import org.springframework.stereotype.Repository;

@Repository
public class JpaOrderRepository implements OrderRepository  {
	@PersistenceContext
	private EntityManager entityManager;
	
	@Override
	public Order saveOrder(Order order) {
		return entityManager.merge(order);
	}

	@Override
	public Order findById(Integer id) {
		return entityManager.find(Order.class, id);
	}

	@Override
	public List<Order> giveCustomerOrders(Integer customerId) {
		Query query = entityManager.createQuery("SELECT o FROM Order o WHERE o.customer.id= :id ");
		@SuppressWarnings("unchecked")
		List<Order> results = (List<Order>) query.setParameter("id", customerId).getResultList();
		return results;
	}
}

snippet 14

This requires the injection of the EntityManager via the @PersistenceContext annotation; to be able to do that, we have to configure the persistence context, which happens in ‘META-INF/applicationContext-dao.xml’ and ‘persistence.xml’.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jee="http://www.springframework.org/schema/jee" xmlns:context="http://www.springframework.org/schema/context" xmlns:mvc="http://www.springframework.org/schema/mvc" xmlns:tx="http://www.springframework.org/schema/tx" xmlns:aop="http://www.springframework.org/schema/aop" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd">
        
        <jee:jndi-lookup id="datasource" jndi-name="java:jboss/datasources/microMySqlDS" />
        
        <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
        	<property name="jtaDataSource" ref="datasource" />
        </bean>
        
        <!-- bean id="txManager" class="org.springframework.orm.jpa.JpaTransactionManager">
        	<property name="entityManagerFactory" ref="entityManagerFactory" />
        </bean-->
        <bean id="txManager" class="org.springframework.transaction.jta.JtaTransactionManager"/>
        
        <tx:annotation-driven transaction-manager="txManager" />
</beans>

snippet 15

The first point that is interesting for us is the choice of TransactionManager. The way I designed the solution, Micro Services are deployed in a Plug and Play fashion and have to discover each other; this is especially critical for joining transactions, which can only be realized with container-managed transactions defined via the Java Transaction API (JTA). For this reason we have to use ‘org.springframework.transaction.jta.JtaTransactionManager’ as the transaction manager, which automatically discovers the underlying Container Transaction Manager of JBoss. Secondly, we have to define a datasource here; our datasource is deployed to JBoss and located over the JNDI tree. (Here we are connecting to a MySQL database, but I will modify the sample to work with the Hypersonic database to be able to run this project with less configuration; I use MySQL here to be able to simulate the cluster feature.)
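As a sketch, such a datasource would be defined in the ‘datasources’ subsystem of JBoss's ‘standalone.xml’; the connection URL, driver name and credentials below are illustrative assumptions, not taken from the sample project:

```xml
<datasource jndi-name="java:jboss/datasources/microMySqlDS" pool-name="microMySqlDS" jta="true">
    <connection-url>jdbc:mysql://localhost:3306/micro</connection-url>
    <driver>mysql</driver>
    <security>
        <user-name>root</user-name>
        <password>${org.salgar.ds.pass}</password>
    </security>
</datasource>
```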

And the ‘persistence.xml’

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="2.0" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
	<persistence-unit name="ORDER_V1_SAR">
		<provider>org.hibernate.ejb.HibernatePersistence</provider>
		<class>org.salgar.order.api.v1.model.Order</class>
		<class>org.salgar.product.api.v1.model.Product</class>
		<class>org.salgar.customer.api.v1.model.Customer</class>

		<properties>
			<property name="hibernate.bytecode.use_reflection_optimizer" value="false" />
			<property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver" />
			<property name="hibernate.connection.password" value="${org.salgar.ds.pass}" />
			<!-- property name="hibernate.connection.url" value="jdbc:h2:data/micro" /-->
			<property name="hibernate.connection.username" value="root" />
			<property name="hibernate.show_sql" value="true" />
			<property name="hibernate.dialect" value="org.hibernate.dialect.MySQLDialect" />
			<property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.JBossAppServerJtaPlatform"/>
		</properties>
	</persistence-unit>
</persistence>

snippet 16

Again the interesting points: first, we have to declare the classes that we want to use with ‘javax.persistence’, in this case ‘Order’, ‘Product’ and ‘Customer’; second, we set ‘hibernate.transaction.jta.platform’ to ‘org.hibernate.service.jta.platform.internal.JBossAppServerJtaPlatform’ for container-managed JTA transactions from JBoss.

One final point: do you remember the HealthCheck interface that OrderService extends? We implemented its functionality in OrderServiceImpl.

@Override
@Transactional(readOnly = true, propagation = Propagation.NEVER)
public String giveAlive() {
	return alive_signal;
}

snippet 17

It is quite primitive, and you can surely think of something better for a production environment, but the general idea is this: Netflix Eureka will periodically call this method and expects to get the ‘alive_signal’ as a correct response before directing traffic to this JMX Bean.

Other than these details, OrderServiceImpl is not too complex; it just implements some basic use cases that you will see in real life.

– Rest Facades
Now we have configured our SARs to implement the Business Cases of our Micro Services. Unfortunately, the libraries on the market that provide Micro Service infrastructure support (mainly Netflix Eureka) do not know anything about JMX; they are built to interact with REST. So remember what I said previously: for orchestration purposes, when several Micro Services communicate they will do this over JMX as long as they are deployed in the same JVM. To outside clients (other JVMs) they will provide their services via these Rest Facades.

For an easy integration with Netflix Eureka, we will implement the Rest Facades as Spring Boot applications (mainly because of the Spring Cloud libraries), but instead of deploying them as standalone Spring Boot applications, we will deploy them as Web Archives (WARs) into JBoss AS.
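I won't reproduce the whole pom.xml of a facade here, but the usual recipe for running Spring Boot in an external container applies; a sketch of the relevant parts could look like the following (versions omitted, and the exact starter artifact ids are an assumption depending on the Spring Cloud release used):

```xml
<packaging>war</packaging>
......
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <!-- the embedded Tomcat must not ship inside the WAR; JBoss provides the Servlet Container -->
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-eureka</artifactId>
</dependency>
```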

For example, in the ‘order_v1_rest’ project, to implement a Spring Boot application we first have to prepare a class that handles the initialization duties….

package org.salgar.service;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.context.web.SpringBootServletInitializer;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.context.annotation.ImportResource;

@SpringBootApplication
@EnableDiscoveryClient
@ImportResource(locations = {"classpath:/META-INF/spring/org/salgar/orderservice/v1/applicationContext.xml"})
public class OrderServiceRestApplication extends SpringBootServletInitializer {
	
	@Override
	protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
		return builder.sources(OrderServiceRestApplication.class);
	}
	
	public static void main(String[] args) {
		SpringApplication.run(OrderServiceRestApplication.class, args);
	}
}

snippet 18

The interesting parts here are the @SpringBootApplication annotation, which configures the Spring Boot application; the @EnableDiscoveryClient annotation, which configures the Netflix Eureka libraries; and the @ImportResource annotation, which loads the application context that is necessary for the discovery of the JMX Services.

Because we are deploying the Spring Boot application as a WAR, we have to extend ‘OrderServiceRestApplication’ from ‘SpringBootServletInitializer’; this ensures that the application will run in a Servlet Container.

Spring Boot also needs some configuration files, first ‘bootstrap.yml’…

spring:
  application:
    name: ${project.artifactId}
  jmx:
    default-domain: ${project.artifactId}
  cloud:
    config:
      uri: ${vcap.services.${PREFIX:}configserver.credentials.uri:http://user:password@localhost:8888}

snippet 19

Some entries here are obvious, like the application name that will show up in Netflix Eureka, but the jmx entry is a little bit tricky. Spring Boot also registers some beans with the JMX Server, and because we will deploy several Micro Services as Spring Boot applications in WARs, these would collide; to prevent this we have to define a namespace with the ‘default-domain’ entry.

And now application.yml,

eureka:
  appinfo:
    replicate:
      interval: 10
  instance:
    leaseRenewalIntervalInSeconds: 10
    metadataMap:
      instanceId: ${vcap.application.instance_id:${spring.application.name}:${spring.application.instance_id:${random.value}}}
  client:
    registryFetchIntervalSeconds: 5
    healthcheck:
      enabled: true
    instanceInfoReplicationIntervalSeconds: 10

snippet 20

This file mainly contains the configuration parameters that are necessary for Netflix Eureka. The most interesting thing for us is the property ‘healthcheck->enabled’; this activates the mechanism by which Netflix Eureka periodically asks the Rest Facades for their health status.

Before we look at what is happening in the Java classes, let's look at the Spring Application Context.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jee="http://www.springframework.org/schema/jee" xmlns:context="http://www.springframework.org/schema/context" xmlns:mvc="http://www.springframework.org/schema/mvc" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc.xsd">
	
	<context:component-scan base-package="org.salgar"/>
	
	<mvc:annotation-driven />
	
	<!-- jee:jndi-lookup id="mbeanServerConnection" jndi-name="jmx/rmi/RMIAdaptor" expected-type="javax.management.MBeanServerConnection" /-->

	<bean id="proxyOrderService" class="org.springframework.jmx.access.MBeanProxyFactoryBean">
		<property name="objectName" value="salgar:name=salgar-order-service-v1,type=org.salgar.order.v1.imp.OrderServiceImp,artifactId=salgar-order-service-v1" />
		<property name="proxyInterface" value="org.salgar.order.api.v1.OrderService" />
		<!-- property name="server" ref="mbeanServerConnection" /-->
	</bean>
</beans>

snippet 21

Once again we see how the Spring JMX libraries can help us: the ‘org.springframework.jmx.access.MBeanProxyFactoryBean’ class automatically discovers the JMX Server, locates the Service ‘salgar:name=salgar-order-service-v1,type=org.salgar.order.v1.imp.OrderServiceImp,artifactId=salgar-order-service-v1’ and proxies it with the interface ‘org.salgar.order.api.v1.OrderService’, so we can inject it into the Rest Facades.

Now let's look at how our Rest Facades look.

import java.util.List;

import javax.inject.Named;

import org.salgar.order.api.v1.OrderService;
import org.salgar.order.api.v1.model.Order;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class OrderServiceRest {
	@Autowired
	@Named("proxyOrderService")
	private OrderService orderService;
	
	@RequestMapping("/order/{orderId}")
	public Order giveOrder(@PathVariable("orderId") Integer id) {
		Order order =  orderService.giveOrder(id);
		
		return order;
	}
	
	@RequestMapping(path = "/save_order", method = RequestMethod.POST)
	public Order saveOrder(@RequestBody Order order) {
		return orderService.saveOrder(order);
	}
	
	@RequestMapping("/customerOrders/{customerId}")
	public List<Order> giveCustomerOrders(@PathVariable("customerId") Integer customerId) {
		List<Order> result =  orderService.giveCustomerOrders(customerId);
		
		return result;
	}
}

snippet 22

For demonstration purposes, this is an extremely simple service. ‘OrderServiceRest’ is annotated with ‘@RestController’ so that Spring configures it as a Rest Service. We also inject our JMX Bean, which does the real work for us; as you remember, the Rest Facade does nothing but delegate the calls to the JMX Bean. Now you might ask: what happens if for some reason the JMX Bean is not available, what will prevent the Rest Facade from crashing? This brings us to our next point. As a principle in Micro Services, the client layer should never communicate with one specific instance of a Rest Facade; we should have composite services which orchestrate the Micro Services, and these orchestration services should get an instance from Netflix Eureka, precisely said from Netflix Ribbon. As I previously mentioned, our JMX Beans have a health check functionality; the moment the health check reports problems, Netflix Eureka marks the Rest Facades into which these JMX Beans are injected with the ‘DOWN’ state. So when an orchestration service asks Netflix Eureka/Ribbon for an instance, it will not deliver any instance that is in the ‘DOWN’ state.

Java code that implements the Health Checks looks like the following.

package org.salgar.service.healthchecker;

import javax.inject.Named;

import org.salgar.healthcheck.RestHealthIndicator;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HealthCheckerV1Factory {
	@Autowired
	@Named("proxyOrderService")
	private org.salgar.order.api.v1.OrderService orderService;
	
	@Bean
	public RestHealthIndicator<org.salgar.order.api.v1.OrderService> getHealthIndicator() {
		return new RestHealthIndicator<org.salgar.order.api.v1.OrderService>(orderService);
	}
}

snippet 23

‘HealthCheckerV1Factory’ autowires the ‘OrderService’ JMX Bean, because whether this Rest Facade gets ‘UP’ or ‘DOWN’ status will be decided depending on the health of that JMX Bean. Then we create another bean with the return type ‘RestHealthIndicator’. Now comes the critical part: ‘RestHealthIndicator’ implements the ‘org.springframework.boot.actuate.health.HealthIndicator’ interface….

package org.salgar.healthcheck;

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;

public class RestHealthIndicator<T extends HealthCheck> implements HealthIndicator {
	private T service;
	
	public RestHealthIndicator(T service) {
		this.service = service;
	}
	
	@Override
	public Health health() {
		try {
			String result = service.giveAlive();
			if(result == null || "".equals(result)) {
				return Health.down().withDetail("result", result).build();
			}
		} catch (Throwable t) {
			return Health.down().withDetail("" + t.getMessage(), t).build();
		}
		return Health.up().build();
	}
}

snippet 24

This is the class that is picked up from the classpath and registered for the health check when the option ‘healthcheck->enabled’ is turned on in ‘application.yml’ — which in our case it is. Now Netflix Eureka will periodically check whether this JMX Bean instance is healthy or not; if not, the instance will be marked ‘DOWN’ and won't be used by the composite services.
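For orientation, Spring Boot's actuator aggregates all registered HealthIndicators into the ‘/health’ endpoint, which is what the Eureka health check ultimately reads. An illustrative response could look like the following — the key names depend on the Spring Boot version and the indicator bean names, so treat this purely as a sketch:

```json
{
  "status": "UP",
  "restHealthIndicator": {
    "status": "UP"
  },
  "diskSpace": {
    "status": "UP"
  }
}
```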

The last point we will concentrate on in ‘order_v1_rest’ is its interaction with JBoss Modular Classloading: we again need access to ‘order_api’, so we load the dependency from the module ‘org.salgar.order.api.1_0’.

<?xml version="1.0"?>
<jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.1">
    <deployment>
        <dependencies>
            <module name="org.salgar.order.api.1_0" />
        </dependencies>
        <exclusions>
        </exclusions>
    </deployment>
</jboss-deployment-structure>

snippet 25

Process/Composite Services
During this whole article you have gotten a good idea of my vision of Process/Composite Services. Micro Services by definition deal with concrete, isolated business cases, like finding/editing/creating products, orders, customers, etc.; if they start accumulating further functionality, that is the path leading us back to Monoliths. So we still need a layer to orchestrate things, like finding the product and the customer and creating an order with them. This layer is our Composite/Process Services; other than the business requirements I cited, it must also fulfill technical requirements like scalability, availability, etc., and the Orchestration Services should help us fulfill those as well.

This is how we reach these goals…

package org.salgar.process.service;

.........

@Order(Ordered.HIGHEST_PRECEDENCE)
@RestController
@Transactional
@TransactionalFanout( services = {"proxyProductServiceV1" , "proxyOrderServiceV1", 
		"proxyCustomerServiceV1"})
public class ProcessService {
	private final static Log LOG = LogFactory.getLog(ProcessService.class);
	private boolean routeRestProductV1 = false;
	private boolean routeRestProductV2 = false;
	private boolean routeRestOrderV1 = false;
	private boolean routeRestOrderV2 = false;
	private boolean routeRestCustomerV1 = false;
	private boolean routeRestCustomerV2 = false;
	
	@Autowired(required = false)
	@Named("proxyProductServiceV1")
	private org.salgar.product.api.v1.ProductService productServiceV1;

	@Autowired(required = false)
	@Named("proxyProductServiceV2")
	private org.salgar.product.api.v2.ProductService productServiceV2;
	
	@Autowired(required = false)
	@Named("proxyOrderServiceV1")
	private org.salgar.order.api.v1.OrderService orderServiceV1;
	
	@Autowired(required = false)
	@Named("proxyOrderServiceV2")
	private org.salgar.order.api.v2.OrderService orderServiceV2;
	
	@Autowired(required = false)
	@Named("proxyCustomerServiceV1")
	private org.salgar.customer.api.v1.CustomerService customerServiceV1;
	
	@Autowired(required = false)
	@Named("proxyCustomerServiceV2")
	private org.salgar.customer.api.v2.CustomerService customerServiceV2;
	
	@Autowired
	private ProcessFacade processFacade;
	
	@PostConstruct
	private void defineRoutes() {
		if(productServiceV1 == null) {
			routeRestProductV1 = true;
		} else {
			try {
				String healthCheck = productServiceV1.giveAlive();
				if(healthCheck == null || "".equals(healthCheck)) {
					routeRestProductV1 = true;
				}
			} catch (Throwable t) {
				LOG.error(t.getMessage(), t);
				routeRestProductV1 = true;
			}
		}
		
		if(productServiceV2 == null) {
			routeRestProductV2 = true;
		} else {
			try {
				String healthCheck = productServiceV2.giveAlive();
				if(healthCheck == null || "".equals(healthCheck)) {
					routeRestProductV2 = true;
				}
			} catch (Throwable t) {
				LOG.error(t.getMessage(), t);
				routeRestProductV2 = true;
			}
		}
		
		if(orderServiceV1 == null) {
			routeRestOrderV1 = true;
		} else  {
			try {
				String healthCheck = orderServiceV1.giveAlive();
				if(healthCheck == null || "".equals(healthCheck)) {
					routeRestOrderV1 = true;
				}
			} catch (Throwable t) {
				LOG.error(t.getMessage(), t);
				routeRestOrderV1 = true;
			}
		}
		
		........
	}

	@RequestMapping("/product/v1/{productId}")
	@Transactional(readOnly = true)
	public org.salgar.product.api.v1.model.Product getProductV1(@PathVariable int productId)
			throws JsonParseException, JsonMappingException, IOException {
		if (routeRestProductV1) {
			return processFacade.executeFallBackProductV1(productId);
		}

		org.salgar.product.api.v1.model.Product result = processFacade.giveProductV1(productId);

		return result;
	}
	
	@RequestMapping(path = "/product/v1/saveProduct", method = RequestMethod.POST)
	@Transactional(readOnly = true)
	public org.salgar.product.api.v1.model.Product saveProductV1(@RequestBody org.salgar.product.api.v1.model.Product product)
			throws JsonParseException, JsonMappingException, IOException {
		if (routeRestProductV1) {
			return processFacade.executeFallBackSaveProductV1(product);
		}

		org.salgar.product.api.v1.model.Product result = processFacade.saveProductV1(product);

		return result;
	}

	@RequestMapping("/product/v2/{productId}")
	@Transactional(readOnly = true)
	public org.salgar.product.api.v2.model.Product getProductV2(@PathVariable int productId) throws JsonParseException, JsonMappingException, IOException {
		if (routeRestProductV2) {
			return processFacade.executeFallBackProductV2(productId);
		}
		
		org.salgar.product.api.v2.model.Product result = processFacade.giveProductV2(productId);

		return result;
	}
	
	@RequestMapping(path = "/product/v2/saveProduct", method = RequestMethod.POST)
	@Transactional(readOnly = true)
	public org.salgar.product.api.v2.model.Product saveProductV2(@RequestBody org.salgar.product.api.v2.model.Product product) throws JsonParseException, JsonMappingException, IOException {
		if (routeRestProductV2) {
			return processFacade.executeFallBackSaveProductV2(product);
		}
		
		org.salgar.product.api.v2.model.Product result = processFacade.saveProductV2(product);

		return result;
	}
	
	@RequestMapping("/order/v1/{orderId}")
	@Transactional(readOnly = true)
	public org.salgar.order.api.v1.model.Order giveOrderV1(@PathVariable int orderId) throws JsonParseException, JsonMappingException, IOException {
		if (routeRestOrderV1) {
			return processFacade.executeFallBackGiveOrderV1(orderId);
		}
		
		org.salgar.order.api.v1.model.Order result = processFacade.giveOrderV1(orderId);

		return result;
	}
	
	.......
	
	@RequestMapping(path = "/saveOrderWProductWCustomer/v2", method = RequestMethod.POST)
	@Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
	public void saveOrderV2WithProductWithCustomer(@RequestBody org.salgar.process.context.v2.OrderContext orderContext) throws JsonParseException, JsonMappingException, IOException {
		org.salgar.customer.api.v2.model.Customer customerInternal = null;
		
		if (routeRestCustomerV2) {
			customerInternal = processFacade.executeFallBackGiveCustomerV2(orderContext.getCustomer().getId());
		} else {
			customerInternal = processFacade.giveCustomerV2(orderContext.getCustomer().getId());
		}
		org.salgar.product.api.v2.model.Product productInternal;
		if (routeRestProductV2) {
			productInternal = processFacade.executeFallBackProductV2(orderContext.getProduct().getProductId());
		} else {
			productInternal = processFacade.giveProductV2(orderContext.getProduct().getProductId());
		}
		
		List<org.salgar.product.api.v2.model.Product> products = new ArrayList<org.salgar.product.api.v2.model.Product>();
		products.add(productInternal);
		orderContext.getOrder().setProducts(products);
		orderContext.getOrder().setCustomer(customerInternal);
		
		if (routeRestOrderV2) {
			processFacade.executeFallBackSaveOrderV2(orderContext.getOrder());
		} else {
			processFacade.saveOrderV2(orderContext.getOrder());
		}
	}
	
	@RequestMapping(path = "/saveOrderWProductWCustomerTransactionProof/v1", method = RequestMethod.POST)
	@Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
	public void saveOrderWithProductWithCustomerTransactionProof(@RequestBody org.salgar.process.context.v1.OrderContext orderContext) throws JsonParseException, JsonMappingException, IOException {
		org.salgar.customer.api.v1.model.Customer customerInternal = null;
		
		if (routeRestCustomerV1) {
			customerInternal = processFacade.executeFallBackSaveCustomerV1(orderContext.getCustomer());
		} else {
			customerInternal = processFacade.saveCustomerV1(orderContext.getCustomer());
		}
		org.salgar.product.api.v1.model.Product productInternal;
		if (routeRestProductV1) {
			productInternal = processFacade.executeFallBackSaveProductV1(orderContext.getProduct());
		} else {
			productInternal = processFacade.saveProductV1(orderContext.getProduct());
		}
		
		List<org.salgar.product.api.v1.model.Product> products = new ArrayList<org.salgar.product.api.v1.model.Product>();
		products.add(productInternal);
		orderContext.getOrder().setProducts(products);
		orderContext.getOrder().setCustomer(customerInternal);
		
		throw new RuntimeException("Fake exception to prove transaction feature!");
	
		
//		if (routeRestOrderV1) {
//			processFacade.executeFallBackSaveOrderV1(orderContext.getOrder());
//		} else {
//			processFacade.saveOrderV1(orderContext.getOrder());
//		}
	}
}

snippet 26

Above you see the ‘ProcessService’, our orchestration layer, which orchestrates the Customer, Product and Order Micro Services (there is also a notion of versioning, which I will explain more explicitly in a later chapter).

The first thing to notice here is that we inject our JMX Beans and not the RestFacades, for transaction and performance reasons: we want the orchestration of the Micro Services to happen over binary protocols, and we want to avoid the serialization costs of REST.

Please pay close attention to the annotation ‘@Autowired(required = false)’: we are telling Spring to inject the JMX Bean of our Micro Service if it can find one. If not, it will silently continue, which provides the plug-and-play ability of our system. If we decide that performance and transactions are not important for this Process/Composite Service and do not install the JMX Bean into this JVM, it will simply communicate over REST.

Now let's assume these JMX Beans are installed in the JVM; then we have to decide whether they are healthy or not. We do this in the ‘defineRoutes()’ method, which is annotated with ‘@PostConstruct’, guaranteeing that it runs after Spring has initialized this Process/Composite Service. That means that if the JMX Beans were injected, we can run their health checks. This is done by calling the ‘giveAlive()’ method of the ‘HealthCheck’ interface, which all of our JMX Beans implement. If the response is positive, all calls are routed to the JMX Beans; otherwise they go to a Rest Facade that we find via the Eureka discovery client. Please note that we are not setting the state of our Process/Composite Service to DOWN or UP in Eureka (yes, as you will shortly see, the Process/Composite Service is also registered in Eureka); we assume that there is a temporary problem with our JMX Beans and that they can recover, so we don't need to take the Process/Composite Service DOWN. It will internally route the calls to healthy RestFacade instances that exist somewhere in the Eureka cluster.
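The routing decision described above can be sketched in plain Java. The interface name ‘HealthCheck’, the method ‘giveAlive()’ and the route flags come from the article; the boolean return type and the exact decision logic are assumptions for illustration.

```java
// A plain-Java sketch of the health-check based routing decision.
public class RouteSketch {

    // Every JMX bean of a Micro Service implements this interface
    // (boolean return type of giveAlive() is assumed here).
    public interface HealthCheck {
        boolean giveAlive();
    }

    // Returns true when calls must be routed over REST, false when the
    // locally installed JMX bean can be called directly.
    public static boolean routeOverRest(HealthCheck jmxBean) {
        if (jmxBean == null) {
            // Bean was not installed in this JVM ('@Autowired(required = false)'
            // left the field null) -> fall back to the Rest Facade.
            return true;
        }
        try {
            // Healthy bean -> keep the call in-JVM over the binary protocol.
            return !jmxBean.giveAlive();
        } catch (RuntimeException e) {
            // The health check itself failed -> route over REST.
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(routeOverRest(null));        // true
        System.out.println(routeOverRest(() -> true));  // false
    }
}
```

In the real ProcessService this decision runs once in ‘defineRoutes()’ and its result is stored in the ‘routeRest…’ flags checked by every endpoint.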

Until now we looked at the part of the problem that concentrates on availability and performance, but what about transactions? If we don't have the JMX Beans, we can't guarantee that our Micro Service will be able to join a transaction. Here my custom annotation ‘@TransactionalFanout(services = {“proxyProductServiceV1”, “proxyOrderServiceV1”, “proxyCustomerServiceV1”})’ comes into play. If we place this annotation on a Process/Composite Service, the Eureka health check behaves differently (I had to customize the Eureka/Spring Boot code to make this work, which I will explain in a later chapter): it now executes a health check against all three JMX Beans defined in the annotation, and if any of them does not report healthy, it puts this Process/Composite Service into DOWN status in Eureka until the bean recovers and reports healthy again. Does this mean that other functionality not depending on these JMX Beans will be unavailable as well? Yes, but if this is a problem for you, you have to design your Process/Composite Services so that they do not contain functionality unrelated to the transactional functionality; for us, protecting transactional integrity has the higher priority.

Ok, now that we have discussed the health checks, let's look at how we deal with the Business Cases. Let's observe the methods that deliver us a Product.

	@RequestMapping("/product/v1/{productId}")
	@Transactional(readOnly = true)
	public org.salgar.product.api.v1.model.Product getProductV1(@PathVariable int productId)
			throws JsonParseException, JsonMappingException, IOException {
		if (routeRestProductV1) {
			return processFacade.executeFallBackProductV1(productId);
		}

		org.salgar.product.api.v1.model.Product result = processFacade.giveProductV1(productId);

		return result;
	}

snippet 27

The first thing that draws our attention is how we route the requests: if our JMX Beans are not healthy, we route the requests to the Fallback/RestFacade methods. You can also observe that this RestController delegates all the business logic to a ProcessFacade. This of course keeps the design clean from the perspective of the Rest and Transaction annotations, so we are not polluting the Business Case implementation, but there is another reason, called Hystrix.

Hystrix is a Circuit Breaker framework. If your system is unhealthy and causing errors while you continuously call its functionality, you will at some point deplete the resources of the system, memory- and thread-wise. If Hystrix detects such a condition, it will break the circuit so calls no longer queue up at the failing interface, and keep it open until the system recovers (it periodically sends requests to test whether the system has recovered). In the meantime Hystrix offers another mechanism, the ‘fallbackMethod’: in the Hystrix annotation we can define an alternative method to call if the one we are calling is continuously failing. We will use this feature, instead of calling our failing JMX Bean instance, to find a healthy instance via Netflix Eureka through the Rest Facade. This way we again fulfill the high availability principle of Micro Services: instead of being stuck, the Process/Composite Service will find a working instance in our Cluster/Cloud.

There are two points we have to be careful about. First, we can't let the Hystrix mechanism engage for business errors, for example when a user does not have permission to call the Product Service; Hystrix should only react to technical errors like connection timeouts. Luckily there is a way to configure this in Hystrix: we wrap all of our business errors in a special exception type (‘HystrixBadRequestException’), which lets Hystrix know that it should not call the ‘fallbackMethod’ for these exceptions. (Another way to guarantee that Hystrix does not react to business exceptions is the ‘ignoreExceptions’ property of the ‘@HystrixCommand’ annotation; all the exceptions declared in this property are treated as business exceptions and Hystrix will not call the fallback method.)

The second sticking point: Hystrix can't call the ‘fallbackMethod’ if the method calls several Micro Services under one transaction context, because in that case we can't guarantee transactional integrity; the JTA container can't maintain transactional integrity if the services are called over the Rest Facades.

Now we can observe the ‘giveProductV1’ method on the ProcessFacade.

@Component
public class ProcessFacadeImpl implements ProcessFacade {
	private static final Logger LOG = LoggerFactory.getLogger(ProcessFacadeImpl.class);
	
	@Autowired(required = false)
	@Named("proxyProductServiceV1")
	private org.salgar.product.api.v1.ProductService productServiceV1;

	.......

	@Autowired
	private LoadBalancerClient loadBalancerClient;

	private RestTemplate restTemplate = new RestTemplate();

	@Override
	@HystrixCommand(fallbackMethod = "executeFallBackProductV1", commandProperties = {
			@HystrixProperty(name = "execution.isolation.strategy", value = "SEMAPHORE"),
			@HystrixProperty(name = "circuitBreaker.requestVolumeThreshold", value = "1"),
			@HystrixProperty(name = "circuitBreaker.sleepWindowInMilliseconds", value = "10000") })
	public org.salgar.product.api.v1.model.Product giveProductV1(int productId)
			throws JsonParseException, JsonMappingException, IOException {
		org.salgar.product.api.v1.model.Product result = productServiceV1.giveProduct(productId);

		return result;
	}

	@Override
	public org.salgar.product.api.v1.model.Product executeFallBackProductV1(int productId)
			throws JsonParseException, JsonMappingException, IOException {
		ServiceInstance instance = loadBalancerClient.choose("product_v1_rest");

		URI uri = instance.getUri();
		String url = uri.toString() + "/product_v1_rest/product/" + productId;

		ResponseEntity<String> result = restTemplate.getForEntity(url, String.class);

		ObjectMapper mapper = new ObjectMapper();
		org.salgar.product.api.v1.model.Product product = mapper.readValue(result.getBody(),
				org.salgar.product.api.v1.model.Product.class);

		return product;
	}
.......
}

snippet 28

At its core, this method does nothing other than calling the JMX Bean, where the whole business logic lies. What is interesting here is the configuration of the @HystrixCommand annotation and the implementation of the ‘fallbackMethod’, ‘executeFallBackProductV1’.

@HystrixCommand gets four configuration values from us. The first is the ‘fallbackMethod’ we already discussed. The second is the @HystrixProperty ‘execution.isolation.strategy’, which is very critical for us: by default Hystrix internally uses a thread pool to realize the circuit-breaking functionality, which means that when our execution thread calls a method annotated with @HystrixCommand, it stops there and execution continues on a Hystrix thread; as a consequence, any ThreadLocal value bound to our execution thread is not available there. And we do have a very important value kept on our execution thread, because we used the @Transactional annotation in our Process/Composite Service. If we followed the normal processing rules of Hystrix, transactions would not work; but if we set the value “SEMAPHORE” for “execution.isolation.strategy”, Hystrix executes the command on the calling thread itself, so the ThreadLocal values of our execution thread remain visible. This costs us Hystrix's thread-level isolation, but we are developing Micro Services, so we can always scale.

The third Hystrix property is ‘circuitBreaker.requestVolumeThreshold’, the number of errors before Hystrix breaks the circuit. In other use cases of @HystrixCommand one may want Hystrix to wait for a number of failures before breaking the circuit, but in our transactional system we want to break the circuit and redirect the traffic immediately when we see errors on an interface.

The fourth Hystrix property, ‘circuitBreaker.sleepWindowInMilliseconds’, controls after how many milliseconds Hystrix checks whether the interface has healed itself; in our case we set it to 10s, but it could be much lower.

One final point to make here: as I mentioned previously, we have to treat our business exceptions differently than technical exceptions; we have to wrap our business exceptions in ‘HystrixBadRequestException’ if we don't want Hystrix to call the ‘fallbackMethod’ for a business exception.
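The wrapping rule can be sketched as follows. Note that ‘HystrixBadRequestException’ is redefined locally here as a stand-in for com.netflix.hystrix.exception.HystrixBadRequestException, and ‘BusinessException’ is a hypothetical business error; both are defined in the sketch only so the example stays self-contained.

```java
// Self-contained sketch of the business/technical exception split.
public class ExceptionWrappingSketch {

    // Local stand-in for com.netflix.hystrix.exception.HystrixBadRequestException.
    public static class HystrixBadRequestException extends RuntimeException {
        public HystrixBadRequestException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    // Hypothetical business error of a Micro Service.
    public static class BusinessException extends RuntimeException {
        public BusinessException(String message) {
            super(message);
        }
    }

    // Business errors are wrapped so Hystrix neither counts them for the
    // circuit breaker nor calls the fallbackMethod; technical errors (e.g.
    // connection timeouts) pass through unchanged and may trip the circuit.
    public static RuntimeException translate(RuntimeException e) {
        if (e instanceof BusinessException) {
            return new HystrixBadRequestException(e.getMessage(), e);
        }
        return e;
    }
}
```

In the real services this translation would happen inside the @HystrixCommand-annotated methods, before the exception reaches Hystrix.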

Now let's look at the implementation of the ‘fallbackMethod’s. As I mentioned previously, if for any reason the calls to the JMX Beans fail, we want Hystrix to detect this and redirect them to the RestFacades. So the question now is: how are we going to find healthy Rest Facades? There is another Netflix component that can help us, Netflix Ribbon. Netflix Ribbon is a load balancer that is in continuous contact with Netflix Eureka and keeps track of the healthy Rest Facades.

To be able to use Netflix Ribbon, we inject the ‘LoadBalancerClient’ into ‘ProcessFacadeImpl’ via autowiring. When Hystrix calls the ‘fallbackMethod’, it contacts the ‘LoadBalancerClient’ and asks for a healthy instance by client name.

ServiceInstance instance = loadBalancerClient.choose("product_v1_rest");

snippet 29

The name of the Rest Facade, here ‘product_v1_rest’, is the same name we use in the ‘application.yml’ of the ‘product_v1_rest’ Rest Facade.
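For reference, the relevant part of such an ‘application.yml’ could look like the following (an assumed layout; only the application name, which Ribbon resolves via Eureka, and the health check property discussed later are shown):

```yaml
# Hypothetical application.yml of the product_v1_rest Rest Facade
spring:
  application:
    name: product_v1_rest   # the name loadBalancerClient.choose(...) asks for
eureka:
  client:
    healthcheck:
      enabled: true         # let Eureka use the Spring Boot health checks
```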

Now that we have explained the basics of the ‘ProcessService’ orchestration layer, let's look at some problematic areas, like transaction behavior over multiple Micro Services.

Please look at the code snippet below,

	@RequestMapping(path = "/saveOrderWProductWCustomer/v2", method = RequestMethod.POST)
	@Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
	public void saveOrderV2WithProductWithCustomer(@RequestBody org.salgar.process.context.v2.OrderContext orderContext) throws JsonParseException, JsonMappingException, IOException {
		org.salgar.customer.api.v2.model.Customer customerInternal = null;
		
		if (routeRestCustomerV2) {
			customerInternal = processFacade.executeFallBackGiveCustomerV2(orderContext.getCustomer().getId());
		} else {
			customerInternal = processFacade.giveCustomerV2(orderContext.getCustomer().getId());
		}
		org.salgar.product.api.v2.model.Product productInternal;
		if (routeRestProductV2) {
			productInternal = processFacade.executeFallBackProductV2(orderContext.getProduct().getProductId());
		} else {
			productInternal = processFacade.giveProductV2(orderContext.getProduct().getProductId());
		}
		
		List<org.salgar.product.api.v2.model.Product> products = new ArrayList<org.salgar.product.api.v2.model.Product>();
		products.add(productInternal);
		orderContext.getOrder().setProducts(products);
		orderContext.getOrder().setCustomer(customerInternal);
		
		if (routeRestOrderV2) {
			processFacade.executeFallBackSaveOrderV2(orderContext.getOrder());
		} else {
			processFacade.saveOrderV2(orderContext.getOrder());
		}
	}

snippet 30

As it stands, even if things go wrong with one of the Micro Services that we orchestrate, this method will call the fallback methods/Rest Facade to accomplish the orchestration. If you observe closely you will notice that ‘processFacade.giveCustomerV2’ and ‘processFacade.giveProductV2’ are read methods, so the information read from the SAR or the Rest Facade has no negative effect on the transaction. The only thing that has an effect is ‘processFacade.saveOrderV2’, but even here, if this fails, the ‘Order’ object will be rolled back, whether in the SAR or in the Rest Facade.

Ok, it is easy if we orchestrate several reads with one write, but what if we have several save operations, like in the code below?

@RequestMapping(path = "/saveOrderAndProductAndCustomer/v2", method = RequestMethod.POST)
@Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
public void saveOrderV2AndProductAndCustomer(@RequestBody org.salgar.process.context.v2.OrderContext orderContext) throws JsonParseException, JsonMappingException, IOException {
		org.salgar.customer.api.v2.model.Customer customerInternal = null;
		
		customerInternal = processFacade.saveCustomerV2(orderContext.getCustomer());
		
		org.salgar.product.api.v2.model.Product productInternal;
		productInternal = processFacade.saveProductV2(orderContext.getProduct());
		
		
		List<org.salgar.product.api.v2.model.Product> products = new ArrayList<org.salgar.product.api.v2.model.Product>();
		products.add(productInternal);
		orderContext.getOrder().setProducts(products);
		orderContext.getOrder().setCustomer(customerInternal);
		
		processFacade.saveOrderV2(orderContext.getOrder());
	}

snippet 31

In this method we can't delegate the call to the Rest Facade, and we can't let Hystrix call the fallback methods, because we can't guarantee transactional integrity. An alternative would be to remove the @HystrixCommand annotation from the ‘saveCustomerV2’, ‘saveProductV2’ and ‘saveOrderV2’ methods, but we don't want that either, because when these methods are not participating in an orchestration it is totally legitimate for them to call the fallback method.

What we need is a different behavior from the ‘@HystrixCommand’ annotation: if we are in the middle of an orchestration, it should not call the fallback methods. To achieve this I created two new annotations and also patched the Hystrix libraries.

The first of the annotations is ‘@TransactionalOrchestration’, which is used on methods of the ProcessService to signal that we are orchestrating several Micro Services under these methods; every Hystrix annotation running in this context should behave differently.

This brings us to the second annotation, ‘@TransactionalHystrixCommand’, which is a modified version of ‘@HystrixCommand’, for which I had to patch the Hystrix libraries.

Here is the problematic place in the Hystrix code, in the GenericCommand class…

    Object process(AbstractHystrixCommand.Action action) throws Exception {
        try {
            Object result = action.execute();
            this.flushCache();
            return result;
        } catch (CommandActionExecutionException var5) {
            Throwable cause = var5.getCause();
            if(this.isIgnorable(cause)) {
                throw new HystrixBadRequestException(cause.getMessage(), cause);
            } else if(cause instanceof Exception) {
                throw (Exception)cause;
            } else {
                throw Throwables.propagate(cause);
            }
        }
    }

snippet 32

As you can see, in the case of an error Hystrix checks whether the exception is one of the exceptions defined in the ‘ignoreExceptions’ attribute; if so, Hystrix treats it as a business exception and does not call the fallback method, wrapping the exception in ‘HystrixBadRequestException’. For our transactional orchestrations we don't want a fallback to be called at all, so we catch and modify the exception if it is thrown in a ‘@TransactionalOrchestration’ context.

So how will the Hystrix annotation know this, you ask? The standard @HystrixCommand can't know it; for this we had to implement ‘@TransactionalHystrixCommand’. There are a bunch of classes we had to modify, which you can see in the ‘hystrix_patch’ project. Basically we had to create a new aspect that pointcuts ‘@TransactionalHystrixCommand’ and implement ‘TransactionalGenericCommand’, the main difference being…

        } catch (CommandActionExecutionException var5) {
            Throwable cause = var5.getCause();
            if(this.isIgnorable(cause)) {
                throw new HystrixBadRequestException(cause.getMessage(), cause);
            } 
            Boolean isTransactionalOrchestration = TransactionalOrchestrationThreadLocal.getTransactionalOrchestrationThreadLocal();
            if(isTransactionalOrchestration != null && isTransactionalOrchestration.booleanValue()) {
            	throw new HystrixBadRequestException(cause.getMessage(), cause);
            } else if(cause instanceof Exception) {
                throw (Exception)cause;
            } else {
                throw Throwables.propagate(cause);
            }
        }

snippet 33

‘TransactionalOrchestrationThreadLocal.getTransactionalOrchestrationThreadLocal()’ here is an access to a ThreadLocal variable which we set via the ‘@TransactionalOrchestration’ annotation through an aspect, which you can see in the ‘annotation-api’ project.

@Aspect
public class TransactionOrchestrationAspect {
     public TransactionOrchestrationAspect() {
     }

     @Pointcut("@annotation(org.salgar.micro.annotation.TransactionalOrchestration)")
     public void TransactionOrchestrationPointcut()
     {
     }
     
     @Around("TransactionOrchestrationPointcut()")
     public Object methodsAnnotatedWithTransactionalOrchestration(ProceedingJoinPoint joinPoint) throws Throwable {
          Object result = null;
          try {
              TransactionalOrchestrationThreadLocal.setTransactionalOrchestrationThreadLocal(Boolean.TRUE);
              result = joinPoint.proceed();
          } finally {
              TransactionalOrchestrationThreadLocal.setTransactionalOrchestrationThreadLocal(Boolean.FALSE);
          }
          return result;
     }
}

snippet 34

So at the end, the method ‘saveOrderV2AndProductAndCustomer’ on ProcessService will look like…

@RequestMapping(path = "/saveOrderAndProductAndCustomer/v2", method = RequestMethod.POST)
@Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
@TransactionalOrchestration
public void saveOrderV2AndProductAndCustomer(@RequestBody org.salgar.process.context.v2.OrderContext orderContext) throws JsonParseException, JsonMappingException, IOException {
    .....
}

snippet 35

and, for example, the method ‘saveOrderV2’ on ProcessFacadeImpl…

@Override
@TransactionalHystrixCommand(fallbackMethod = "executeFallBackSaveOrderV2", commandProperties = {
		@HystrixProperty(name = "execution.isolation.strategy", value = "SEMAPHORE"),
		@HystrixProperty(name = "circuitBreaker.requestVolumeThreshold", value = "1"),
		@HystrixProperty(name = "circuitBreaker.sleepWindowInMilliseconds", value = "10000") })
public org.salgar.order.api.v2.model.Order saveOrderV2(org.salgar.order.api.v2.model.Order order) throws JsonParseException, JsonMappingException, IOException {
	.........
}

snippet 36

– sar_utility
This project contains the necessary utility class to provide Spring initialization mechanisms to the JBoss Service Archives.

Personally, I didn't want to pull JMX-specific classes into the project and wanted to use ‘convention over configuration’. For this purpose JBoss needs some Java interfaces created with a certain naming convention, like the following…

package org.salgar.mbean;

public interface SpringInitializerMBean {
	void test();
}

snippet 37

The ‘MBean’ suffix is a must for JBoss to recognize it as a JMX Bean; the implementation then looks like the following…

public class SpringInitializer implements SpringInitializerMBean {
	private static final Log LOG = LogFactory.getLog(SpringInitializer.class);
	private static final String SPRING_CTX = "classpath:/META-INF/jboss-spring.xml";
	private Object /*ConfigurableApplicationContext*/ ctx;
	
	public SpringInitializer() {
		System.out.println("starting");
	}
	
	public void start() throws Exception {
		System.out.println("starting");
		installApplicationContext();
	}
	
	public void stop() throws Exception {
		closeApplicationContext();
	}
	
	@SuppressWarnings("rawtypes")
	private void installApplicationContext() {
        try {
            Class contextClass = Class.forName("org.springframework.context.support.ClassPathXmlApplicationContext");

            @SuppressWarnings("unchecked")
			Constructor constructor = contextClass.getConstructor(new Class[] {String.class});
            Object tmpCtx = constructor.newInstance(new Object[] { SPRING_CTX });
            
            //ConfigurableApplicationContext tmpCtx = new ClassPathXmlApplicationContext(SPRING_CTX);
            if (tmpCtx != null) {
                //log(this.serviceName+" activate new applicationContext");
                ctx = tmpCtx;
            }
        } catch (Throwable e) {
            LOG.error(" Unable to load applicationContext '" + SPRING_CTX + "'. keeping existing context. Reason: " + e.getMessage(), e);
        }
    }

    private void closeApplicationContext() {
        if (ctx != null) {
            try {
                Method close = ctx.getClass().getMethod("close", null);
                close.invoke(ctx, null); //ctx.close();
                //log("applicationContext closed.");
            } catch (Throwable e) {
                //log.error("Unable to close applicationContext '" + SPRING_CTX + "'. Reason: " + e
                //        + ". Restart jboss if possible.");
            }
        }
    }

	@Override
	public void test() {
		System.out.println("Starting...");
		
	}
}

snippet 38

Again as a convention, JBoss looks for ‘start’ and ‘stop’ methods. The ‘start’ method searches for a specific file, ‘classpath:/META-INF/jboss-spring.xml’, in the classpath, which indicates how the Spring context will be started. This file must be in the classpath of every JBoss Service Archive that should have access to Spring functionality.
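A minimal ‘jboss-spring.xml’ could look like the following; this is a hypothetical sketch, and the bean class name is invented for illustration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical minimal context definition; the real file lives in the
     SAR's META-INF and defines the beans of that Micro Service. -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="productServiceV1" class="org.salgar.product.impl.ProductServiceImpl"/>
</beans>
```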

-Annotation
This is the project in which we defined the annotations that support our transactional requirements.

First of all there is the ‘@TransactionalFanout’ annotation for the Process/Orchestration layer, containing the information about which JBoss SARs are necessary for this Process Service to be declared healthy and able to join a transaction. If any of these SARs is not in a healthy state, this will take the instance of the Process Service DOWN in Eureka.

The annotation looks like the following…

@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE})
@Documented
public @interface TransactionalFanout {
	String[] services();
}

snippet 39

Now let's look at the annotation processor that configures the health checker for the Process/Orchestration layer…

@Component
@Order(Ordered.HIGHEST_PRECEDENCE)
public class TransactionalFanoutProcessor implements BeanPostProcessor {
	private ConfigurableListableBeanFactory configurableListableBeanFactory;
	
	@Autowired
	public TransactionalFanoutProcessor(ConfigurableListableBeanFactory configurableListableBeanFactory) {
		this.configurableListableBeanFactory = configurableListableBeanFactory;
	}

	@Override
	public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
		boolean result = bean.getClass().isAnnotationPresent(TransactionalFanout.class);
		
		if(result) {
			TransactionalFanout transactionalFanout = bean.getClass().getAnnotation(TransactionalFanout.class);
			List<HealthCheck> services = new ArrayList<>();
			
			for (String  serviceName : transactionalFanout.services()) {
				HealthCheck healthCheck = (HealthCheck) configurableListableBeanFactory.getBean(serviceName);
				services.add(healthCheck);
			}
			
			ProcessHealthIndicatorImpl processHealthIndicator = new ProcessHealthIndicatorImpl();
			processHealthIndicator.setServices(services);
			configurableListableBeanFactory.registerSingleton(bean.getClass().getSimpleName() + "ProcessHealthIndicator", processHealthIndicator);
		}
		return bean;
	}

	@Override
	public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
		return bean;
	}
}

snippet 40

As you can see, this class implements the ‘BeanPostProcessor’ interface, which signals Spring that this class is going to configure the health checks in the places where ‘@TransactionalFanout’ is present. After this discovery, it locates the ‘HealthCheck’ beans for the service names indicated in the annotation (the HealthCheck beans are already created by the JBoss SARs; we are only reusing them), creates a ‘ProcessHealthIndicatorImpl’ bean, populates it with the HealthCheck beans of the services and registers it as a singleton in the Spring context, so that it can report the health status of the Process/Service layer to Eureka.

One additional annotation that we defined in this project is ‘@TransactionalOrchestration’, which marks for the Hystrix framework that the transaction occurring in this context extends over several Micro Services, so Hystrix can't decide in its normal operation mode whether to call fallback methods or not. For this, in the places where this annotation is set, we also set a ThreadLocal signaling that a transactional orchestration is occurring.

This happens with the following AspectJ aspect…

@Aspect
public class TransactionalOrchestrationAspect {
   @Pointcut("@annotation(org.salgar.annotation.TransactionalOrchestration)")
   public void transactionalOrchestrationAnnotationPointcut(){
   }

   @Around("transactionalOrchestrationAnnotationPointcut()")
   public Object aroundTransactionalOrchestration(ProceedingJoinPoint joinPoint) throws Throwable {
      Object result = null;
      try {
          TransactionalOrchestrationThreadLocal.setTransactionalOrchestrationThreadLocal(Boolean.TRUE);
          result = joinPoint.proceed();
      } finally {
          TransactionalOrchestrationThreadLocal.setTransactionalOrchestrationThreadLocal(Boolean.FALSE);
      }
      return result;
   }
}

snippet 41

As you can see, this aspect only ensures that the ThreadLocal is set to true before the method execution and to false after it.
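The ThreadLocal holder the aspect writes to is not shown in the article; a minimal sketch could look like the following (an assumed reconstruction matching the method names used in the snippets; the real class lives in the ‘annotation-api’ project):

```java
// Minimal sketch of the ThreadLocal holder used by the orchestration aspect.
public class TransactionalOrchestrationThreadLocal {

    // Per-thread flag: is a transactional orchestration currently running?
    private static final ThreadLocal<Boolean> ORCHESTRATION_ACTIVE = new ThreadLocal<>();

    public static void setTransactionalOrchestrationThreadLocal(Boolean active) {
        ORCHESTRATION_ACTIVE.set(active);
    }

    public static Boolean getTransactionalOrchestrationThreadLocal() {
        return ORCHESTRATION_ACTIVE.get();
    }
}
```

Because we run Hystrix with the SEMAPHORE isolation strategy, the command executes on the calling thread, so the flag set by the aspect is visible inside ‘TransactionalGenericCommand’.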

-eureka_patch
In the annotation project you saw that we use the ‘@TransactionalFanout’ annotation to signal that a Process/Orchestration bean should have health checks and register itself in Eureka, and that we then register the ‘ProcessHealthIndicatorImpl’ class for the health checks. Normally Spring Boot has a configuration class that turns on the health check functionality when an instance of ‘ProcessHealthIndicator’ is present in the Spring context. Unfortunately, we can only insert this bean once the annotation processor has executed, and that is too late for the Spring Boot configuration class.

The original ‘EurekaDiscoveryClientConfiguration’ class contains the inner class ‘EurekaHealthCheckHandlerConfiguration’ shown below, whose conditional annotations cause the health check not to activate, because our bean is not yet in the Spring context when ‘EurekaDiscoveryClientConfiguration’ initializes (this happens earlier than the execution of the ‘BeanPostProcessor’). So I had to modify the class, create a new ‘CompositeEurekaDiscoverClientConfiguration’ and remove the blocking condition.

@Configuration
@ConditionalOnProperty(value = "eureka.client.healthcheck.enabled", matchIfMissing = false)
protected static class EurekaHealthCheckHandlerConfiguration {

		@Autowired(required = false)
		private HealthAggregator healthAggregator = new OrderedHealthAggregator();

		@Bean
		@ConditionalOnMissingBean(HealthCheckHandler.class)
		public EurekaHealthCheckHandler eurekaHealthCheckHandler() {
			return new EurekaHealthCheckHandler(this.healthAggregator);
		}
}

snippet 42

As you can see, the only relevant thing is whether the ‘eureka.client.healthcheck.enabled’ property is set to true in ‘application.yml’ or not.

-hystrix-patch
I already mentioned the functionality of this project a little in the detailed explanation of the ‘process-service’ project. For our transactional orchestrations we need Hystrix's ‘@HystrixCommand’ to behave differently. Normally, in the case of a non-business exception, it will call the fallback methods, which in our case would cause a call to the Rest Facade; but then we can't guarantee transactional integrity, so we have to change the behavior of ‘@HystrixCommand’.

For this purpose, I implemented the ‘@TransactionalHystrixCommand’ annotation, which uses another aspect and another command (both also defined in this project) to prevent Hystrix from calling the fallback methods.

Most of the classes in this project exist only to make it possible to modify the aspect defining the behavior of ‘@HystrixCommand’. You will notice that I had to place the subclass ‘TransactionalGenericCommand’ in the same package as the Netflix classes, because one of the superclass constructors has ‘package’ visibility (it would be nice of the Netflix developers to remove this obstacle). Otherwise, all the classes are slight modifications of the Netflix classes, only to be able to use ‘TransactionalGenericCommand’ for the following purpose.
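The package-visibility constraint mentioned above can be reproduced in isolation: a subclass can only reach a package-private constructor if it sits in the same package. The class names below are made up to show the mechanism; they are not the Netflix classes.

```java
// A constructor with default (package) visibility, like the one in the
// Netflix command class, can only be called from the same package ...
class GenericCommandLike {
    final String builder;

    GenericCommandLike(String builder) { // package-private constructor
        this.builder = builder;
    }
}

// ... so the subclass must be declared in that same package, which is
// exactly the trick used for TransactionalGenericCommand.
class TransactionalCommandLike extends GenericCommandLike {
    TransactionalCommandLike(String builder) {
        super(builder);
    }
}
```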

        } catch (CommandActionExecutionException var5) {
            Throwable cause = var5.getCause();
            if(this.isIgnorable(cause)) {
                throw new HystrixBadRequestException(cause.getMessage(), cause);
            }
            Boolean isTransactionalOrchestration = TransactionalOrchestrationThreadLocal.getTransactionalOrchestrationThreadLocal();
            if(isTransactionalOrchestration != null && isTransactionalOrchestration.booleanValue()) {
               throw new HystrixBadRequestException(cause.getMessage(), cause);
            } else if(cause instanceof Exception) {
                throw (Exception)cause;
            } else {
                throw Throwables.propagate(cause);
            }
        }

snippet 43

the block…

            if(TransactionalOrchestrationThreadLocal.getTransactionalOrchestrationThreadLocal()) {
            	throw new HystrixBadRequestException(cause.getMessage(), cause);
            }

snippet 44

now ensures that if a HystrixCommand runs under a ‘Transactional Orchestration’, it will not trigger the fallback methods.
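The ‘TransactionalOrchestrationThreadLocal’ consulted in these checks is not shown above; a minimal sketch of what such a holder can look like follows. The class and method names are taken from the snippets, but the body is my assumption, not the project’s actual source.

```java
// Hypothetical reconstruction of the thread-local flag consulted by the
// patched Hystrix command; the real class lives in the hystrix-patch project.
class TransactionalOrchestrationThreadLocal {
    // Each thread carries its own flag; null means "not running inside a
    // Transactional Orchestration".
    private static final ThreadLocal<Boolean> FLAG = new ThreadLocal<>();

    public static Boolean getTransactionalOrchestrationThreadLocal() {
        return FLAG.get();
    }

    public static void setTransactionalOrchestrationThreadLocal(Boolean value) {
        FLAG.set(value);
    }

    // Should be called when the orchestration finishes, otherwise the flag
    // leaks into the next request served by the same pooled thread.
    public static void remove() {
        FLAG.remove();
    }
}
```

With the flag set, the catch block in snippet 43 rethrows the cause as a ‘HystrixBadRequestException’, which Hystrix by design never routes to a fallback.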

‘TransactionalHystrixAspect’ now has a ‘pointcut’ to intercept the calls marked with the ‘@TransactionalHystrixCommand’ annotation.

    @Pointcut("@annotation(org.salgar.hystrix.transaction.annotation.TransactionalHystrixCommand)")
    public void hystrixCommandAnnotationPointcut() {
    }

snippet 45

and the only other major modification in the class is the following part in the ‘methodsAnnotatedWithHystrixCommand’ method…

HystrixInvokable invokable = TransactionalHystrixCommandFactory.getInstance().create(metaHolder);

snippet 46

which uses ‘TransactionalHystrixCommandFactory’ instead of ‘HystrixCommandFactory’.

The implementation of ‘TransactionalHystrixCommandFactory’ also only changes….

   executable = new TransactionalGenericCommand(HystrixCommandBuilderFactory.getInstance().create(metaHolder));

snippet 47

the return type of the factory to ‘TransactionalGenericCommand’.

-health-check
This project contains the classes necessary for the Health Check feature of Netflix Eureka. When Netflix/Spring Boot detects classes implementing the ‘org.springframework.boot.actuate.health.HealthIndicator’ interface, it automatically uses them to perform health checks. If the result of such a check is negative, the corresponding service is set to the ‘DOWN’ state in Netflix Eureka.

In this project you will see two implementations of this interface: ‘RestHealthIndicator’, which checks the health of the Rest Facades (one single Micro Service), and ‘ProcessHealthIndicatorImpl’ for the process/orchestration services (which checks the health of all the services that the process/orchestration service depends on).

For example, the implementation of ‘RestHealthIndicator’ looks like the following….

package org.salgar.healthcheck;

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;

public class RestHealthIndicator<T extends HealthCheck> implements HealthIndicator {
	private T service;
	
	public RestHealthIndicator(T service) {
		this.service = service;
	}
	
	@Override
	public Health health() {
		try {
			String result = service.giveAlive();
			if(result == null || "".equals(result)) {
				return Health.down().withDetail("result", result).build();
			}
		} catch (Throwable t) {
			return Health.down().withDetail("" + t.getMessage(), t).build();
		}
		return Health.up().build();
	}
}

snippet 48

In this feasibility study, this health check is extremely simple and only checks whether the JMX beans return a certain string. In a real life application, it can be made as complex as necessary: checking whether a database connection is available, whether the services return a reference data set successfully, etc… This health check response is available from every JMX Bean, because they all have to implement the ‘HealthCheck’ interface in our system.

package org.salgar.healthcheck;

public interface HealthCheck {
	final static String alive_signal = "We are alive!";
	
	public String giveAlive();
}

snippet 49

This method must return a value that signals the health of our application.
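As an illustration, the simplest possible implementation of this contract just echoes the constant. The ‘ProductServiceHealth’ class name below is hypothetical, and the interface from snippet 49 is repeated only so the sketch compiles on its own.

```java
// The HealthCheck contract from snippet 49, repeated so this sketch is
// self-contained; ProductServiceHealth is a hypothetical service bean.
interface HealthCheck {
    String alive_signal = "We are alive!";

    String giveAlive();
}

class ProductServiceHealth implements HealthCheck {
    @Override
    public String giveAlive() {
        // A real bean would verify its own resources (database connection,
        // reference data, etc.) before answering with the alive signal.
        return HealthCheck.alive_signal;
    }
}
```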

package org.salgar.healthcheck;

import java.util.List;

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;

public class ProcessHealthIndicatorImpl implements HealthIndicator {
	List<? extends HealthCheck> services;
	
	public ProcessHealthIndicatorImpl() {
	}
	
	public ProcessHealthIndicatorImpl(List<? extends HealthCheck> services) {
		this.services = services;
	}

	@Override
	public Health health() {
		for (HealthCheck service : services) {
			if(service == null) {
				return Health.down().withDetail("Service is null!", null).build();
			}
			try {
				String result = service.giveAlive();
				if(result == null || "".equals(result)) {
					return Health.down().withDetail("result", result).build();
				}
			} catch (Throwable t) {
				return Health.down().withDetail("" + t.getMessage(), t).build();
			}
		}
		
		return Health.up().build();
	}

	public void setServices(List<? extends HealthCheck> services) {
		this.services = services;
	}
}

snippet 50

‘ProcessHealthIndicatorImpl’ fulfills the same requirement for Process/Orchestration services: it iterates over all the services cited in ‘@TransactionalFanout’ and checks their health. If any one of them fails, this class sets the Process/Orchestration service marked with the ‘@TransactionalFanout’ annotation to the DOWN state in Eureka.
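The aggregation rule itself can be sketched without the Spring Actuator types. The following is my simplification of the ‘ProcessHealthIndicatorImpl’ loop, reduced to plain strings; it is not the project’s actual code.

```java
import java.util.Arrays;
import java.util.List;

// Dependency-free sketch of the rule in ProcessHealthIndicatorImpl: one
// failing dependency takes the whole process/orchestration service DOWN.
class ProcessHealthSketch {
    interface HealthCheck {
        String giveAlive();
    }

    static String aggregate(List<HealthCheck> services) {
        for (HealthCheck service : services) {
            try {
                String result = service.giveAlive();
                if (result == null || result.isEmpty()) {
                    return "DOWN"; // an empty answer marks the dependency unhealthy
                }
            } catch (RuntimeException e) {
                return "DOWN"; // a throwing dependency also takes us DOWN
            }
        }
        return "UP"; // every dependency answered with a non-empty signal
    }

    public static void main(String[] args) {
        HealthCheck healthy = () -> "We are alive!";
        HealthCheck broken = () -> "";
        System.out.println(aggregate(Arrays.asList(healthy, healthy))); // UP
        System.out.println(aggregate(Arrays.asList(healthy, broken)));  // DOWN
    }
}
```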

-support/eureka
This project contains the necessary configuration to start the Spring Boot Eureka Server. There are three things to pay attention to: the startup/configuration class for Spring Boot and the two YAML configuration files, ‘bootstrap.yml’ and ‘application.yml’.

package org.salgar.eureka;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
@EnableDiscoveryClient
public class EurekaApplication {
	public static void main(String[] args) {
		SpringApplication.run(EurekaApplication.class, args);
	}
}

snippet 51

As I said, it is extremely easy, but Spring Boot naturally does lots of magic for us behind the scenes. The ‘@SpringBootApplication’ annotation signals that this class should initialize the Spring Boot components, the ‘@EnableEurekaServer’ annotation enables the configuration of the Eureka Server via ‘EurekaServerConfiguration’, and the ‘@EnableDiscoveryClient’ annotation configures the Eureka Discovery Client via the ‘EurekaDiscoverClientConfiguration’ class.

‘bootstrap.yml’ ….

spring:
  application:
    name: eureka
  cloud:
    config:
      uri: ${vcap.services.${PREFIX:}configserver.credentials.uri:http://user:password@localhost:8888}

snippet 52

defines the Spring Boot application name and the configuration URI.

‘application.yml’…

server:
  port: 8761
security:
  user:
    password: ${eureka.password} # Don't use a default password in a real app

eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false
  server:
    waitTimeInMsWhenSyncEmpty: 0
  password: ${SECURITY_USER_PASSWORD:password}

snippet 53

defines the port that the Eureka Server will listen on and the Eureka client registry values.

One final trick here: to start the Eureka Server via Spring Boot, I didn’t want to fight with managing all the dependencies for the classpath, so I used the ‘spring-boot-maven-plugin’ to create a single executable jar via the following configuration in the ‘pom’ file.

<plugin>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-maven-plugin</artifactId>
	<executions>
		<execution>
			<goals>
				<goal>repackage</goal>
			</goals>
		</execution>
	</executions>
</plugin>

snippet 54

then the ‘jar’ that is created can be run with a simple command like ‘java -jar eureka-1.0-SNAPSHOT.jar’.

-support/zuul
Netflix Zuul acts as the Gateway/Edge Server for our solution. Normally we don’t want our Micro Services accessed uncontrolled over the intranet/internet; we want the communication to occur only over the Process/Orchestration layer. For this purpose we will start Netflix Zuul as a Spring Boot application, and we need to do some configuration for the Gateway functionality.

@SpringBootApplication
@EnableZuulProxy
public class ZuulApplication {
	public static void main(String[] args) {
		SpringApplication.run(ZuulApplication.class, args);
	}
}

snippet 55

For the configuration, we first need two annotations: the ‘@SpringBootApplication’ annotation signals that this application will be started as a Spring Boot application, and the ‘@EnableZuulProxy’ annotation signals Spring Boot to trigger the ‘ZuulProxyConfiguration’ class to configure the Zuul Proxy. The configuration values are provided by two YAML configuration files, ‘bootstrap.yml’ and ‘application.yml’.

spring:
  application:
    name: zuul
  cloud:
    config:
      uri: ${vcap.services.${PREFIX:}configserver.credentials.uri:http://user:password@localhost:8888}

snippet 56

‘bootstrap.yml’ is mainly for Spring Boot and does not have any configuration values other than the Spring Boot application name and the config URI.

info:
  component: Zuul Server

endpoints:
  restart:
    enabled: true
  shutdown:
    enabled: true
  health:
    sensitive: false

zuul:
  ignoredServices: "*"
  routes:
    product_process-1_0:
      path: /product-process-1.0-SNAPSHOT/**
      serviceId: product-process_1.0-SNAPSHOT
      stripPrefix: false

server:
  port: 8765

logging:
  level:
    ROOT: INFO
    org.springframework.web: INFO
    com.netflix: DEBUG 

snippet 57

‘application.yml’ is much more interesting. It contains configuration for the endpoint lifecycles, the server port and, most importantly, the Zuul Gateway configuration: mainly ‘ignoredServices: “*”‘, which prevents any direct access to our Micro Services/Rest Facades, and then the routes, which open access only to the services we choose, in this case the Process/Orchestration services.

  routes:
    product_process-1_0:
      path: /product-process-1.0-SNAPSHOT/**
      serviceId: product-process_1.0-SNAPSHOT
      stripPrefix: false

snippet 58

the route defined here accepts requests on the path ‘/product-process-1.0-SNAPSHOT/**’ and forwards them to the service ‘product-process_1.0-SNAPSHOT’ as it is defined in Netflix Eureka (via the Spring Boot ‘application.yml’ in the ‘product-process’ project).
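What the route table expresses can be illustrated with a tiny prefix check. This is only an illustration of the mapping in snippet 58, not Zuul’s actual matching code, and the request path in the test is made up.

```java
// Illustration of the route in snippet 58: paths starting with the configured
// prefix are forwarded to the Eureka serviceId; because stripPrefix is false,
// the full path would be kept on forwarding. Not Zuul's real matching code.
class RouteSketch {
    static final String PATH_PREFIX = "/product-process-1.0-SNAPSHOT/";
    static final String SERVICE_ID = "product-process_1.0-SNAPSHOT";

    // Returns the serviceId the request would be routed to, or null when no
    // route matches (and ignoredServices "*" rejects the request).
    static String resolveServiceId(String requestPath) {
        return requestPath.startsWith(PATH_PREFIX) ? SERVICE_ID : null;
    }
}
```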

With this configuration, the JBoss instances should not be open to the Internet/Intranet; all traffic from these sources should come to the Zuul Server, which will direct it to the Process/Orchestration services. This is important for two reasons. First, security: nobody should access our interfaces uncontrolled. Secondly, if you open your interfaces uncontrolled, you will have lots of integration problems; let’s say you are changing something in your Micro Services, whom are you going to notify about these changes so they can adapt their applications? Trust me, having the Zuul Gateway will help you a lot with change management, because you can track from a central place who is using your services.

As in ‘support\eureka’, I also used the ‘spring-boot-maven-plugin’ in this project to create a single executable jar file, so Zuul can be started via the command ‘java -jar zuul-1.0-SNAPSHOT.jar’.

-assembler
This project is really relevant for our deployment. You certainly noticed that we are not using an Enterprise Application Archive (EAR), because we want to be able to decide how many instances of each Micro Service we deploy and where. If we used EAR deployments, this would push us again in the direction of the Monolith. We also don’t want to deploy every artifact manually, and at this point the Maven Assembly plugin is a really useful tool that will help us.

With the Maven Assembly plugin we can use a description file like the following to put our application together…

<assembly>
	<id>package</id>
	<formats>
		<format>dir</format>
	</formats>
	<includeBaseDirectory>false</includeBaseDirectory>
	<baseDirectory>${project.artifactId}</baseDirectory>
	<dependencySets>
		<dependencySet>
			<includes>
				<include>org.salgar:product_api</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>modules/system/layers/base/org/salgar/product/api/1_0/main</outputDirectory>
			<unpack>false</unpack>
		</dependencySet>
		<dependencySet>
			<includes>
				<include>org.salgar:order_api</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>modules/system/layers/base/org/salgar/order/api/1_0/main</outputDirectory>
			<unpack>false</unpack>
		</dependencySet>
		<dependencySet>
			<includes>
				<include>org.salgar:customer_api</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>modules/system/layers/base/org/salgar/customer/api/1_0/main</outputDirectory>
			<unpack>false</unpack>
		</dependencySet>
		<dependencySet>
			<includes>
				<include>org.salgar:product_sar</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>standalone/deployments</outputDirectory>
			<unpack>false</unpack>
		</dependencySet>
		<dependencySet>
			<includes>
				<include>org.salgar:order_sar</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>standalone/deployments</outputDirectory>
			<unpack>false</unpack>
		</dependencySet>
		<dependencySet>
			<includes>
				<include>org.salgar:customer_sar</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>standalone/deployments</outputDirectory>
			<unpack>false</unpack>
		</dependencySet>
		<dependencySet>
			<includes>
				<include>org.salgar:product_rest</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>standalone/deployments</outputDirectory>
			<!-- outputFileNameMapping>${artifact.artifactId}.${artifact.extension}</outputFileNameMapping-->
			<unpack>false</unpack>
		</dependencySet>
		<dependencySet>
			<includes>
				<include>org.salgar:order_rest</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>standalone/deployments</outputDirectory>
			<!-- outputFileNameMapping>${artifact.artifactId}.${artifact.extension}</outputFileNameMapping-->
			<unpack>false</unpack>
		</dependencySet>
		<dependencySet>
			<includes>
				<include>org.salgar:customer_rest</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>standalone/deployments</outputDirectory>
			<!-- outputFileNameMapping>${artifact.artifactId}.${artifact.extension}</outputFileNameMapping-->
			<unpack>false</unpack>
		</dependencySet>
		<dependencySet>
			<includes>
				<include>org.salgar:product-process</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>standalone/deployments</outputDirectory>
			<!-- outputFileNameMapping>${artifact.artifactId}.${artifact.extension}</outputFileNameMapping-->
			<unpack>false</unpack>
		</dependencySet>
		<dependencySet>
			<includes>
				<include>org.salgar:eureka</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>utility</outputDirectory>
			<unpack>false</unpack>
		</dependencySet>
		<dependencySet>
			<includes>
				<include>org.salgar:zuul</include>
			</includes>
			<useTransitiveDependencies>false</useTransitiveDependencies>
			<outputDirectory>utility</outputDirectory>
			<unpack>false</unpack>
		</dependencySet>
	</dependencySets>
</assembly>

snippet 59

this is the assembly description file for the Maven Assembly plugin. Basically, it places the Maven artifacts that we defined as dependencies in ‘pom.xml’ into certain directories, like…

<dependencySet>
	<includes>
		<include>org.salgar:product-process</include>
	</includes>
	<useTransitiveDependencies>false</useTransitiveDependencies>
	<outputDirectory>standalone/deployments</outputDirectory>
	<unpack>false</unpack>
</dependencySet>

snippet 60

in this case the dependency ‘org.salgar:product-process’ is copied to the directory ‘standalone/deployments’, which will be the deployment directory for JBoss. This way all the dependencies are copied to the necessary locations: the ‘api’ classes go to ‘modules/system/layers/base’ for the modular classloading of JBoss, support projects like ‘Eureka’ and ‘Zuul’ go to the utility directory, and nearly all the rest goes to the ‘deployments’ directory.

The final structure can be copied as a package to its destination.

General Topics
Now that we discussed the project structure and the functionality that every project provides in detail, I want to say something about some general topics, mainly Performance/Transaction Optimization and the Versioning Concept.

Performance/Transaction, Optimizations and Design
Until now I tried to define a template for building a Transactional Micro Services application, but we are now coming to a very important part: your experience. You cannot blindly apply the concepts of this feasibility study; it doesn’t make sense to make every Micro Service transaction capable or let every one communicate over JMX. Your Business Case must decide this: if you have a Micro Service that will only read values from a database, it does not make much sense to implement it as transaction capable.

Or, if failover safety is more important for your Business Case than performance and transaction capability, then implement those services as plain REST instead of JMX. As I said, you have to design your system depending on your Business Case and your experience, but this template will support you whichever direction you want to go.

The plug-and-play nature of this template ensures that we can decide independently in our application where performance/transactions are important and where scalability/failover is important. For example, if you want failover safety, don’t install the JMX Beans of your Micro Service in the JVM where your Process/Orchestration lies; this way the Process/Orchestration Services will always get the instances of the other Micro Services from Netflix Eureka. If you need performance and failover is secondary, then install the JMX Bean in the same JVM as the Process/Orchestration Micro Service. The key point is that you don’t have to configure anything, you just have to install it.

When you are designing Micro Services that might need transaction functionality, try to group each Process/Orchestration Micro Service to do only read operations or only write operations; this way you can achieve more optimization. (In the case of a failure, read operations can find a healthy instance in Netflix Eureka and continue to function, but for save operations, i.e. transactional orchestration, it is not a good idea, or even possible, to fail over via Netflix Eureka.)

Many of the projects I am involved in that try to go from a Monolith to Micro Services struggle in the design part, which has two aspects: how one can divide and conquer the Monolith, and the necessary design choices for Performance/Fallback and Transactions.

I strongly encourage you to divide your Monolith so that Functional Columns are constructed via the Process/Orchestration level and Individual Columns for the Micro Services that are used by several of these Process/Orchestration layers, like we did here: Functional Columns like ProcessService, AccountingService or AuthorizationService dealing with other Micro Services like Customer, Product, Account, Order, Authorization, etc….

Versioning Concept
There is one more topic that I want to point out here: Versioning, which is an inseparable part of Micro Services and which you will undoubtedly encounter in your projects. Normally people solve this problem in artificial ways, like creating Java packages for every new version of the Micro Service or with URL conventions, mostly dictated by the classloader mechanisms of the Java application servers.

I can think of four possible ways of organizing versioning for Micro Services, which I will point out here. One of them you already saw in picture 1.

Scenario 1:
In this scenario, we handle the versioning with the naming convention of the API, SAR and RestFacade projects; the Process/Orchestration layer then ports these versions to the outside world. This solution prototype is built in the ‘scenario_1’ branch on GitHub.

scenario_1
picture 7

Scenario 2:
This scenario is nearly the same as ‘scenario 1’; the only major difference is that, instead of having separate Rest Facade projects for different versions, it uses one single Rest Facade project and exposes the different versions with naming conventions like ‘giveProductV1’, ‘giveProductV2’, ‘saveProductV1’, ‘saveProductV2’, etc…. The implementation change from ‘scenario 1’ is trivial, so I didn’t place it on GitHub; if you are interested, you can see it as an exercise for the reader.

scenario_2
picture 8

Scenario 3:
This is my favorite option; I personally dislike artificial ways of versioning Micro Services. Versioning of Java package names, versioning of class names or versioning of project names are all things imposed on us because of the limitations of Java, or partially out of laziness. It is a limitation of Java because of the classloading problems: think about having a 1.0-SNAPSHOT and a 2.0-SNAPSHOT of a project that contain exactly the same Java package and class structures; under normal conditions you can’t deploy both, because Java can’t decide which is the correct version. But we already discussed above that this is no problem for us because of JBoss’s Modular Classloading, or the incoming JSR-376 with Java 9: we can define exactly in JBoss what will be loaded by which classloader and for which SARs and WARs it will be available. As for the other argument you hear in projects, that having two versions of the project on GitHub and managing them is too difficult: I mean, come on.

scenario_3
picture 9

On GitHub you will see two branches: ‘scenario3_1’ is ‘scenario 3’ with version ‘1.0-SNAPSHOT’ of the project, and ‘scenario3_2’ is version ‘2.0-SNAPSHOT’. When the ‘assembly’ project is executed, it will produce the ‘1.0-SNAPSHOT’ and ‘2.0-SNAPSHOT’ versions of the project to deploy side-by-side in JBoss. If you look on GitHub, the Java projects contain exactly the same Java package names, classes and so on; the classloader and the configuration in ‘jboss-deployment-structure.xml’ decide which class will be available to which project.
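For orientation, a ‘jboss-deployment-structure.xml’ binding a deployment to the 2.0 module could look like the following sketch (the exact files in the branches may differ):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jboss-deployment-structure>
    <deployment>
        <dependencies>
            <!-- Bind this deployment to the 2.0-SNAPSHOT API module -->
            <module name="org.salgar.order.api.2_0" />
        </dependencies>
    </deployment>
</jboss-deployment-structure>
```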

In the next screenshot you can also see that the same project names are used with different versions in the JBoss deployment directory.

scenario_3_deployment
picture 10

This versioning scheme will also affect the API projects and how they are configured. Before we start configuring those, I advise you to copy the existing JBoss to another directory and make the changes there if you plan to further test Scenario 1. We have to adapt ‘module.xml’ to reflect the versioning changes.

For example, ‘module.xml’ for Order API 2.0-SNAPSHOT:

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.3" name="org.salgar.order.api.2_0">
    <resources>
        <resource-root path="order_api-2.0-SNAPSHOT.jar"/>
    </resources>
 
    <dependencies>
        <module name="org.salgar.healthcheck.api" export="true"/>
		<module name="javax.persistence.api" export="true"/>
		<module name="org.salgar.product.api.2_0" />
		<module name="org.salgar.customer.api.2_0" />
		<module name="org.hibernate" />
		<module name="org.javassist" />
    </dependencies>
</module>

snippet 61

Since the artifact name is now ‘order_api-2.0-SNAPSHOT’, we have to change this in ‘module.xml’.

One consequence of this change is that we can no longer have one Product-Process project serve several versions of the project; we have to have a separate one for every version. This also means some changes in the Zuul route configuration, which you can see on GitHub.

Scenario 4:
This scenario, from the versioning point of view, is quite similar to ‘Scenario 2 and 3’; the major difference is that we transfer the Process/Orchestration layer to another application server (mainly to another JVM), in case it is a problem for some people to have the whole JBoss technology stack. In this solution the application server can be anything: WebLogic, WebSphere, JBoss, even non application servers like Tomcat or plain Spring Boot. Of course, you have to keep in mind that when the Process/Orchestration layer communicates with the JMX Beans remotely, the cost of serialization will be there. I am only mentioning this scenario for completeness; the serialization costs make this solution a no-go. Again, you can treat the implementation as an exercise for the reader.

scenario_4
picture 11

Project Environment Preparation
Now let’s talk a little bit about what you need to run the feasibility application: how we get the code from GitHub, how we build the application with Maven, and what we have to prepare in JBoss to deploy the application.

Source Code
To get the source code of the feasibility study, first of all you have to install the ‘Git’ version control application, which can be downloaded from the following URL: Git Download.

You can download the source code from GitHub: first switch to the directory into which you want to download the source code and then execute the command

git clone https://github.com/mehmetsalgar/micro_tran.git

snippet 62

Building the Project
Now that we downloaded the project, we can build it via Maven; if you need it, you can download Maven from Maven Download.

You can execute the command ‘mvn clean install’ from the root directory of the project, ‘micro_tran’; a successful execution will produce log output like the following.

maven_build
picture 12

JBoss Installation
You can download JBoss from the following link: JBoss Download. After you extract the files from the ZIP file, you should have the following directory structure.

jboss
picture 13

Mostly JBoss does not need any additional configuration, but you can set some parameters in the file ‘$JBOSS_HOME\bin\standalone.conf’ (on Windows, standalone.conf.bat), like…

set "JAVA_HOME=C:\Java\1.8.0_51"

snippet 63

or memory parameters

rem # JVM memory allocation pool parameters - modify as appropriate.
set "JAVA_OPTS=-Xms1024M -Xmx2048M -XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=512M"

snippet 64

or enabling remote debugging….

rem # Sample JPDA settings for remote socket debugging
set "JAVA_OPTS=%JAVA_OPTS% -agentlib:jdwp=transport=dt_socket,address=8787,server=y,suspend=n"

snippet 65

Modular Classloading
As we discussed before, we are going to use the modular classloading of JBoss, so we have to prepare something. I actually wrote a Maven plugin to do this for us and I will publish it as well, but at the moment we have to do it manually. The feasibility study needs the declaration of 4 JBoss modules: ‘health-check-api’, ‘customer-api’, ‘product-api’ and ‘order-api’. For these we have to write the module descriptors and place them in a special directory.

Module definition for ‘health-check-api’ looks like the following….

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.3" name="org.salgar.healthcheck.api">
    <resources>
        <resource-root path="health-check-api-1.0-SNAPSHOT.jar"/>
    </resources>
</module>

snippet 66

for ‘customer-api’ for versions ‘v1’ and ‘v2’….. (I am only showing ‘v1’ here; you can modify the parameters yourself for ‘v2’)

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.3" name="org.salgar.customer.api.1_0">
    <resources>
        <resource-root path="customer_api_v1-1.0-SNAPSHOT.jar"/>
    </resources>
 
    <dependencies>
        <module name="org.salgar.healthcheck.api" export="true"/>
		<module name="javax.persistence.api" export="true"/>
		<module name="org.hibernate" />
		<module name="org.javassist" />
    </dependencies>
</module>

snippet 67

for ‘product-api’ for version ‘v1’ and ‘v2’…

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.3" name="org.salgar.product.api.1_0">
    <resources>
        <resource-root path="product_api_v1-1.0-SNAPSHOT.jar"/>
    </resources>
 
    <dependencies>
        <module name="org.salgar.healthcheck.api" export="true"/>
		<module name="javax.persistence.api" export="true"/>
		<module name="org.hibernate" />
		<module name="org.javassist" />
    </dependencies>
</module>

snippet 68

and for ‘order-api’ for version ‘v1’ and ‘v2’….

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.3" name="org.salgar.order.api.1_0">
    <resources>
        <resource-root path="order_api_v1-1.0-SNAPSHOT.jar"/>
    </resources>
 
    <dependencies>
        <module name="org.salgar.healthcheck.api" export="true"/>
		<module name="javax.persistence.api" export="true"/>
		<module name="org.salgar.product.api.1_0" />
		<module name="org.salgar.customer.api.1_0" />
		<module name="org.hibernate" />
		<module name="org.javassist" />
    </dependencies>
</module>

snippet 69

as you can see, all the modules have ‘health-check-api’ as a dependency, since we use it in all our micro services to report their health status to Netflix Eureka, and ‘javax.persistence.api’, ‘org.hibernate’ and ‘org.javassist’, since we use Hibernate\JPA annotations for persistence (I placed these on the api classes because I didn’t want to complicate the feasibility project too much; if you don’t want to pollute your api classes with Hibernate\JPA, you can define only interfaces here and the concrete implementations in another layer). ‘order-api’ additionally depends on ‘product-api’ and ‘customer-api’, because we have JPA relation definitions from ‘Order’ to ‘Customer’ and ‘Product’.

Datasources
Since we are discussing a feasibility study about Micro Services and Transactions, we need a database to work with. At this point I will explain how to set up a Hypersonic database for JBoss, since this will be the easiest for you to set up; but because I will configure Hypersonic as an embedded database, it will not be possible to test real life scenarios like multiple JBoss instances accessing the database. There is another branch on GitHub, ‘scenario_1_mysql’, which contains the project configured for a MySQL database; I will explain the setup details for it in the appendix.

For a Hypersonic database we have to configure a ‘Datasource’ in JBoss, which happens with the following file.

<?xml version="1.0" encoding="UTF-8"?>

<datasources xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://www.jboss.org/ironjacamar/schema/datasources_1_0.xsd">
  <datasource jndi-name="java:jboss/datasources/microDS" pool-name="microDSPool" enabled="true" use-java-context="true">
    <connection-url>jdbc:h2:${jboss.server.data.dir}/micro/micro</connection-url>
    <driver>h2</driver>
    <security>
      <user-name>sa</user-name>
      <password>${org.salgar.ds.pass}</password>
    </security>
  </datasource>
  <drivers>
	<driver name="h2" module="com.h2database.h2"> 
	       <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class>
	</driver>
  </drivers>
</datasources>

snippet 70

This is a standard configuration for a Hypersonic database; there are only two interesting parts. The first is getting the password for the connection from a JVM parameter, so you should start JBoss with the parameter '-Dorg.salgar.ds.pass=YourPassword'. The second is the configuration of where the Hypersonic data is saved; for this we use the JBoss internal property 'jboss.server.data.dir', which points to where JBoss also saves its local files.
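On Linux, for example, the startup could look like the following command fragment (the path is an assumption, adapt it to your installation):

```shell
# pass the datasource password to JBoss as a JVM system property
# (path and password are placeholders, adapt them to your installation)
$JBOSS_HOME/bin/standalone.sh -Dorg.salgar.ds.pass=YourPassword
```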

Now we have one task left to complete the database configuration: we have to create the database objects necessary to run the feasibility study. The SQL file, called 'create.sql', can be found under the directory 'micro_tran'. You can see its contents below.

CREATE TABLE ORDER_TA (ID INT PRIMARY KEY, CUSTOMERID INT, COMMITDATE NUMBER(15), STATUS INT);
CREATE TABLE PRODUCT_TA (PRODUCTID INT PRIMARY KEY,  NAME VARCHAR(255));
CREATE TABLE ORDER_PRODUCT_TA (PRODUCTID INT, ORDERID INT);
CREATE TABLE CUSTOMER_TA (ID INT PRIMARY KEY,   NAME VARCHAR(255), FIRSTNAME VARCHAR(255), BIRTHDATE NUMBER(12));
CREATE SEQUENCE HIBERNATE_SEQUENCE;
INSERT INTO ORDER_TA (ID,   COMMITDATE , STATUS) VALUES (1, 345345345, 5);
INSERT INTO PRODUCT_TA(PRODUCTID, NAME  ) VALUES(9999, 'Test');

snippet 71

Running this script in JBoss is a little bit tricky: we have to install a Web Application in JBoss to access the Hypersonic database. The application can be downloaded from the following URL Hypersonic Console.

You can reach the application at "http://localhost:8080/h2console". You will be greeted with a login screen; enter the identification information from the datasource, which will look like the following.

hypersonic_login
picture 14

After the login, you can execute the content of 'create.sql', as you can see below.


picture 15

Netflix Eureka
As we discussed a lot in the previous chapter, our application heavily depends on the services provided by Netflix Eureka, so we have to start that as well. Luckily, thanks to Spring Boot, it is a trivial task: calling 'java -jar eureka-1.0-SNAPSHOT.jar' in the directory where the jar file lies will be enough.

I also used a little trick to make life easier for you, so that you don't have to fight with the necessary dependencies for Eureka: the following snippet, when placed in the pom.xml, will also package all the necessary dependencies for Eureka.

<plugin>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-maven-plugin</artifactId>
	<executions>
		<execution>
			<goals>
				<goal>repackage</goal>
			</goals>
		</execution>
	</executions>
</plugin>

snippet 72

When you bring up Eureka and the JBoss Application Server, you should see the following picture in the Eureka console.

eureka
picture 16

Here you see that all of our services are in the UP state (one thing I didn't mention: we will use Zuul as gateway and Turbine as monitoring tool, and we need the messaging tool RabbitMQ in our project for messaging between these components. If you start the application without configuring and starting RabbitMQ, you will see 'Product_Process' in the DOWN state, so don't be alarmed).

This brings us to a point that I especially want to mention, the 'red\green' pattern. Think about your conventional applications: when you want to deploy a new version of your application, you always have downtime, and depending on the size and complexity of your system it can take 5-6 hours or maybe more. This can be acceptable if you are deploying your application every 3 months or so, but it is against the Agile agenda, isn't it; we want to go to our customers more often, and that is a very important point for Micro Services. It is also not acceptable to have 5-6 hours of downtime every month, so what should we do?

Netflix Eureka and Micro Services help us in this area with the 'red\green' pattern. 'green' will in this case be the currently working copy of your Micro Services; when you deploy the new version of your Micro Services, they will be deployed to the 'red' zone, marked as 'OUT_OF_SERVICE', until you check\test and are sure that everything is operating correctly. For this behavior, we need the following configuration parameters in 'application.yml', for example in the 'product_process' Micro Service.

eureka:
  instance:
    initialStatus: OUT_OF_SERVICE
    instanceEnabledOnit: false

snippet 73

These parameters signal Eureka that the service will initially not be available for traffic and will be marked as 'OUT_OF_SERVICE' in Eureka, which can be seen in the following screenshot.

eureka_out_of_service
picture 17

Now you have tested your Micro Services and believe it is time to make them accessible, so you have to switch them from the 'red' zone to the 'green' zone; how do we do that? Luckily Eureka provides a REST interface to manipulate the state of the Micro Services it controls, which you can see in the following API definition Eureka REST API.

With this API we can manipulate the state of every single Micro Service in Eureka, for example with the following command (the command must be executed as an HTTP PUT, which is the reason we use 'curl').

curl --noproxy localhost -X PUT http://localhost:8761/eureka/apps/PRODUCT_PROCESS/micro.salgar.org:product_process:8080/status?value=UP

snippet 74

Voila, you updated your Micro Service to a new version with zero downtime….
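If you want to script this switch instead of typing curl by hand, the status-override URL can also be composed programmatically. The following is a minimal Java sketch; the class and method names are my own invention, only the URL layout follows the Eureka REST API call shown in snippet 74 (performing the actual HTTP PUT, e.g. with java.net.http.HttpClient, is left out):

```java
// Hypothetical helper that builds the Eureka status-override URL used above.
public class EurekaStatusUrl {

    static String statusUrl(String eurekaBase, String app, String instanceId, String status) {
        // layout: {base}/eureka/apps/{APP}/{instanceId}/status?value={STATUS}
        return String.format("%s/eureka/apps/%s/%s/status?value=%s",
                eurekaBase, app, instanceId, status);
    }

    public static void main(String[] args) {
        System.out.println(statusUrl("http://localhost:8761", "PRODUCT_PROCESS",
                "micro.salgar.org:product_process:8080", "UP"));
    }
}
```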

Netflix Zuul
Until now we allowed direct access to our Micro Services, and actually this is definitely not a good idea. The first reason is security: it is quite risky to give completely open access to our Micro Services, and we should also keep track of who is using our APIs; when the number of clients increases, things get chaotic quite fast (for example, if you make a change that breaks the API contract, how are you going to notify your clients if you don't know who they are?). If we provide a single point of access, we will have better control over these topics (I will not explore the subject in this blog, but we could also use OAuth2 for security purposes if we implement this single point of access). The second point is again the topic we discussed in the previous chapter: if we want to implement the red\green pattern, it is better that our clients don't have direct access to our Micro Services.

Fortunately, with the Zuul library, Netflix again provides a solution to the problems discussed above.

Again the Maven plugin 'spring-boot-maven-plugin' packs all our dependencies into one single executable jar; we only have to go to the directory 'micro_tran\support\target' and execute 'java -jar zuul-1.0-SNAPSHOT.jar', and now our Micro Services are only accessible over the routes we defined in Zuul, via the 'localhost:8765' address.
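For orientation, a Zuul route definition in the gateway's 'application.yml' typically looks like the following. This is only an illustrative sketch; the path and serviceId below are assumptions based on the service names in this article, not taken from the project's actual configuration.

```yaml
zuul:
  routes:
    product-process:
      path: /product-process/**        # external path exposed on localhost:8765
      serviceId: PRODUCT_PROCESS       # Eureka application name to forward to
```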

JBoss Deployment and Startup
Now that all the prerequisites are done, we are ready to deploy and start JBoss.

The artifacts from the 'micro_tran\assembler\target\assembler-1.0-SNAPSHOT-package' directory must be copied to our JBoss installation directory, '$JBOSS_HOME'.

Now that we have configured JBoss, Modular Classloading, the Datasources, Netflix Eureka and Zuul, we can start the JBoss Application Server. Calling '$JBOSS_HOME\bin\standalone.sh' ('standalone.bat' on Windows) will start it; after a successful start you should see something similar to the following output in the logs.

2016-12-13 13:31:42,162 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "product_v2_rest.war" (runtime-name : "product_v2_rest.war")
2016-12-13 13:31:42,162 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "product_v1_rest.war" (runtime-name : "product_v1_rest.war")
2016-12-13 13:31:42,163 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "product_sar_v2-1.0-SNAPSHOT.sar" (runtime-name : "product_sar_v2-1.0-SNAPSHOT.sar")
2016-12-13 13:31:42,163 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "product_sar_v1-1.0-SNAPSHOT.sar" (runtime-name : "product_sar_v1-1.0-SNAPSHOT.sar")
2016-12-13 13:31:42,164 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "product-process.war" (runtime-name : "product-process.war")
2016-12-13 13:31:42,165 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "order_v2_rest.war" (runtime-name : "order_v2_rest.war")
2016-12-13 13:31:42,167 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "order_v1_rest.war" (runtime-name : "order_v1_rest.war")
2016-12-13 13:31:42,168 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "order_sar_v2-1.0-SNAPSHOT.sar" (runtime-name : "order_sar_v2-1.0-SNAPSHOT.sar")
2016-12-13 13:31:42,169 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "order_sar_v1-1.0-SNAPSHOT.sar" (runtime-name : "order_sar_v1-1.0-SNAPSHOT.sar")
2016-12-13 13:31:42,169 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "micro-ds.xml" (runtime-name : "micro-ds.xml")
2016-12-13 13:31:42,171 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "customer_v2_rest.war" (runtime-name : "customer_v2_rest.war")
2016-12-13 13:31:42,172 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "customer_v1_rest.war" (runtime-name : "customer_v1_rest.war")
2016-12-13 13:31:42,173 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "customer_sar_v2-1.0-SNAPSHOT.sar" (runtime-name : "customer_sar_v2-1.0-SNAPSHOT.sar")
2016-12-13 13:31:42,174 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 34) WFLYSRV0010: Deployed "customer_sar_v1-1.0-SNAPSHOT.sar" (runtime-name : "customer_sar_v1-1.0-SNAPSHOT.sar")
2016-12-13 13:31:42,329 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://127.0.0.1:9990/management
2016-12-13 13:31:42,329 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://127.0.0.1:9990
2016-12-13 13:31:42,330 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full 10.1.0.Final (WildFly Core 2.2.0.Final) started in 58131ms - Started 1905 of 2231 services (527 services are lazy, passive or on-demand)

snippet 75

Testing
Now that our application has started, we can test it. For shakedown tests, since we know the URLs of our Micro Services, we can access them directly and bypass Zuul (this makes testing easier, but external clients of our Micro Services should only access Zuul). A simple test request might look like the following. I personally use the Firefox plugin HttpRequester for quick testing of REST services. The following request, for example, will create an Order object.

URL:

http://localhost:8080/product-process/saveOrder/v1

snippet 76

Payload:

{"id":null,"name":"Snow","firstName":"John","birthDate":986756767}

snippet 77

In the file 'test_request.txt' you will find additional test requests for the other Micro Services. I will only show the ones which prove the transactional functionality of the application, but you are free to use the others as well.

If the Firefox HttpRequester is not an option for you, you can also use 'curl'; the following is a sample request to 'saveOrder/v1'.

curl --noproxy localhost -i -X POST -H "Content-Type: application/json" http://localhost:8080/product-process/saveOrder/v1 -d '{"id":null,"name":"Snow","firstName":"John","birthDate":986756767}'

snippet 78
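If you prefer to drive such a test from Java instead of curl, the same request can be built with the JDK's own HTTP client (Java 11+). This is only a sketch of the request shown in snippet 78; actually sending it requires the JBoss instance from above to be running:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class SaveOrderCall {

    // builds the same POST request as the curl command in snippet 78
    static HttpRequest buildSaveOrder(String json) {
        return HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/product-process/saveOrder/v1"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildSaveOrder(
                "{\"id\":null,\"name\":\"Snow\",\"firstName\":\"John\",\"birthDate\":986756767}");
        // HttpClient.newHttpClient().send(req, HttpResponse.BodyHandlers.ofString())
        // would actually execute it against the running server
        System.out.println(req.method() + " " + req.uri());
    }
}
```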

Conclusion:
In the end you will see that with the technologies and paradigms we are using, we can develop reliable transactional Micro Services without dealing with the complexities of a messaging/streaming system, and concentrate on our business cases instead of technology puzzles. Of course, when performance or other requirements dictate it, we can still fall back to messaging/streaming solutions, but why increase the complexity and deal with technological problems when we can solve our business problems instead?
Appendix
Appendix A
A Hypersonic database is a good solution for local development, but not exactly an ideal solution for a production-like environment. First of all we have to download and install the MySQL database, which you can do from this URL.

The installation, with the help of the wizards, is quite simple. We then have to start the MySQL Server with administrator rights, otherwise we will see some errors during startup.

In the installation directory of MySQL you will find a directory called "bin", and in it the file "mysqld.exe".

Executing the following command in this directory will start the server.

mysqld --console

snippet 79

The installation also includes 'MySQL Workbench', with which we can install the database structure for MySQL; it is contained in 'create_mysql.sql', shown below.

CREATE TABLE micro_tran.ORDER_TA (ID INT PRIMARY KEY AUTO_INCREMENT,  CUSTOMERID INT, COMMITDATE INT(15), STATUS INT);
CREATE TABLE micro_tran.PRODUCT_TA (PRODUCTID INT PRIMARY KEY AUTO_INCREMENT,  NAME VARCHAR(255), QUALITY VARCHAR(255));
CREATE TABLE micro_tran.ORDER_PRODUCT_TA (PRODUCTID INT, ORDERID INT, PRIORITY INT, VOLUME VARCHAR(255));
CREATE TABLE micro_tran.CUSTOMER_TA (ID INT PRIMARY KEY AUTO_INCREMENT,   NAME VARCHAR(255), FIRSTNAME VARCHAR(255), BIRTHDATE INT(12), SEGMENT varchar(255), STATUS INT);

INSERT INTO micro_tran.ORDER_TA (COMMITDATE , STATUS) VALUES (345345345, 5);
INSERT INTO micro_tran.PRODUCT_TA(NAME  ) VALUES('Test');

snippet 80

Finally we have to modify the datasource definition.

<?xml version="1.0" encoding="UTF-8"?>
 
<datasources xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://www.jboss.org/ironjacamar/schema/datasources_1_0.xsd">
  <datasource jndi-name="java:jboss/datasources/microMySqlDS" pool-name="microMySqlDSPool" enabled="true" use-java-context="true">
    <connection-url>jdbc:mysql://localhost:3306/micro_tran?useUnicode=true&amp;characterEncoding=utf8</connection-url>
    <driver>mysql</driver>
    <security>
      <user-name>root</user-name>
      <password>${org.salgar.ds.pass}</password>
    </security>
  </datasource>
  <drivers>
    <driver name="mysql" module="com.mysql.driver"> 
           <xa-datasource-class>com.mysql.jdbc.jdbc2.optional.MysqlXADataSource</xa-datasource-class>
    </driver>
  </drivers>
</datasources>

snippet 81

and the “persistence.xml” example for MySQL.

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="2.0"
	xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
	<persistence-unit name="PRODUCT_V1_SAR">
		<provider>org.hibernate.ejb.HibernatePersistence</provider>
		<class>org.salgar.product.api.v1.model.Product</class>

		<properties>
			<property name="hibernate.bytecode.use_reflection_optimizer"
				value="false" />
			<property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver" />
			<property name="hibernate.connection.password" value="${org.salgar.ds.pass}" />
			<!-- property name="hibernate.connection.url"
				value="jdbc:h2:data/micro" /-->
			<property name="hibernate.connection.username" value="root" />
			<property name="hibernate.show_sql" value="true" />
			<property name="hibernate.dialect" value="org.hibernate.dialect.MySQLDialect" />
			<property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.JBossAppServerJtaPlatform"/>
		</properties>
	</persistence-unit>
</persistence>

snippet 82

and “applicationContext-dao.xml” to get the datasource for MySQL.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jee="http://www.springframework.org/schema/jee"
	xmlns:context="http://www.springframework.org/schema/context"
	xmlns:tx="http://www.springframework.org/schema/tx"
	xsi:schemaLocation="
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
        http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd">
        
        <jee:jndi-lookup id="datasource" jndi-name="java:jboss/datasources/microMySqlDS" />
        
        <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
        	<property name="jtaDataSource" ref="datasource" />
        </bean>
        
        <!-- bean id="txManager" class="org.springframework.orm.jpa.JpaTransactionManager">
        	<property name="entityManagerFactory" ref="entityManagerFactory" />
        </bean-->
        <bean id="txManager" class="org.springframework.transaction.jta.JtaTransactionManager"/>
        
        <tx:annotation-driven transaction-manager="txManager" />
</beans>

snippet 83

Appendix B
Netflix Eureka, Zuul and Hystrix use the RabbitMQ message broker to communicate with each other, to prevent the excessive load that would be caused by synchronous communication. The 'product-process' project uses these libraries extensively for the orchestration layer, and for this reason it also checks the health of RabbitMQ. If RabbitMQ is not installed or not running, 'product-process' will mark itself as down in Netflix Eureka because of this health check; for this reason we have to install RabbitMQ.

You can download RabbitMQ from the following URL.

After the installation, you can start RabbitMQ from the 'sbin' directory of the installation location with the following command.

rabbitmq-server.bat

snippet 84

This entry was posted in AspectJ, Eureka, Hystrix, JBoss, JSR376, Maven, Netflix, Ribbon, Software Development, Spring, Zuul. Bookmark the permalink.

5 Responses to Micro Services – Fan Out, Transaction Problems and Solutions with Spring Boot/JBoss and Netflix Eureka

  1. William says:

    Hi Salgar, this is an excellent article. It seems that a very good solution to solve the distributed transaction issue and provide certain level of feasibility.
    But I’m not quite understand what kind of problem will be encountered if not using Jboss? e.g. using Websphere. According to your solution, Mbean , Eureka etc are those standard , off-the shelves library/framework , so isn’t it your solution should be easily achieved when porting to other application server ?
    Thanks for your explanation.

    • You can use all application servers because they all support JMX\MBeans, it is no problem. The thing with JBoss is that it is more natural, with its Micro Kernel architecture and its deployment unit, the SAR (Service Archive). If you want to follow this concept with WebSphere, WebLogic, etc., every Micro Service has to be deployed as a WAR (Web Archive) or EAR (Enterprise Archive). Is it the end of the world? No, but it is an annoyance.

      • William says:

        Thanks for your prompt reply. But I don’t understand the difference. It seems to me that
        1) if it is packaged as SAR, each microservice are packed into separate SAR. This is similar to the case when using WAR/EAR. am I correct ?
        2) Does it mean that if follow this concept in Websphere and packed as EAR, it don’t have issue of ClassCastException ? Is this a problem caused by SAR in JBOSS only ? In case we use EAR in JBOSS, should be able to avoid this exception as well ?

  2. 1) Fundamentally yes, but the JBoss SAR has some additional deployment structure definitions which let you define explicitly which libraries you would like to load, plus you can define which SAR depends on other SARs. Let's say you have order-xxxx.SAR and customer-xxxx.SAR and you define that order depends on customer; JBoss then guarantees that customer-xxxx.SAR is deployed before order-xxxx.SAR. I don't know of such functionality for WARs and EARs, which can be an annoyance.

    2) At the moment, the only application server I know of that has Modular Classloading is JBoss. Sure, with the release of Java 9 every one of them will have it, but in the Java 8 world, with application servers like WebSphere, WebLogic, etc., you would have no other choice than placing, for example, the API projects into the server lib directory. It will work, but I don't find it very elegant....

    I hope this helps…

  3. Pingback: Micro Services – Fan Out, Transaction Problems and Solutions with Spring Boot/JBoss and Netflix Eureka | Mehmet Salgar’s Blog | Microservices
