Enterprise Application Monitoring in production with OverOps

In this article we will discuss OverOps, which monitors applications and provides insights into exceptions, including the code and the variable state that caused them. In most traditional logging setups with Splunk, ELK, or any other log-aggregation tool, we capture the exception stack trace to troubleshoot an issue. But with the stack trace alone, finding the root cause and fixing it is hard and time consuming. If you attach the OverOps agent to your application, then along with the exception it will show the exact source code where the exception happened, plus the variable state and JVM state at that time. OverOps supports the platforms below.

  • Java
  • Scala
  • Clojure
  • .Net

It also provides integration with existing log and performance monitoring tools like Splunk, ELK, NewRelic, AppDynamics etc…

In this article I will show you how to configure OverOps for a standalone Java application. You can get a trial version of the OverOps agent by registering and downloading the build for your operating system. We can go with either an on-premise or a SaaS-based solution. To demonstrate this, I created a sample Spring Boot application and launched the jar with the OverOps agent attached, to monitor exceptions thrown by a certain business rule. In a real enterprise application the business logic will be critical, and runtime exceptions will be raised unpredictably.

java -agentlib:TakipiAgent -jar Sample_Over_Ops-0.0.1-SNAPSHOT.jar

When I accessed a REST endpoint of the above application, it generated exceptions, and these were captured in the OverOps dashboard as shown below.

OverOps Dashboard

The sample application is available here.

Happy Monitoring!!

Posted in DevOps

Distributed Logging Architecture for Microservices


In this article we will see the best practices to follow while logging microservices, and an architecture to handle distributed logging in the microservices world. As we all know, microservices run on multiple hosts. To fulfill a single business request we might need to talk to multiple services running on different machines, so the log messages generated by the microservices are distributed across multiple hosts. As a developer or administrator, if you want to troubleshoot an issue you are clueless, because you don't know which host ran the microservice that served your request. Even if you know which hosts served it, going to each host, grepping the logs, and correlating them across all the microservice calls is a cumbersome process. If your environment is auto-scaled, then troubleshooting an issue is unimaginable. Here are some practices that will make it easier to troubleshoot issues in the microservices world.

  • Centralize and externalize storage of your logs

As the microservices are running on multiple hosts, send all the logs generated across those hosts to an external, centralized place. From there you can easily get the log information in one place. It might be another highly available physical system, an S3 bucket, or some other storage. If you are hosting your environment on AWS you can very well leverage CloudWatch; other cloud providers offer comparable services.

  • Log structured data

    Generally we put raw text log messages into log files. There are log encoders available that emit JSON log messages instead. Add all the necessary fields to the logs, so that the right data is available to troubleshoot any issue. Below are some useful links on configuring JSON appenders.
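As a rough illustration of structured logging, here is a minimal sketch of a JSON formatter for java.util.logging. This is purely illustrative: real projects would use an existing encoder such as logstash-logback-encoder rather than hand-rolling JSON, and the field names here are assumptions.

```java
import java.util.logging.Formatter;
import java.util.logging.LogRecord;

// Minimal JSON log formatter sketch. It emits one JSON object per record
// so that fields like level and logger become searchable in the aggregator.
class JsonLogFormatter extends Formatter {
    @Override
    public String format(LogRecord record) {
        return String.format(
            "{\"timestamp\":%d,\"level\":\"%s\",\"logger\":\"%s\",\"message\":\"%s\"}%n",
            record.getMillis(),
            record.getLevel(),
            record.getLoggerName(),
            record.getMessage());
    }
}
```

A formatter like this would be attached to a handler; the point is simply that every log line becomes a structured record instead of free text.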



 If you are using Logstash as the log-aggregation tool, there are encoders you can configure to output JSON log messages.


  • Generate a correlation Id, pass the same correlation Id to the downstream services, and return it as part of the response

 Generate a correlation Id when making the first microservice call and pass that same correlation Id to the downstream services. Log the correlation Id across all the microservice calls. Then we can use the correlation Id coming back in the response to trace out the logs.
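The correlation-id handling described above can be sketched in plain Java. In a real service this logic would live in an HTTP filter or interceptor and the id would travel as a request header (commonly something like X-Correlation-Id); the class and method names here are illustrative assumptions.

```java
import java.util.UUID;

// Sketch: hold the correlation id for the current request thread.
// Reuse the id arriving from the caller, or mint one on the first hop.
class CorrelationId {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    static String getOrCreate(String incoming) {
        String id = (incoming != null) ? incoming : UUID.randomUUID().toString();
        CURRENT.set(id);
        return id;
    }

    // Loggers and outbound calls read the id from here.
    static String current() {
        return CURRENT.get();
    }
}
```

Every log statement and every downstream call made while serving the request would include `CorrelationId.current()`, so one id ties the whole call chain together.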

If you are using Spring Cloud to develop microservices, you can use the Spring Sleuth module along with Zipkin.

  • Allow the logging level to be changed dynamically, and use asynchronous logging

We will use different log levels in the code and have enough logging statements. If we have the liberty to change the log level dynamically, it is very helpful to enable the appropriate level on demand. This way we don't need to run at the most verbose level from server startup, and we avoid the overhead of excessive logging. Also add asynchronous log appenders, so that the logger thread does not block the request thread. If you are using Spring Cloud, use Spring Boot Admin to change the log level dynamically.
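Changing a log level at runtime can be illustrated with java.util.logging from the standard library; in a Spring Boot application the same effect comes from the /loggers actuator endpoint or Spring Boot Admin's UI. Logger and method names here are illustrative.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch: flip a logger's level on a running JVM without a restart,
// e.g. to enable debug output only while investigating an issue.
class DynamicLogLevel {
    static Level raiseToFine(String loggerName) {
        Logger logger = Logger.getLogger(loggerName);
        logger.setLevel(Level.FINE);  // enable debug-level output on the fly
        return logger.getLevel();
    }
}
```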

  • Make logs searchable

Make all the fields available in the logs searchable. For example, if you get hold of the correlation Id, you can search all the logs by that correlation Id to find out the request flow.

Now we will see the architecture of log management in the microservices world. This solution uses the ELK stack. Generally we have different log configurations for different environments. For the development environment we go with console or file appenders that output the logs on the local host; this is easy and convenient during development. For other environments we send the logs to a centralized place. The architecture we are going to discuss is for QA and higher environments.

Distributed Logging Architecture

In the above architecture we configure a Kafka log appender to send the log messages to a Kafka cluster. From the Kafka cluster the messages are ingested into Logstash. While ingesting the log messages into Logstash we can transform the information as required. The output of Logstash is stashed into Elasticsearch. Using the Kibana visualization tool we can search the indexed logs by the parameters we logged. Remember, we can use RabbitMQ, ActiveMQ, or other message brokers instead of Kafka. Below are some useful links on appenders.





In the second option, given below, we write the log messages using a Logstash appender to a file on the host machine. A Filebeat agent watches the log files and ingests the log information into the Logstash cluster.

Distributed Logging Architecture

Among the first and second options, my choice goes to first option. Below are my justifications.

  • If the system is highly scalable, with auto scaling, instances will be created and destroyed based on need. In that case, with the second option, log files may be lost when a host is destroyed. With the first option, the message reaches the middleware as soon as it is logged, so it is a perfect fit for auto-scaled environments.
  • With the second option we install Filebeat or a similar file watcher on each host machine. If those agents stop working for some reason, we may not get the logs from those hosts, and again we lose log information.

In the coming articles we will discuss more about microservices. Till then, stay tuned!!!

Posted in Microservices

Spring Boot Admin – Admin UI for administration of spring boot applications

As part of microservices development, many of us are using Spring Boot along with Spring Cloud features. In the microservices world we will have many Spring Boot applications running on the same or different hosts. If we add Spring Actuator to a Spring Boot application, we get a lot of out-of-the-box endpoints to monitor and interact with the application. The list is given below.

ID          | Description                                                                 | Sensitive (default)
------------|-----------------------------------------------------------------------------|--------------------
actuator    | Provides a hypermedia-based "discovery page" for the other endpoints. Requires Spring HATEOAS to be on the classpath. | true
auditevents | Exposes audit events information for the current application.                | true
autoconfig  | Displays an auto-configuration report showing all auto-configuration candidates and the reason why they 'were' or 'were not' applied. | true
beans       | Displays a complete list of all the Spring beans in your application.        | true
configprops | Displays a collated list of all @ConfigurationProperties.                    | true
dump        | Performs a thread dump.                                                      | true
env         | Exposes properties from Spring's ConfigurableEnvironment.                    | true
flyway      | Shows any Flyway database migrations that have been applied.                 | true
health      | Shows application health information (when the application is secure, a simple 'status' when accessed over an unauthenticated connection, or full message details when authenticated). | false
info        | Displays arbitrary application info.                                         | false
loggers     | Shows and modifies the configuration of loggers in the application.          | true
liquibase   | Shows any Liquibase database migrations that have been applied.              | true
metrics     | Shows 'metrics' information for the current application.                     | true
mappings    | Displays a collated list of all @RequestMapping paths.                       | true
shutdown    | Allows the application to be gracefully shut down (not enabled by default).  | true
trace       | Displays trace information (by default the last 100 HTTP requests).          | true

The above endpoints provide a lot of insight into a Spring Boot application. But if you have many applications running, then monitoring each application by hitting the endpoints and inspecting the JSON responses is a tedious process. To avoid this hassle, the codecentric team came up with the Spring Boot Admin module, which provides an Admin UI dashboard to administer Spring Boot applications. This module crunches the data from the Actuator endpoints and provides insights about all the registered applications in a single dashboard. Now we will demonstrate the Spring Boot Admin features in the following sections.

As a first step, create a Spring Boot application which we will make the Spring Boot Admin server module by adding the below Maven dependencies.
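The dependency snippet did not survive extraction. For the Spring Boot Admin 1.x generation this post appears to use, the server-side dependencies would look roughly like the following (the version number is an assumption; match it to your Spring Boot Admin release):

```xml
<dependency>
    <groupId>de.codecentric</groupId>
    <artifactId>spring-boot-admin-server</artifactId>
    <version>1.5.7</version> <!-- illustrative version -->
</dependency>
<dependency>
    <groupId>de.codecentric</groupId>
    <artifactId>spring-boot-admin-server-ui</artifactId>
    <version>1.5.7</version> <!-- illustrative version -->
</dependency>
```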

Add the Spring Boot Admin server configuration by adding @EnableAdminServer to your configuration class.
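The configuration class referenced above is missing from the extracted text; a typical entry point would look roughly like this (class and package names are illustrative, and this sketch requires the Spring Boot Admin dependencies on the classpath):

```java
// Illustrative admin-server entry point; not runnable without
// the spring-boot-admin-server dependency on the classpath.
@Configuration
@EnableAutoConfiguration
@EnableAdminServer
public class AdminServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(AdminServerApplication.class, args);
    }
}
```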

Let us create more Spring Boot applications to be monitored by the Spring Boot Admin server created in the above steps. All the Spring Boot applications we create now will act as Spring Boot Admin clients. To make an application an Admin client, add the below dependency along with the Actuator dependency. For this demo I created three applications: a Eureka server, a customer service, and an order service.
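The client dependency is also missing from the extracted text; for Spring Boot Admin 1.x it would be roughly the following (version illustrative):

```xml
<dependency>
    <groupId>de.codecentric</groupId>
    <artifactId>spring-boot-admin-starter-client</artifactId>
    <version>1.5.7</version> <!-- illustrative version -->
</dependency>
```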

Add the below property to the application.properties file. This property tells the client where the Spring Boot Admin server is running, so that the client can register with the server.
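The property itself is missing from the extracted text; in Spring Boot Admin 1.x it would be along these lines (host and port taken from this article's example setup):

```properties
# Illustrative SBA 1.x client property: where the Admin server lives
spring.boot.admin.url=http://localhost:1111
```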

Now, if we start the Admin server and the other Spring Boot applications, we can see all the Admin clients' information in the Admin server dashboard. As we started our Admin server on port 1111 in this example, the dashboard is at http://<host_name>:1111. Below is a screenshot of the Admin server UI.

A detailed view of an application is given below. In this view we can see the tail of the log file, metrics, environment variables, and the log configuration, where we can dynamically switch log levels at the component, package, or root level, along with other information.

Now we will look at another Spring Boot Admin feature: notifications. It notifies the administrators when an application's status goes DOWN or comes back UP. Spring Boot Admin supports the below channels to notify the user.

  • Email Notifications
  • Pagerduty Notifications
  • Hipchat Notifications
  • Slack Notifications
  • Let’s Chat Notifications

In this article we will configure Slack notifications. Add the below properties to the Spring Boot Admin server's application.properties file.
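The Slack properties did not survive extraction; for the Spring Boot Admin 1.x Slack notifier they would be roughly as follows, with the webhook URL coming from your own Slack incoming-webhook integration (the URL shown is a placeholder):

```properties
# Illustrative SBA 1.x Slack notifier configuration
spring.boot.admin.notify.slack.webhook-url=https://hooks.slack.com/services/YOUR/WEBHOOK/URL
```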

Since we are managing all the applications with Spring Boot Admin, we need to secure the Spring Boot Admin UI with a login feature. Let us enable login on the Spring Boot Admin server; here I am going with basic authentication. Add the below Maven dependencies to the Admin server module.
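The security dependencies are missing from the extracted text; for Spring Boot Admin 1.x with basic authentication they would be roughly the following (versions illustrative):

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>
<dependency>
    <groupId>de.codecentric</groupId>
    <artifactId>spring-boot-admin-server-ui-login</artifactId>
    <version>1.5.7</version> <!-- illustrative version -->
</dependency>
```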

Add the below properties to the application.properties file.
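The properties themselves are missing from the extracted text; with Spring Boot 1.x security, basic-auth credentials would look roughly like this (credentials are placeholders):

```properties
# Illustrative Spring Boot 1.x basic-auth credentials for the Admin UI
security.user.name=admin
security.user.password=admin123
```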

As we added security to the Admin server, the Admin clients must authenticate when connecting to the server. Hence, add the below properties to each Admin client's application.properties file.
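These client-side properties are also missing; in Spring Boot Admin 1.x the client would present the server's credentials roughly like this (values are placeholders matching the server sketch above):

```properties
# Illustrative SBA 1.x client credentials for a secured Admin server
spring.boot.admin.username=admin
spring.boot.admin.password=admin123
```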

There are additional UI features, like the Hystrix and Turbine UIs, which we can enable on the dashboard. You can find more details here. The sample code created for this demonstration is available on GitHub.

Posted in Spring, Spring Boot

HTTP/2 multiplexing and server push


In this article we will see the main features of the HTTP/2 specification. Up to HTTP/1, request and response processing between the client and server was simplex: the client sends a request, the server processes it and sends the response back, and only then does the client send another request. If any request is blocked, all the other requests take a performance hit. This biggest issue was tackled by introducing request pipelining in HTTP/1.1. With pipelining, requests are sent to the server in order; the server processes the multiple requests and sends the responses back to the client in the same order. Here, again, the client-server communication is simplex. The below diagram depicts the client-server communication with HTTP/1.0 and HTTP/1.1.

http/1 request processing

Up to HTTP/1.1, requests and responses are composed in text format, and multiple TCP connections are used per origin. The issues of multiple TCP connections per origin, the text format, and simplex communication are all addressed in HTTP/2. Now we will see how HTTP/2 processes requests and responses.

http2 request processing

HTTP/2 uses a binary protocol to exchange data. It opens a single connection per origin, and the same TCP connection is used to process multiple requests. Each request is associated with a stream, and the request is divided into multiple frames. Each frame carries the identifier of the stream it belongs to. The client sends frames belonging to multiple streams to the server asynchronously, and the server processes frames belonging to multiple streams and sends the responses back asynchronously. The client reassembles the responses based on the stream identifiers. Here the communication between the client and server happens simultaneously, without blocking.
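The multiplexing behaviour described above can be exercised from Java's built-in HttpClient, which shipped in later JDKs (Java 11+), after this article's timeframe. This sketch only constructs the client and a request, with no network call; the URL is illustrative.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;

// Sketch: a Java 11+ HttpClient that prefers HTTP/2. All requests sent
// through this client to the same origin share one TCP connection,
// multiplexed over streams as described above.
class Http2ClientSketch {
    static HttpClient newHttp2Client() {
        return HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)  // prefer h2, fall back to HTTP/1.1
                .build();
    }

    static HttpRequest newRequest(String url) {
        return HttpRequest.newBuilder(URI.create(url)).GET().build();
    }
}
```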

Another HTTP/2 feature is server push. When a client requests a resource, the server pushes additional resources along with the requested one, so the client can cache them. This enhances performance, as the client cache is warmed up with the content.

http/2 server push

To learn more about HTTP/2, go through the below links.




Posted in General

Java 9 : Convenience Factory Methods to create immutable Collections


In this article we will see another JDK 9 feature, for creating immutable collections. Until Java 8, if we wanted to create immutable collections, we used to call the unmodifiableXXX() methods on the java.util.Collections class. For example, to create an unmodifiable list, we would write the below code.
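The original snippet did not survive extraction; the pre-Java-9 approach it describes would look roughly like this (the list contents are illustrative):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Pre-Java-9 way: build a mutable list, populate it, then wrap it
// in an unmodifiable view.
class UnmodifiableListExample {
    static List<String> fruits() {
        List<String> list = new ArrayList<>();
        list.add("apple");
        list.add("banana");
        list.add("cherry");
        return Collections.unmodifiableList(list);
    }
}
```

Any attempt to mutate the returned list throws UnsupportedOperationException.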

The above code is too verbose for creating a simple unmodifiable List. As Java is adopting a functional programming style, Java 9 came up with convenient, more compact factory methods for creating unmodifiable collections, under JEP 269. Let us see how that works.

Create Empty List:

Create Non-Empty List:

Create Non-Empty Map:
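The snippets for the three cases above were lost in extraction; with the JEP 269 factory methods they would look roughly like this (values are illustrative):

```java
import java.util.List;
import java.util.Map;

// Java 9+ convenience factory methods (JEP 269): each call returns
// an immutable collection in a single expression.
class FactoryMethodsExample {
    static final List<String> EMPTY = List.of();                          // empty list
    static final List<String> FRUITS = List.of("apple", "banana", "cherry"); // non-empty list
    static final Map<String, Integer> STOCK = Map.of("apple", 10, "banana", 5); // non-empty map
}
```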

If you look at the above Java 9 factory methods, the code is a simple one-liner for creating immutable collections. In the coming article we will see another Java 9 feature. Till then, stay tuned!!!

Posted in Java

jshell: The Java Shell (Read-Eval-Print Loop)


In this article we will discuss jshell (the Java Shell), a Java 9 feature. We can explore jshell with the JDK 9 Early Access release; as of now, the general availability of JDK 9 is scheduled for 27th July, 2017. The jshell feature is proposed as part of JEP 222. The motivation behind jshell is to provide an interactive command-line tool to explore the features of Java quickly. It is a very useful tool for new learners to get a glimpse of Java features quickly. Java has already been incorporating functional programming features inspired by Scala; in the same direction, a REPL (Read-Eval-Print Loop) interactive shell was wanted for Java, like those of Scala, Ruby, JavaScript, Haskell, Clojure, and Python.

The jshell tool is a command-line tool with features like a history of statements with editing, tab completion, automatic addition of needed terminal semicolons, and configurable predefined imports.

After downloading JDK 9, set the PATH variable to access jshell. Now we will see how to use jshell. Below is a simple program using jshell; we don't need to write a class with a public static void main(String[] args) method to run a simple hello-world application.

Now we will write a method which adds two variables, and invoke the method via jshell.

Now we will create a method using the StringBuilder class; thanks to jshell's predefined imports, common classes are available without writing explicit import statements.
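The original post's jshell screenshots did not survive extraction; an illustrative session covering the three examples above might look like this (the `$n` result numbering varies by session):

```
jshell> System.out.println("Hello, jshell!")
Hello, jshell!

jshell> int add(int a, int b) { return a + b; }
|  created method add(int,int)

jshell> add(2, 3)
$3 ==> 5

jshell> String reversed(String s) { return new StringBuilder(s).reverse().toString(); }
|  created method reversed(String)

jshell> reversed("jshell")
$5 ==> "llehsj"
```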

I hope you enjoyed the jshell feature. In the next article we will see another JDK 9 feature. Till then, stay tuned!!!

Posted in Java

Software Architecture – Hexagonal Architecture Pattern


In this article we will see the “Hexagonal Architecture” pattern, also known as the “Ports and Adapters” pattern. As developers, so far we have created applications with tiered architecture styles like MVC (Model-View-Controller). With these architectural styles we were able to decouple the domain logic from other functionality, up to a certain extent. At times, though, the domain logic leaks into the UI or other layers, and as the core logic leaks out, the impact of a code change has a ripple effect on other modules. To avoid this we can go with the “Ports and Adapters” architecture style.

In the “Ports and Adapters” style, the domain logic is the core, sitting at the innermost layer. The ports are the interfaces the core exposes to the outside world, and the adapters hold the application logic that translates an outside request into an interface the core understands, connecting external services to the ports. The below diagram depicts this architectural style.
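The relationship between core, port, and adapter can be sketched in a few lines of Java; all names here are illustrative, not from the original article.

```java
// Port: an interface owned by the core.
interface PriceRepository {
    double priceOf(String sku);
}

// Core domain logic: depends only on the port, never on any technology.
class CheckoutService {
    private final PriceRepository prices;
    CheckoutService(PriceRepository prices) { this.prices = prices; }

    double total(String sku, int quantity) {
        return prices.priceOf(sku) * quantity;
    }
}

// Adapter: plugs an external technology (here an in-memory map) into the
// port. A JDBC or REST adapter would implement the same interface, and
// the core would not change at all.
class InMemoryPriceRepository implements PriceRepository {
    private final java.util.Map<String, Double> table = java.util.Map.of("book", 12.5);
    public double priceOf(String sku) { return table.getOrDefault(sku, 0.0); }
}
```

Because the core only sees the port, swapping the adapter (database, REST client, test double) requires no change to the domain logic, which is exactly the decoupling this style promises.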


The advantages of this style are that the core logic is abstracted from the outside world, and the code is decoupled via the adapters. We can add or remove functionality easily, and we can test the core logic and the adapters in isolation.

In the coming article we will see what the microservices architectural style is, and what the driving factors behind it are. Till then, stay tuned!!!

Posted in Design Patterns
