Let’s Talk About a Kafka Cluster Without ZooKeeper (KIP-500)

Right now, Apache Kafka® uses Apache ZooKeeper™ to store its metadata: information such as partition assignments, topic configurations, and access control lists is kept in a ZooKeeper cluster. Managing a ZooKeeper cluster places an additional burden on the infrastructure and on the admins. With KIP-500, we are going to see a Kafka cluster without a ZooKeeper cluster, where metadata management is handled by Kafka itself.

Before KIP-500, a Kafka setup looks like the one depicted below. Here we have a 3-node ZooKeeper cluster and a 4-node Kafka cluster. This setup is the minimum for sustaining one Kafka broker failure. The orange Kafka node is the controller node.

Let us see what issues the above setup has because of the involvement of ZooKeeper.

  • The ZooKeeper cluster itself must be made highly available, because without it the Kafka cluster is dead.
  • Availability of the Kafka cluster suffers if the controller dies. Electing another Kafka broker as the controller requires pulling the metadata from ZooKeeper, and the Kafka cluster is unavailable while that happens. The more topics and partitions per topic, the longer the controller failover takes.
  • Kafka supports intra-cluster replication for higher availability and durability. There are multiple replicas of a partition, each stored on a different broker. One of the replicas is designated as the leader and the rest are followers. If a broker fails, partitions with a leader on that broker temporarily become inaccessible. To continue serving client requests, Kafka automatically transfers leadership of those partitions to other replicas. This is done by the Kafka broker that is acting as the controller, which must fetch metadata from ZooKeeper for each affected partition. Because this communication with ZooKeeper happens serially, the affected partitions stay unavailable longer when the leader broker dies.
  • When we create or delete a topic, the Kafka cluster needs to talk to ZooKeeper for the updated list of topics, so it takes time before the creation or deletion is visible across the cluster.
  • Taken together, the major issue we see is SCALABILITY of metadata management.

Let’s see what the Kafka cluster looks like post KIP-500. Below is the Kafka cluster setup.

If you look at the post-KIP-500 setup, the metadata is stored in the Kafka cluster itself. Consider that quorum of nodes as a controller cluster. The controller marked in orange is the active controller and the other nodes are standby controllers. All the nodes in the controller quorum stay in sync, so when the active controller fails, electing a standby node as the new controller is very quick because it doesn’t require syncing the metadata. The brokers in the Kafka cluster periodically pull the metadata from the controller. This design means that when a new controller is elected, we never need to go through a lengthy metadata loading process.

KIP-500 will also speed up topic creation and deletion. Currently, creating or deleting a topic requires fetching the full list of topics from the ZooKeeper metadata. Post KIP-500, only a new entry needs to be appended to the metadata partition, which speeds up topic creation and deletion. This improves metadata scalability, which eventually improves the overall SCALABILITY of Kafka. Note that the admin API stays the same for applications either way, as the sketch below shows.
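
Whether the metadata lives in ZooKeeper or in the KIP-500 controller quorum, applications keep using the same client API. Below is a minimal sketch of creating and deleting a topic with the Kafka AdminClient; the topic name, partition count, and broker address are assumptions for illustration.

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class TopicLifecycleDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // Create a topic with 3 partitions and a replication factor of 3
            admin.createTopics(Collections.singletonList(new NewTopic("demo-topic", 3, (short) 3)))
                 .all().get();

            // Delete the topic again; post KIP-500 this only appends an entry to the metadata log
            admin.deleteTopics(Collections.singletonList("demo-topic"))
                 .all().get();
        }
    }
}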

In the future, I want to see the elimination of the separate controller quorum so that, eventually, the metadata is managed entirely within the actual Kafka cluster. That reduces the burden on the infrastructure and on the administrators even further. We will meet again with another topic. Until then, Happy Messaging!!

Posted in Apache Kafka

Building a 12-factor principle application with AWS and Microsoft Azure


In this article, I want to map the 12-factor principles to the services available on AWS and Microsoft Azure.

  • Codebase: One codebase tracked in revision control, many deploys
    AWS: AWS CodeCommit
    Azure: Azure Repos
  • Dependencies: Explicitly declare and isolate dependencies
    AWS: AWS S3
    Azure: Azure Artifacts
  • Config: Store config in the environment
    AWS: AWS AppConfig
    Azure: App Configuration
  • Backing services: Treat backing services as attached resources
    AWS: Amazon RDS, DynamoDB, S3, EFS and Redshift; messaging/queueing systems (SNS/SQS, Kinesis); SMTP services (SES); caching systems (ElastiCache)
    Azure: Azure Cosmos DB, SQL databases, Storage accounts; messaging/queueing systems (Service Bus/Event Hubs); SMTP services; caching systems (Azure Cache for Redis)
  • Build, release, run: Strictly separate build and run stages
    AWS: AWS CodeBuild, AWS CodePipeline
    Azure: Azure Pipelines
  • Processes: Execute the app as one or more stateless processes
    AWS: Amazon ECS services, Amazon Elastic Kubernetes Service
    Azure: Container services, Azure Kubernetes Service (AKS)
  • Port binding: Export services via port binding
    AWS: Amazon ECS services, Amazon Elastic Kubernetes Service
    Azure: Container services, Azure Kubernetes Service (AKS)
  • Concurrency: Scale out via the process model
    AWS: Amazon ECS services, Amazon Elastic Kubernetes Service, Application Auto Scaling
  • Disposability: Maximize robustness with fast startup and graceful shutdown
    AWS: Amazon ECS services, Amazon Elastic Kubernetes Service, Application Auto Scaling
  • Dev/prod parity: Keep development, staging, and production as similar as possible
    AWS: AWS CloudFormation
    Azure: Azure Resource Manager
  • Logs: Treat logs as event streams
    AWS: Amazon CloudWatch, AWS CloudTrail
    Azure: Azure Monitor
  • Admin processes: Run admin/management tasks as one-off processes
    AWS: Amazon Simple Workflow Service (SWF)
    Azure: Logic Apps
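
To make one of these factors concrete, the Config principle means the same build reads its settings from the environment in every deploy. Below is a minimal Java sketch; the variable names are assumptions for illustration.

public class EnvConfigDemo {
    public static void main(String[] args) {
        // Read configuration from the environment instead of hard-coding it,
        // so the same artifact runs unchanged in dev, staging, and prod.
        String dbUrl = System.getenv().getOrDefault("DATABASE_URL", "jdbc:postgresql://localhost:5432/app");
        String apiKey = System.getenv("PAYMENT_API_KEY"); // assumed variable name

        System.out.println("Connecting to " + dbUrl);
        if (apiKey == null) {
            System.out.println("PAYMENT_API_KEY is not set for this environment");
        }
    }
}
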
Posted in Cloud Computing

Containerizing Spring Boot Applications with Buildpacks


In this article, we will see how to containerize Spring Boot applications with Buildpacks. In one of the previous articles, I discussed Jib, which allows us to build any Java application as a Docker image without a Dockerfile. Starting with Spring Boot 2.3, we can containerize a Spring Boot application as a Docker image directly, as Buildpacks support is natively added to Spring Boot. With Buildpacks support, any Spring Boot 2.3 (and above) application can be containerized without a Dockerfile. I will show you how to do that with a sample Spring Boot application by following the steps below.

Step 1: Make sure that you have installed Docker.

Step 2: Create a Spring Boot application using Spring Boot version 2.3 or above. Below is the Maven configuration of the application.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.3.0.RELEASE</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<groupId>org.smarttechie</groupId>
<artifactId>spingboot-demo-buildpacks</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>spingboot-demo-buildpacks</name>
<description>Demo project for Spring Boot</description>
<properties>
<java.version>1.8</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>org.junit.vintage</groupId>
<artifactId>junit-vintage-engine</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<!-- Configuration to push the image to our own Dockerhub repository -->
<configuration>
<image>
<name>docker.io/2013techsmarts/${project.artifactId}:latest</name>
</image>
</configuration>
</plugin>
</plugins>
</build>
</project>

If you want to use Gradle, here is the Spring Boot Gradle plugin.

Step 3: I have added a simple controller to test the application once we run the Docker container of our Spring Boot app. Below is the controller code.

package org.smarttechie.controller;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DemoController {

    @GetMapping
    public String hello() {
        return "Welcome to the Springboot Buildpacks!!. Get rid of Dockerfile hassles.";
    }
}

Step 4: Go to the root folder of the application and run the command below to generate the Docker image. By default, Buildpacks uses the artifact id and version from the pom.xml to name the Docker image; in our pom.xml we also configured an explicit image name for Docker Hub.

./mvnw spring-boot:build-image

Step 5: Let’s run the created Docker container image and test our rest endpoint.

docker run -d -p 8080:8080 --name springbootcontainer spingboot-demo-buildpacks:0.0.1-SNAPSHOT

Once the container is up, hitting http://localhost:8080/ returns the response from our controller: “Welcome to the Springboot Buildpacks!!. Get rid of Dockerfile hassles.”

Step 6: Now you can publish the Docker image to Docker Hub by using the command below.

docker push docker.io/2013techsmarts/spingboot-demo-buildpacks

Here are some of the references if you want to deep dive into this topic.

  1. Cloud Native Buildpacks Platform Specification.
  2. Buildpacks.io
  3. Spring Boot 2.3.0.RELEASE Maven plugin documentation
  4. Spring Boot 2.3.0.RELEASE Gradle plugin documentation

That’s it. We have packaged a Spring Boot application as a Docker image with Maven/Gradle configuration. The source code of this article is available on GitHub. We will connect on another topic. Till then, Happy Learning!!

Posted in Containers, Java, Spring, Spring Boot

Build Reactive REST APIs with Spring WebFlux – Part3


In continuation of the last article, we will see an application that exposes reactive REST APIs. In this application, we used:

  • Spring Boot with WebFlux
  • Spring Data for Cassandra with Reactive Support
  • Cassandra Database

Below is the high-level architecture of the application.

High-level architecture: a Spring Boot WebFlux application using Spring Data Reactive Cassandra, backed by a Cassandra database.

Let us look at the build.gradle file to see what dependencies are included to work with Spring WebFlux.

plugins {
    id 'org.springframework.boot' version '2.2.6.RELEASE'
    id 'io.spring.dependency-management' version '1.0.9.RELEASE'
    id 'java'
}

group = 'org.smarttechie'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '1.8'

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-data-cassandra-reactive'
    implementation 'org.springframework.boot:spring-boot-starter-webflux'
    testImplementation('org.springframework.boot:spring-boot-starter-test') {
        exclude group: 'org.junit.vintage', module: 'junit-vintage-engine'
    }
    testImplementation 'io.projectreactor:reactor-test'
}

test {
    useJUnitPlatform()
}


In this application, I have exposed the APIs listed below. You can download the source code from GitHub.

  • Create a product: POST /product, returns the created product as Mono
  • Get all products: GET /products, returns all products as Flux
  • Delete a product: DELETE /product/{id}, returns Mono<Void>
  • Update a product: PUT /product/{id}, returns the updated product as Mono

The product controller code with all the above endpoints is given below.

package org.smarttechie.controller;

import org.smarttechie.model.Product;
import org.smarttechie.repository.ProductRepository;
import org.smarttechie.service.ProductService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@RestController
public class ProductController {

    @Autowired
    private ProductService productService;

    /**
     * This endpoint allows to create a product.
     * @param product - to create
     * @return - the created product
     */
    @PostMapping("/product")
    @ResponseStatus(HttpStatus.CREATED)
    public Mono<Product> createProduct(@RequestBody Product product) {
        return productService.save(product);
    }

    /**
     * This endpoint gives all the products.
     * @return - the list of products available
     */
    @GetMapping("/products")
    public Flux<Product> getAllProducts() {
        return productService.getAllProducts();
    }

    /**
     * This endpoint allows to delete a product.
     * @param id - to delete
     * @return
     */
    @DeleteMapping("/product/{id}")
    public Mono<Void> deleteProduct(@PathVariable int id) {
        return productService.deleteProduct(id);
    }

    /**
     * This endpoint allows to update a product.
     * @param product - to update
     * @return - the updated product
     */
    @PutMapping("product/{id}")
    public Mono<ResponseEntity<Product>> updateProduct(@RequestBody Product product) {
        return productService.update(product);
    }
}


As we are building reactive APIs, we can also build them in a functional programming style without using RestController. In that case, we need a router and a handler component, as shown below.

package org.smarttechie.router;

import org.smarttechie.handler.ProductHandler;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.MediaType;
import org.springframework.web.reactive.function.server.RouterFunction;
import org.springframework.web.reactive.function.server.RouterFunctions;
import org.springframework.web.reactive.function.server.ServerResponse;

import static org.springframework.web.reactive.function.server.RequestPredicates.*;

@Configuration
public class ProductRouter {

    /**
     * The router configuration for the product handler.
     * @param productHandler
     * @return
     */
    @Bean
    public RouterFunction<ServerResponse> productsRoute(ProductHandler productHandler) {
        return RouterFunctions
                .route(GET("/products").and(accept(MediaType.APPLICATION_JSON)), productHandler::getAllProducts)
                .andRoute(POST("/product").and(accept(MediaType.APPLICATION_JSON)), productHandler::createProduct)
                .andRoute(DELETE("/product/{id}").and(accept(MediaType.APPLICATION_JSON)), productHandler::deleteProduct)
                .andRoute(PUT("/product/{id}").and(accept(MediaType.APPLICATION_JSON)), productHandler::updateProduct);
    }
}


package org.smarttechie.handler;

import org.smarttechie.model.Product;
import org.smarttechie.service.ProductService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.MediaType;
import org.springframework.stereotype.Component;
import org.springframework.web.reactive.function.server.ServerRequest;
import org.springframework.web.reactive.function.server.ServerResponse;
import reactor.core.publisher.Mono;

import static org.springframework.web.reactive.function.BodyInserters.fromObject;

@Component
public class ProductHandler {

    @Autowired
    private ProductService productService;

    static Mono<ServerResponse> notFound = ServerResponse.notFound().build();

    /**
     * The handler to get all the available products.
     * @param serverRequest
     * @return - all the products info as part of ServerResponse
     */
    public Mono<ServerResponse> getAllProducts(ServerRequest serverRequest) {
        return ServerResponse.ok()
                .contentType(MediaType.APPLICATION_JSON)
                .body(productService.getAllProducts(), Product.class);
    }

    /**
     * The handler to create a product.
     * @param serverRequest
     * @return - the created product as part of ServerResponse
     */
    public Mono<ServerResponse> createProduct(ServerRequest serverRequest) {
        Mono<Product> productToSave = serverRequest.bodyToMono(Product.class);
        return productToSave.flatMap(product ->
                ServerResponse.ok()
                        .contentType(MediaType.APPLICATION_JSON)
                        .body(productService.save(product), Product.class));
    }

    /**
     * The handler to delete a product based on the product id.
     * @param serverRequest
     * @return - the deleted product as part of ServerResponse
     */
    public Mono<ServerResponse> deleteProduct(ServerRequest serverRequest) {
        String id = serverRequest.pathVariable("id");
        Mono<Void> deleteItem = productService.deleteProduct(Integer.parseInt(id));
        return ServerResponse.ok()
                .contentType(MediaType.APPLICATION_JSON)
                .body(deleteItem, Void.class);
    }

    /**
     * The handler to update a product.
     * @param serverRequest
     * @return - the updated product as part of ServerResponse
     */
    public Mono<ServerResponse> updateProduct(ServerRequest serverRequest) {
        return productService.update(serverRequest.bodyToMono(Product.class))
                .flatMap(product ->
                        ServerResponse.ok()
                                .contentType(MediaType.APPLICATION_JSON)
                                .body(fromObject(product)))
                .switchIfEmpty(notFound);
    }
}

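Both the annotated controller and the functional handler delegate to a ProductService backed by reactive Spring Data for Cassandra. The service itself is not shown above, so here is a minimal sketch of what it might look like; the ProductRepository (assumed to be a reactive Spring Data repository) is an assumption based on how the service is used.

package org.smarttechie.service;

import org.smarttechie.model.Product;
import org.smarttechie.repository.ProductRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@Service
public class ProductService {

    @Autowired
    private ProductRepository productRepository; // assumed to extend a reactive Spring Data repository

    public Mono<Product> save(Product product) {
        return productRepository.save(product);
    }

    public Flux<Product> getAllProducts() {
        return productRepository.findAll();
    }

    public Mono<Void> deleteProduct(int id) {
        return productRepository.deleteById(id);
    }

    // update(...) is omitted here; refer to the GitHub source for the complete service.
}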

So far, we have seen how to expose reactive REST APIs. With this implementation, I have done a simple benchmark of the reactive APIs versus non-reactive APIs (built with Spring RestController) using Gatling. Below are the comparison metrics between the reactive and non-reactive APIs. This is not an extensive benchmark, so before adopting, please make sure to do extensive benchmarking for your own use case.

Load Test Results Comparison (Non-Reactive vs Reactive APIs)

The Gatling load test scripts are also available on GitHub for your reference. With this, I conclude the “Build Reactive REST APIs with Spring WebFlux” series. We will meet on another topic. Till then, Happy Learning!!

Posted in Java, Microservices, Reactive Programming, Spring, Spring Boot

Build Reactive REST APIs with Spring WebFlux – Part2


In continuation of the last post, in this article we will look at the Reactive Streams specification and one of its implementations, Project Reactor. The Reactive Streams specification defines the following interfaces; let us see the details of each.

  • Publisher → A Publisher is a provider of a potentially unlimited number of sequenced elements, publishing them as requested by its Subscriber(s)

public interface Publisher<T> {
    public void subscribe(Subscriber<? super T> s);
}

  • Subscriber → A Subscriber is a consumer of a potentially unbounded number of sequenced elements.

public interface Subscriber<T> {
    public void onSubscribe(Subscription s);
    public void onNext(T t);
    public void onError(Throwable t);
    public void onComplete();
}

  • Subscription → A Subscription represents a one-to-one lifecycle of a Subscriber subscribing to a Publisher.

public interface Subscription {
    public void request(long n);
    public void cancel();
}

  • Processor → A Processor represents a processing stage, which is both a Subscriber and a Publisher and obeys the contracts of both.
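
For completeness, the Processor interface from the specification simply combines the two:

public interface Processor<T, R> extends Subscriber<T>, Publisher<R> {
}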

The class diagram of the reactive streams specification is given below.

Class diagram of Reactive Streams

The Reactive Streams specification has many implementations; Project Reactor is one of them. Reactor is fully non-blocking and provides efficient demand management. It offers two reactive and composable APIs, Flux [N] and Mono [0|1], which extensively implement Reactive Extensions. Reactor also offers non-blocking, backpressure-ready network engines for HTTP (including WebSockets), TCP, and UDP, and it is well suited for a microservices architecture.

  • Flux → A Reactive Streams Publisher with rx operators that emits 0 to N elements and then completes (successfully or with an error).
  • Mono → A Reactive Streams Publisher with basic rx operators that completes successfully by emitting 0 to 1 element, or completes with an error. A short example of both is shown below.
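
Here is a tiny, self-contained sketch of creating and subscribing to a Flux and a Mono with Project Reactor; the values are arbitrary examples.

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class FluxMonoDemo {
    public static void main(String[] args) {
        // Flux: emits 0..N elements, then completes
        Flux.just("apple", "banana", "cherry")
            .map(String::toUpperCase)
            .subscribe(
                System.out::println,                    // onNext
                Throwable::printStackTrace,             // onError
                () -> System.out.println("completed")); // onComplete

        // Mono: emits at most one element
        Mono.just(42)
            .map(n -> n * 2)
            .subscribe(System.out::println);
    }
}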

Spring 5.x ships with the Reactor implementation, but if we want to build REST APIs using imperative-style programming, the Spring servlet stack is still supported. Below is the diagram that explains how Spring supports both the reactive and the servlet stack implementations.

Spring MVC vs Spring Reactive
Image Credit: spring.io

In the coming article, we will see an example application with reactive APIs. Until then, Happy Learning!!

Posted in Java, Microservices, Reactive Programming, Spring, Spring Boot

Build Reactive REST APIs with Spring WebFlux – Part1


In this article, we will see how to build reactive REST APIs with Spring WebFlux. Before jumping into the reactive APIs, let us see how systems have evolved, what problems we see with traditional REST implementations, and what is demanded of modern APIs.

If you look at how expectations have shifted from legacy systems to modern systems, applications are now expected to be distributed, cloud native, highly available, and scalable, so efficient usage of system resources is essential. Before jumping into why reactive programming is a good fit for building REST APIs, let us see how traditional REST API request processing works.

Traditional REST API Request/Response Model

Below are the issues we have with traditional REST APIs:

  • Blocking and synchronous → The request is blocking and synchronous. The request thread waits on any blocking I/O and is not freed to return the response to the caller until the I/O wait is over.
  • Thread per request → The web container uses a thread-per-request model, which limits the number of concurrent requests it can handle. Beyond a certain number of requests, the container queues them, which eventually impacts the performance of the APIs.
  • Limited ability to handle highly concurrent users → As the web container uses a thread-per-request model, we cannot handle a high number of concurrent requests.
  • Poor utilization of system resources → Threads block on I/O and sit idle, yet the web container cannot accept more requests. In this scenario, system resources are not utilized efficiently.
  • No backpressure support → We cannot apply backpressure from the client or the server. If there is a sudden surge of requests, server or client outages may happen, and the application becomes inaccessible to users. With backpressure support, the application should sustain the heavy load rather than become unavailable.

Let us see how we can solve the above issues using reactive programming. Below are the advantages we will get with reactive APIs.

  • Asynchronous and non-blocking → Reactive programming gives the flexibility to write asynchronous and non-blocking applications.
  • Event/message driven → The system generates events or messages for any activity. For example, the data coming from the database is treated as a stream of events.
  • Support for backpressure → We can gracefully handle the pressure from one system on another by applying backpressure, avoiding denial of service.
  • Predictable application response time → As the threads are asynchronous and non-blocking, the application response time is predictable under load.
  • Better utilization of system resources → As the threads are asynchronous and non-blocking, they are not hogged by I/O, so with fewer threads we can support more user requests.
  • Scale based on the load.
  • Move away from thread per request → With reactive APIs, we move away from the thread-per-request model, as the threads are asynchronous and non-blocking. Once the request is made, it creates an event with the server, and the request thread is released to handle other requests.

Now let us see how reactive programming works. In the example below, once the application makes a call to get data from a data source, the thread is returned immediately and the data from the data source arrives as a data/event stream. Here the application is a subscriber and the data source is a publisher. Upon completion of the data stream, the onComplete event is triggered.
Data flow as an Event/Message Driven stream

Below is another scenario, where the publisher triggers an onError event if any exception happens.

In some cases, there might not be any items to deliver from the publisher, for example when deleting an item from the database. In that case, the publisher triggers the onComplete/onError event immediately, without calling the onNext event, as there is no data to return. This signal flow is sketched in the short example below.
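
As an illustration only (using Project Reactor, which the next parts cover in detail), the sketch below shows a subscriber receiving the onNext, onError, and onComplete signals described above; the values are arbitrary.

import reactor.core.publisher.Flux;

public class SignalDemo {
    public static void main(String[] args) {
        // Normal stream: onNext for each element, then onComplete
        Flux.just("one", "two", "three")
            .subscribe(
                value -> System.out.println("onNext: " + value),
                error -> System.err.println("onError: " + error),
                () -> System.out.println("onComplete"));

        // Failing stream: onNext until the error, then onError (no onComplete)
        Flux.range(1, 5)
            .map(n -> 10 / (3 - n)) // throws ArithmeticException when n == 3
            .subscribe(
                value -> System.out.println("onNext: " + value),
                error -> System.err.println("onError: " + error),
                () -> System.out.println("onComplete"));
    }
}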

Now, let us see what backpressure is and how we can apply it to reactive streams. For example, say we have a client application that requests data from another service. The service is able to publish events at a rate of 1000 TPS, but the client application can only process them at a rate of 200 TPS. In this case, the client has to buffer the rest of the data for processing. Over subsequent calls, the client buffers more and more data and eventually runs out of memory, which causes a cascading effect on the other applications that depend on the client application. To avoid this, the client application can ask the service to buffer the events at its end and push them at the rate the client can handle. This is called backpressure. The diagram below depicts the same, and a small code sketch follows it.

Back Pressure on Reactive Streams
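
Below is a minimal sketch of how a subscriber can apply backpressure with Project Reactor by requesting data at its own pace; the element counts and request sizes are assumptions for illustration.

import org.reactivestreams.Subscription;
import reactor.core.publisher.BaseSubscriber;
import reactor.core.publisher.Flux;

public class BackpressureDemo {
    public static void main(String[] args) {
        Flux.range(1, 1000)                            // the "service" publishing 1000 events
            .subscribe(new BaseSubscriber<Integer>() {
                @Override
                protected void hookOnSubscribe(Subscription subscription) {
                    request(200);                      // ask for only 200 events up front
                }

                @Override
                protected void hookOnNext(Integer value) {
                    // process the value at the client's own pace ...
                    request(1);                        // pull the next event only when ready
                }
            });
    }
}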

In the coming articles, we will see the Reactive Streams specification and one of its implementations, Project Reactor, with some example applications. Till then, Happy Learning!!

Posted in Java, Reactive Programming

Jib – Containerize Your Java Application

Building containerized applications requires a lot of configuration. If you are building a Java application and planning to use Docker, you might want to consider Jib. Jib is an open-source plugin for Maven and Gradle that uses the build information to build a Docker image without requiring a Dockerfile or a Docker daemon. In this article, we will build a simple Spring Boot application with the Jib Maven configuration to see Jib in action. The pom.xml configuration with Jib is given below.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.2.5.RELEASE</version>
<relativePath />
<!-- lookup parent from repository -->
</parent>
<groupId>org.smarttechie</groupId>
<artifactId>jib-demo</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>jib-demo</name>
<description>Demo project for Spring Boot</description>
<properties>
<java.version>1.8</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>org.junit.vintage</groupId>
<artifactId>junit-vintage-engine</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
<!-- The below configuration is for Jib -->
<build>
<plugins>
<plugin>
<groupId>com.google.cloud.tools</groupId>
<artifactId>jib-maven-plugin</artifactId>
<version>2.1.0</version>
<configuration>
<to>
<!-- I configured Docker Image to be pushed to DockerHub -->
<image>2013techsmarts/jib-demo</image>
</to>
<auth>
<!-- Used simple Auth mechanism to authorize DockerHub Push -->
<username>xxxxxxxxx</username>
<password>xxxxxxxxx</password>
</auth>
</configuration>
</plugin>
</plugins>
</build>
</project>


After the above change, use the Maven command below to build the image and push it to DockerHub. If you face any authentication issues with DockerHub, refer to https://github.com/GoogleContainerTools/jib/tree/master/jib-maven-plugin#authentication-methods

mvn compile jib:build


Now, pull the image we created with the above command and run it.

docker image pull 2013techsmarts/jib-demo
docker run -p 8080:8080 2013techsmarts/jib-demo


That’s it. No additional skills are required to create a Docker image.

Posted in Containers, Java

Are you ready to adopt Serverless Computing?

FaaS (Function as a Service), one of the newer types of services offered to the industry, came along with the advancement of cloud architectures. This is known as serverless computing or serverless architecture. In simple terms, serverless architecture abstracts away all layers except the application’s development, so developers can concentrate only on developing the business requirements. Serverless offers event-driven services to trigger functions that are created in development environments and hosted by the cloud provider, usually holding a piece of business logic.

The serverless architecture allows for a particular kind of pricing. Cloud providers normally charge only for the execution of the code, providing a more cost-effective platform than counterparts like Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). Low operating costs, a short time to market, and increased process agility are the main reasons serverless computing has become a popular architectural paradigm. From a developer’s perspective, faster development environment setup, easier operational management, and zero system administration are the boosting benefits. Serverless has cost-effective pricing based on resource consumption, usually divided into:

  1. allocated resources such as memory, CPU, or network
  2. the time the application’s functions run (often billed in millisecond intervals) and the number of executions.

The benefits of serverless architectures and their capabilities are explained below.

Scalability: A provider can guarantee that the deployed functions are available and that their services are resilient, so users do not have to worry about scalability.

High Availability: With serverless computing, the “servers” themselves are deployed automatically across several availability zones, so they are always available for requests coming from across the globe.

Rapid Development & Deployment: Adopting serverless computing solutions such as AWS Lambda or Azure Functions can greatly reduce the cost of development. You can get rid of tasks such as buying infrastructure, setting it up, capacity planning, deployment environment setup, and maintaining the infrastructure. Developers only need to focus on developing and delivering the business requirements.

Agile Friendly: FaaS systems allow developers to focus on the code and make their product feature-rich through agile cycles of build, testing and releasing.

Finally, although serverless has many benefits, it also has some disadvantages as discussed below. 

Cold Start: In the world of serverless computing, functions are run when needed and thrown away when not needed. When a function is needed, its code is downloaded, a new container is started, and the code is bootstrapped to start responding to events. This phenomenon is called a “cold start”. The length of the cold start introduces latency into your application.

Vendor lock-in: The cloud provider manages everything, so the developer or the client does not have complete control of resource use and management. Lock-in can apply at the cloud provider’s API level: the way you call AWS Lambda or Google Cloud Functions from your serverless code may vary, and coupling your code too tightly to the APIs of a serverless vendor may make moving the code to another platform more challenging. There is another kind of vendor lock-in where we are bound to use only the services available from that cloud vendor. For example, AWS Lambda uses Kinesis as a source trigger for streaming data; an organization that wants to use another streaming technology, like Apache Kafka, can’t use it as a trigger on AWS Lambda.

Challenges with unit testing and integration testing: Serverless applications are hard to test due to their distributed nature. In a traditional development environment, developers tend to mock the services to perform unit testing and have control of the services to perform integration testing. The same is not that easy with serverless applications, so it is important to invest time and effort during serverless application design.

According to technology research firm Gartner, by 2022, serverless technology will be used by 10 percent of IT organizations. Based on Grand View Research’s perspective, the compound annual serverless growth rate is expected to be 26 percent ($19.84 billion) by 2025 in the long run.

Recently, the Cloud Native Computing Foundation initiated CloudEvents, which is organized via the CNCF’s Serverless Working Group. CloudEvents can majorly address the vendor lock-in issue. CloudEvents, a specification (spec) for describing event data in a common way, eases event declaration and delivery across services, platforms, and beyond. In reaching v1.0 and moving to Incubation, the spec defines the common attributes of an event that facilitate interoperability, as well as how those attributes are transported from producer to consumer via some popular protocols. It also creates a stable foundation on top of which the community can build better tools for developing, running, and operating serverless and event-driven architectures. An increasing number of industry stakeholders have been actively contributing to the project, including AWS, Google, Microsoft, IBM, SAP, Red Hat, VMware, and more, cementing CNCF as the home of serverless collaboration for the leading technology companies around the world.

In the coming article, we will see some Serverless examples. Till then, Happy Learning!!!

Posted in Cloud Computing

How the Financial Industry is getting disrupted with AI?

Posted in Papers

Are you ready to see GraphQL in action?

In the last article, we discussed the advantages of GraphQL over REST. In this article, we will see GraphQL in action. I have created a sample application to showcase the differences between REST and GraphQL. First, we will see the REST implementation of a simple product detail endpoint. I used Spring Boot to demonstrate REST. Download the sample project and follow the steps outlined in the README to set up the project; I am not discussing the setup details here as they are out of scope for this article. Once your project is up and running, make a call to the http://localhost:8080/product/{product_id} endpoint to get the product detail JSON.


If you observe the returned JSON, we get the entire product payload, including reviews and technical specifications, even though we are not interested in all the elements of a given product.

Now we will see GraphQL in action by fetching product details selectively. To demonstrate GraphQL, I again used Spring Boot. Download the sample project and follow the steps outlined in the README to set up the project; I am not discussing the setup details here as they are out of scope for this article. Once your project is up and running, we can see GraphQL in action. In this case, I am interested in only the product id, title, short description, and list price of a given product. A query along those lines is shown below.
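
A query of roughly this shape asks only for the fields we care about; the exact field names depend on the schema in the sample project, so treat these as assumptions.

{
  product(id: "1") {
    id
    title
    shortDescription
    listPrice
  }
}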


Now, as a service consumer, I am interested in the product id, title, short description, list price, and reviews. In this case, GraphQL gives us the flexibility to query exactly what we want, simply by adding the extra fields to the query.
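
For example, the same query can be extended with a reviews selection (again, the field names are assumptions based on the description above):

{
  product(id: "1") {
    id
    title
    shortDescription
    listPrice
    reviews {
      rating
      comment
    }
  }
}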


To demonstrate GraphQL, I have used the GUI-based tool GraphiQL. For consuming the API from other applications, we can configure the endpoint in application.properties.

graphql.servlet.mapping=/graphql
graphql.servlet.enabled=true
graphql.servlet.corsEnabled=true

Now we can make a call to the above endpoint by passing the URL-encoded query as a request parameter. You can learn more about queries and mutations at https://graphql.org/learn/queries/


Hope you enjoyed this article. I will come back with another article. Till then, Happy Learning!!!

Posted in GraphQL, Spring Boot