APIGEE – An Introduction to API Gateway


In this article I want to give a brief introduction to APIGEE. As an API gateway, APIGEE offers the following functionality out of the box.

  1. Protocol Transformation
    Transform from or to any protocol including SOAP, REST, XML binary, or custom
  2. Traffic Management
    Flexible, distributed quota management, rate limiting, and spike arrest policies out-of-the-box
  3. API Security
    Built-in support for address filtering, JSON and XML schema validation, and bot detection
  4. Data Access & Security
    Two-way SSL/TLS, API key validation, OAuth1, OAuth2, SAML, CORS, encrypted store, and HIPAA and PCI compliance
  5. API Products
    Create different tiers by packaging APIs with varying rate limits and pricing
  6. API Analytics
    Fine-grained performance monitoring, including anomaly tracing, drill-down, and usage metrics related to apps, developers, and APIs
  7. API Monetization
    Flexible rate plans, international billing, and usage tracking
  8. Global Policy Management
    Enforce consistent security and governance policies across all APIs
  9. Developer Portal
    A customizable portal for API providers to manage developers, APIs, and API documentation and versioning

APIGEE sits between service consumers and backend services. As an API gateway, APIGEE takes care of the common functionality required by APIs, so backend services can concentrate only on the core business logic. The below diagram depicts where exactly APIGEE fits.

APIGEE as an API gateway between service consumers and backend services

Now we will see how the request and response flow through APIGEE. Every client request passes through proxy and target endpoints, to which we attach APIGEE policies. A policy is simply a module that provides a piece of common API functionality and is configured using XML. In each flow, APIGEE exposes a lot of flow variables.

Request and response flow through APIGEE proxy and target endpoints

Below is the list of policies available out of the box, which you can attach to the proxy and target flows.

Out-of-the-box APIGEE policies
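For example, a traffic management policy such as SpikeArrest is attached to a flow as a small XML snippet. A minimal sketch is shown below; the policy name and rate are placeholder values, not taken from any particular proxy.

<SpikeArrest async="false" continueOnError="false" enabled="true" name="Spike-Arrest-Demo">
    <DisplayName>Spike Arrest Demo</DisplayName>
    <!-- Allow at most 30 requests per second before APIGEE starts rejecting traffic -->
    <Rate>30ps</Rate>
</SpikeArrest>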

If you want to experience APIGEE Edge, follow the below steps to create a trial account.

  1. Go to https://login.apigee.com/sign_up and enter the required information to create an account with APIGEE.
  2. After activating your APIGEE account, log in at https://login.apigee.com/login to access the Apigee Edge management UI.

Below are some reference materials for a deeper dive into APIGEE.

 


APIGEE – Can we configure parameters for Message Logging Policy?


The Apigee MessageLogging policy has a limitation: it does not support configurable parameters for details such as the syslog server host, port, and so on. Because these values cannot be parameterized, we can get into trouble when moving the proxy from one environment to another. To achieve portability, the approach is to have a MessageLogging policy for each environment and to apply the right policy based on the environment in which the proxy is running. Below is a sample proxy with a message logging policy for each environment; the proxy definition is given below.
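As a sketch of the idea (not the exact bundle from the demo), the proxy endpoint below attaches a different MessageLogging policy in the response PostFlow depending on the environment.name flow variable. The policy names, base path, and target endpoint are placeholders.

<ProxyEndpoint name="default">
    <PreFlow name="PreFlow">
        <Request/>
        <Response/>
    </PreFlow>
    <PostFlow name="PostFlow">
        <Request/>
        <Response>
            <!-- Attach the environment-specific logging policy based on where the proxy runs -->
            <Step>
                <Name>MessageLogging-Test</Name>
                <Condition>environment.name = "test"</Condition>
            </Step>
            <Step>
                <Name>MessageLogging-Prod</Name>
                <Condition>environment.name = "prod"</Condition>
            </Step>
        </Response>
    </PostFlow>
    <HTTPProxyConnection>
        <BasePath>/messagelogging-demo</BasePath>
        <VirtualHost>default</VirtualHost>
    </HTTPProxyConnection>
    <RouteRule name="default">
        <TargetEndpoint>default</TargetEndpoint>
    </RouteRule>
</ProxyEndpoint>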

The test and prod environment MessageLogging policy configurations are given below.
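A minimal sketch of what the two environment-specific policies can look like, assuming syslog over TCP; the host names and message format are placeholders.

<MessageLogging name="MessageLogging-Test">
    <Syslog>
        <Message>[test] {request.verb} {request.uri} -> {response.status.code}</Message>
        <Host>test-syslog.example.com</Host>
        <Port>514</Port>
        <Protocol>TCP</Protocol>
    </Syslog>
</MessageLogging>

<MessageLogging name="MessageLogging-Prod">
    <Syslog>
        <Message>[prod] {request.verb} {request.uri} -> {response.status.code}</Message>
        <Host>prod-syslog.example.com</Host>
        <Port>514</Port>
        <Protocol>TCP</Protocol>
    </Syslog>
</MessageLogging>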

The proxy demonstrated here is available on GitHub to download and play with.

Apigee Message Logging Policy Demo


APIGEE – How To Handle Base64 Encoding and Decoding?


In this article we will see how to encode and decode base64 strings while building APIGEE proxies. APIGEE ships with a BasicAuthentication policy that deals with the base64-encoded Authorization header. But if we want to deal with any base64-encoded string other than the Authorization header, we should go with a custom implementation using a JavaScript policy, JavaCallout policy, or PythonScript policy. In this article I will show you how to achieve base64 encoding and decoding using the JavaScript policy.

Let me create a simple proxy with a JavaScript policy to decode a base64-encoded string. Below is the JavaScript policy configuration.
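A sketch of such a policy configuration, assuming the decode logic lives in a resource called DecodeBase64.js and the shared helper in Base64EncodeDecode.js; the policy and file names are illustrative.

<Javascript async="false" continueOnError="false" enabled="true" timeLimit="200" name="JS-DecodeBase64">
    <DisplayName>JS-DecodeBase64</DisplayName>
    <!-- Shared helper with the base64 encode/decode functions -->
    <IncludeURL>jsc://Base64EncodeDecode.js</IncludeURL>
    <!-- Script executed by the policy -->
    <ResourceURL>jsc://DecodeBase64.js</ResourceURL>
</Javascript>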

In the JavaScript policy I have included the Base64EncodeDecode js file, which performs the encode and decode. Below is the JavaScript to decode a base64-encoded string.
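A minimal sketch of the decode script, assuming the included helper exposes a Base64 object with encode/decode functions and that the encoded value arrives as a query parameter (both are assumptions, not the exact code from the demo):

// DecodeBase64.js - read the encoded value, decode it, and expose it as a flow variable
var encoded = context.getVariable("request.queryparam.encoded");
if (encoded) {
    var decoded = Base64.decode(encoded);   // helper defined in Base64EncodeDecode.js (assumed API)
    context.setVariable("decoded.value", decoded);
}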

The JavaScript which does the base64 encode and decode is available here.

The sample proxy created to demonstrate base64 encoding and decoding is available on GitHub. Download the sample proxy bundle and import it into APIGEE Edge to play with it.


In the next article we will discuss another use case. Till then, Happy Coding!!!

 


Java 11 Features – Java Flight Recorder


In this article we will see how we can leverage the Java Flight Recorder (JFR) feature as part of Java 11. Earlier it was a commercial feature, but with JEP 328 in Java 11 it has been open-sourced from the Oracle JDK to OpenJDK. Java Flight Recorder records OS and JVM events to a file which can be inspected using Java Mission Control (JMC). Enabling JFR puts minimal overhead on JVM performance, so it can be enabled for production deployments too. Now we will look at some of the JVM arguments to enable JFR; example arguments for each mode are shown after the list below.

  • Time Based

  • Continuous with dump on demand

  • Continuous with dump on exit
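Illustrative command lines for each mode. The file names, durations, and app.jar are placeholders, and <pid> stands for the target JVM's process id.

# Time based: record for 60 seconds after a 10 second delay, then dump to a file
java -XX:StartFlightRecording=delay=10s,duration=60s,filename=recording.jfr -jar app.jar

# Continuous with dump on demand: start a named continuous recording, dump it later via jcmd
java -XX:StartFlightRecording=name=continuous -jar app.jar
jcmd <pid> JFR.dump name=continuous filename=dump.jfr

# Continuous with dump on exit: automatically dump the recording when the JVM exits
java -XX:StartFlightRecording=dumponexit=true,filename=recording.jfr -jar app.jar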

As JFR is now built into Java 11, this excites the developer community; we can reduce the dependency on third-party profilers as well.

As part of Java 11 we get the jdk.jfr module. This API allows programmers to produce custom JFR events and to consume JFR events stored in a file to troubleshoot issues.
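A minimal sketch of a custom event using the jdk.jfr API; the event name and field are made up for illustration, and the event is written only when a recording is active.

import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

public class CustomJfrEventDemo {

    // Custom event type (name and field are illustrative)
    @Name("org.smarttechie.OrderProcessed")
    @Label("Order Processed")
    static class OrderProcessedEvent extends Event {
        @Label("Order Id")
        String orderId;
    }

    public static void main(String[] args) {
        OrderProcessedEvent event = new OrderProcessedEvent();
        event.orderId = "ORD-1001";
        event.begin();        // start timing the unit of work
        // ... the business logic being measured would run here ...
        event.commit();       // recorded only if a JFR recording is active
    }
}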

You can download the Java 11 early-access build from http://jdk.java.net/11/ to explore these features.


Java 10 – Local Variable Type Inference

In this article we will see the Java 10 feature called Local Variable Type Inference, proposed as part of JEP 286. From its first version, Java has been a strongly typed language where we need to mention the data type of each variable. We were all feeling that Java is a verbose language and expecting a more precise, compact way of writing Java code. Java 8 addressed this concern somewhat. Java 10 adds Local Variable Type Inference with an initializer to eliminate the verbosity. For example,

jshell> Map<String,String> map = new HashMap<>();
jshell> var map = new HashMap<>(); //This is valid with Java10

Here the LHS variable's datatype is determined by the RHS statement. For example,

jshell> var i = 3;
i ==> 3    // based on the RHS, the LHS datatype is int

jshell> int i=3,j=4; // valid declaration
but,
jshell> var j=4,k=5; // not a valid declaration
|  Error:
|  'var' is not allowed in a compound declaration
|  var j=4,k=5;
|  ^

You can use this feature with the enhanced for loop as well as the regular for loop.

jshell> List names = Arrays.asList("ABC","123","XYZ");
names ==> [ABC, 123, XYZ]
jshell> for(var name : names){
...> System.out.println("Name = "+ name);
...> }

Name = ABC
Name = 123
Name = XYZ

We can use Local Variable Type Inference in the for loop as well.


jshell> int[] arr = {1,2,3,4};
arr ==> int[4] { 1, 2, 3, 4 }

jshell> for (var i=0;i<arr.length;i++){
   ...> System.out.println("Value = "+i);
   ...> }
Value = 0
Value = 1
Value = 2
Value = 3

There are certain scenarios where this feature cannot be used. For example,

  • Not valid for constructor parameters
  • Not valid for instance variables
  • Not valid for method parameters
  • Not valid when the initializer is null
  • Not valid as a method return type

Let us see examples for the above statements.


jshell> public class Sample {
   ...>    private var name = "xyz";
   ...>    public Sample(var name) {
   ...>     this.name=name;
   ...>    }
   ...>    public void printName(var name){
   ...>      System.out.println(name);
   ...>    }
   ...>    public var add(int a, int b) {
   ...>     return a+b;
   ...>    }
   ...> }
|  Error:
|  'var' is not allowed here
|     private var name = "xyz"; //Instance variable
|             ^-^
|  Error:
|  'var' is not allowed here
|     public Sample(var name) { //Constructor variable
|                   ^-^
|  Error:
|  'var' is not allowed here
|     public void printName(var name){ //Method parameter
|                           ^-^
|  Error:
|  'var' is not allowed here
|     public var add(int a, int b) { //Method return type
|            ^-^


jshell> public class Sample {
   ...>    
   ...>    public static void main(String[] args) {
   ...>     var s = null;
   ...>    }
   ...> }
|  Error:
|  cannot infer type for local variable s
|    (variable initializer is 'null')
|      var s = null;
|      ^-----------^

When we migrate code from lower versions to Java 10, we need not worry about Local Variable Type Inference, as the feature is backward compatible: var is a reserved type name rather than a keyword, so existing code that uses var as an identifier still compiles.

In the coming post we will learn another topic. Till then stay tuned!


Introduction to Apache Kafka


What is Apache Kafka?

Apache Kafka is a distributed streaming platform for publishing and subscribing to streams of records. In another aspect, it is an enterprise messaging system. It is fast, horizontally scalable, and fault tolerant. Kafka has four core APIs:

Producer API:

This API allows clients to connect to the Kafka servers running in a cluster and publish a stream of records to one or more Kafka topics.

Consumer API:

This API allows clients to connect to the Kafka servers running in a cluster and consume streams of records from one or more Kafka topics. Kafka consumers PULL the messages from Kafka topics.

Streams API:

This API allows clients to act as stream processors, consuming streams from one or more topics and producing streams to other output topics. It allows transforming input streams into output streams.

Connector API:

This API allows writing reusable producer and consumer code. For example, if we want to read data from any RDBMS and publish it to a topic, or consume data from a topic and write it to an RDBMS, the Connector API lets us create reusable source and sink connector components for various data sources.
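As a quick preview of the Producer API (the producer and consumer Java APIs are covered in detail in a later article), here is a minimal sketch that publishes a single record; the broker address and topic name are placeholders.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Any one broker is enough to bootstrap the client into the cluster
        props.put("bootstrap.servers", "localhost:9091");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish one record to a topic (topic name is a placeholder)
            producer.send(new ProducerRecord<>("demo-topic", "key-1", "Hello Kafka"));
        }
    }
}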

What use cases is Kafka used for?

Kafka is used for the below use cases,

Messaging System:

Kafka is used as an enterprise messaging system to decouple source and target systems while exchanging data. Compared to JMS, Kafka provides higher throughput through partitions and fault tolerance through replication.


Apache Kafka Messaging System

Web Activity Tracking:

To track the user journey events on the website for analytics and offline data processing.

Log Aggregation:

To process logs from various systems, especially in distributed environments with microservices architectures where the systems are deployed across various hosts. We need to aggregate the logs from these systems and make them available in a central place for analysis. Go through this article on a distributed logging architecture where Kafka is used: https://smarttechie.org/2017/07/31/distributed-logging-architecture-for-micro-services/

Metrics Collector:

Kafka is used to collect metrics from various systems and networks for operations monitoring. There are Kafka metrics reporters available for monitoring tools like Ganglia, Graphite, etc.

Some references on this: https://github.com/stealthly/metrics-kafka

What is a broker?

An instance in a Kafka cluster is called a broker. If you connect to any one broker in a Kafka cluster, you can access the entire cluster. The broker instance we connect to in order to access the cluster is also known as the bootstrap server. Each broker is identified by a numeric id within the cluster. Three brokers is a good number to start a Kafka cluster with, but there are clusters with hundreds of brokers.

What is a Topic?

A topic is a logical name to which records are published. Internally, the topic is divided into partitions to which the data is written. These partitions are distributed across the brokers in the cluster. For example, if a topic has three partitions and the cluster has 3 brokers, each broker hosts one partition. Data published to a partition is append-only, with an incrementing offset.

Topic Partitions

Below are a couple of points we need to remember while working with partitions.

  • Topics are identified by name. We can have many topics in a cluster.
  • The order of messages is maintained at the partition level, not across the topic.
  • Once data is written to a partition, it is never overwritten. This is called immutability.
  • Messages in partitions are stored with a key, value, and timestamp. Kafka ensures that messages with the same key are published to the same partition.
  • Within the Kafka cluster, each partition has a leader which serves the read/write operations for that partition.

Apache Kafka Partitions

In the above example, I have created a topic with three partitions and a replication factor of 3. In this case, as the cluster has 3 brokers, the three partitions are evenly distributed and the replicas of each partition are copied to the other 2 brokers. As the replication factor is 3, there is no data loss even if 2 brokers go down. Always keep the replication factor greater than 1 and less than or equal to the number of brokers in the cluster. You cannot create a topic with a replication factor greater than the number of brokers in the cluster.

In the above diagram, each partition has a leader (the highlighted partition) and the other in-sync replicas (the grayed-out partitions) are followers. For partition 0, broker-1 is the leader and broker-2 and broker-3 are followers. All reads/writes to partition 0 go to broker-1 and are then copied to broker-2 and broker-3.

Now let us create a Kafka cluster with 3 brokers by following the below steps.

Step 1:

Download the latest version of Apache Kafka; in this example I am using 1.0, which is the latest. Extract the archive and move into the extracted Kafka folder. Start Zookeeper, which is essential for a Kafka cluster: Zookeeper is the coordination service that manages the brokers, handles leader election for partitions, and alerts Kafka about changes to topics (create topic, delete topic, etc.) or brokers (broker added, broker down, etc.). In this example I have started only one Zookeeper instance, but in production environments we should run more Zookeeper instances to handle fail-over. Without Zookeeper, a Kafka cluster cannot work.
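A sketch of the command, run from the Kafka root folder using the default Zookeeper configuration shipped with the distribution:

bin/zookeeper-server-start.sh config/zookeeper.properties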

Step 2:

Now start the Kafka brokers. In this example we are going to start three brokers. Go to the config folder under the Kafka root, copy the server.properties file 3 times, and name the copies server_1.properties, server_2.properties, and server_3.properties. Change the below properties in those files.
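The properties to change are the broker id, the listener port, and the log directory; the values below are examples, not mandated ones.

# server_1.properties
broker.id=1
listeners=PLAINTEXT://localhost:9091
log.dirs=/tmp/kafka-logs-1

# server_2.properties
broker.id=2
listeners=PLAINTEXT://localhost:9092
log.dirs=/tmp/kafka-logs-2

# server_3.properties
broker.id=3
listeners=PLAINTEXT://localhost:9093
log.dirs=/tmp/kafka-logs-3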

Now run the 3 brokers with the below commands.
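Run each broker in its own terminal (paths assume you are in the Kafka root folder):

bin/kafka-server-start.sh config/server_1.properties
bin/kafka-server-start.sh config/server_2.properties
bin/kafka-server-start.sh config/server_3.properties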

Step 3:

Create a topic with the below command.
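A sketch matching the earlier example, three partitions with replication factor 3; the topic name is a placeholder, and in Kafka 1.0 topic creation goes through Zookeeper.

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 3 --topic demo-topic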

Step 4:

Produce some messages to the topic created in the above step using the Kafka console producer. For the console producer, mention any one of the broker addresses; that will be the bootstrap server to gain access to the entire cluster.
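For example, assuming the broker address and topic name used earlier:

bin/kafka-console-producer.sh --broker-list localhost:9091 --topic demo-topic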

Step 5:

Consume the messages using the Kafka console consumer. For the Kafka consumer, mention any one of the broker addresses as the bootstrap server. Remember, while reading the messages you may not see them in order, as the order is maintained at the partition level, not at the topic level.
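For example, again assuming the broker address and topic name used earlier:

bin/kafka-console-consumer.sh --bootstrap-server localhost:9091 --topic demo-topic --from-beginning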

If you want, you can describe the topic to see how the partitions are distributed and who the leader of each partition is, using the below command.
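A sketch, with the same topic name as before:

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic demo-topic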

In the above description, broker-1 is the leader for partition 0, and broker-1, broker-2, and broker-3 have replicas of each partition.

In the next article we will see the producer and consumer Java APIs. Till then, Happy Messaging!!!


Enterprise Application Monitoring in production with OverOps

In this article we will discuss OverOps, which monitors an application and provides insights about exceptions, including the code and the variable state that caused them. With most traditional logging that we do with Splunk, ELK, or any other log aggregation tool, we capture the exception stack trace to troubleshoot the issue. But with the stack trace alone, finding the root cause and fixing it is hard and time consuming. If you attach the OverOps agent to the application, then along with the exception it provides the source code where exactly the exception happened, plus the variable state and JVM state at that moment. OverOps supports the below platforms.

  • Java
  • Scala
  • Clojure
  • .Net

It also provides integrations with existing logging and performance monitoring tools like Splunk, ELK, New Relic, AppDynamics, etc.

In this article I will show you how to configure OverOps for a standalone Java application. You can get a trial version of the OverOps agent for your operating system by registering, and you can go with either the on-premise or the SaaS based solution. To demonstrate this I have created a sample Spring Boot application, and the jar is launched with the OverOps agent to monitor the exceptions thrown based on a certain business rule. In a real enterprise application the business logic will be more involved and runtime exceptions will be raised unpredictably.
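A minimal sketch of such an application: a REST endpoint that throws an exception for certain inputs so that OverOps has something to capture. The class and endpoint names are illustrative, not the exact demo code.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class SampleOverOpsApplication {

    public static void main(String[] args) {
        SpringApplication.run(SampleOverOpsApplication.class, args);
    }

    // REST endpoint that fails for invalid input, giving OverOps an exception to analyze
    @GetMapping("/order")
    public String order(@RequestParam("quantity") int quantity) {
        if (quantity <= 0) {
            throw new IllegalArgumentException("Quantity must be positive, got " + quantity);
        }
        return "Order placed for " + quantity + " items";
    }
}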


java -agentlib:TakipiAgent -jar Sample_Over_Ops-0.0.1-SNAPSHOT.jar

When I accessed the REST endpoint of the above application, it generated exceptions, and the same were captured in the OverOps dashboard as shown below.

OverOps Dashboard

The sample application is available here.

Happy Monitoring!!
