Saturday, December 3, 2016

Static Keyword in Java


The main purpose of the static keyword in Java is memory management.
The static keyword can be used in the following five cases.

  1.  Static variables
  2.  Static methods
  3.  Static blocks
  4.  Static nested classes
  5.  Interface static methods (Java 8 onward)

  • STATIC VARIABLES


In Java, variables can be declared with the static keyword. When a variable is declared static, it is called a class variable. All instances share the same copy of the variable, and a class variable can be accessed directly through the class, without creating an instance.

Example: static int i = 0;

ADVANTAGE OF STATIC VARIABLE

It makes your program memory efficient (i.e. it saves memory).
Let us first consider a program without a static variable.
Suppose there are 1500 students in a college. Instance data members get memory each time an object is created. Every student has a unique rollno and name, so instance data members are appropriate for those. But college refers to a property common to all objects; if we don't make the college variable static, memory for it is allocated for all 1500 students, which wastes a huge amount of space and is not good programming practice.
    class Student{
         int rollno;
         String name;
         String college="XYZ";
    }
Program using a static variable:

package com.brainatjava.test;

public class Student {

    static String college = "XYZ";
    int rollno;
    String name;

    Student(int r, String n) {
        rollno = r;
        name = n;
    }

    void display() {
        System.out.println(rollno + " " + name + " " + college);
    }

    public static void main(String args[]) {
        Student s1 = new Student(10, "Rabi");
        Student s2 = new Student(20, "Rohit");
        s1.display();
        s2.display();
    }
}
OUTPUT
10 Rabi XYZ
20 Rohit XYZ


  •  STATIC METHOD


Static methods can access class (static) variables without using an object of the class. They can access non-static methods and non-static variables only through an object. Static methods can be called directly from both static and non-static methods.

EXAMPLE OF STATIC METHOD
public class Test1 {
static int i =10;

 //Static method
 static void display()
 {
    //Its a Static method
    System.out.println("i:"+Test1.i);
 }

 void foo()
 {
     //Static method called in non-static method
     display();
 }
 public static void main(String args[]) //Its a Static Method
 {
     //Static method called in another static method
     display();
     Test1 t1=new Test1();
      t1.foo();
  }
}

OUTPUT
i:10
i:10

STATIC BLOCK


A static block is used to initialize static data members. It is executed before the main method, at the time of class loading. A class can have multiple static blocks, which execute in the same sequence in which they are written in the program.

EXAMPLE  OF SINGLE STATIC BLOCK

public class ExampleOfStaticBlock {
  static int i;
  static String str;
  static {
     i = 30;
     str = "welcome to BrainAtJava";
  }
  public static void main(String args[])
  {
     System.out.println("Value of i="+i);
     System.out.println("str="+str);
  }
}
OUTPUT
Value of i=30
str=welcome to BrainAtJava

EXAMPLE OF MULTIPLE STATIC BLOCK


public class ExampleOfMultipleStaticBlock {
static int i1;
static int i2;

  static String str1;
  static String str2;

  //First Static block
  static{
     i1 = 70;
     str1 = "Hello";
 }
 //Second static block
 static{
     i2= 55;
     str2 = "java";
 }
 public static void main(String args[])
 {
     System.out.println("Value of i1=" + i1);
     System.out.println("Value of str1=" + str1);
     System.out.println("Value of i2=" + i2);
     System.out.println("Value of str2=" + str2);
 }
}
OUTPUT
Value of i1=70
Value of str1=Hello
Value of i2=55
Value of str2=java

Static Nested Class:

A static nested class in Java is simply a class scoped within another class.
We can think of it as a static member of the enclosing class.
We can access it without creating an instance of the outer class, simply as OuterClass.InnerClass, as the example below shows.

A static nested class in Java also serves a great advantage for namespace resolution. For example, if we have a class with a common name in a large project, it is quite possible that some other programmer had the same idea and wrote a class with the same name. We can resolve this name clash by making our class a public static nested class; it is then referred to as the outer class name, followed by a period (.), followed by the static nested class name.

Let's take an example

class OuterStatic
{
    private int var = 20;
    private static int staticVar = 50;

    public static void staticMethod() {
        // System.out.println(var); // Error: cannot make a static reference to the non-static field var
        System.out.println(staticVar);
    }

    static class InnerStatic
    {
        public void getFields()
        {
            // System.out.println(var); // Error: cannot make a static reference to the non-static field var
            System.out.println(staticVar);
        }
    }
}

public class StaticClassDemo
{
    public static void main(String[] args)
    {
        OuterStatic.InnerStatic is = new OuterStatic.InnerStatic();
        is.getFields();
    }
}

Interface static methods (Java 8 onward):

From Java 8 onwards, we can define static methods in an interface, but we can't override them in the implementing classes.

This helps us avoid undesired results in case of a wrong implementation of the interface.

It is good for providing utility methods, for example precondition checks.

It provides safety by not allowing implementing classes to override them.

Let's see code sample below.
public interface MyInf {

    static boolean checkIfNull(String str) {
        System.out.println("Interface Null Check");
        return str == null;
    }
}
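
An interface static method is called on the interface name itself, not on an instance, and implementing classes cannot override it. A quick usage sketch (the client class name is illustrative):

public class MyInfClient {
    public static void main(String[] args) {
        // Called via the interface name; no implementing class is involved.
        System.out.println(MyInf.checkIfNull(null));   // true
        System.out.println(MyInf.checkIfNull("java")); // false
    }
}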

Sunday, October 23, 2016

Bloom Filters By Example

In this post we will discuss the Bloom filter and its use cases. Let's first set up a scenario.
Assume there is a cycle stand in our college, and the stand has 1000 slots for parking cycles. Each slot can hold 4 cycles, so the stand has capacity for 4000 cycles. And it is very well known that Mr. Akash keeps his cycle in slot no 1 every day.

So if we want to know whether Akash is present in college today, we just check slot no 1, and if there is any cycle there, we say yes, Akash is present. But this is not a hundred percent correct: as we said above, each slot can hold four cycles, so the cycle in slot no 1 may not belong to Akash.

This case is a false positive. But if no cycle is in slot no 1, we can say that Akash is definitely absent today. So there is no chance of a false negative; we never say that Akash is absent when he is actually present in college.

A Bloom filter is a simple hash-based filter that works on the same principle. It lets us store elements and quickly identify many (not all) elements that are not present, just as we can sometimes say for sure that Akash is not in the college (when no cycle is in slot 1).


Use Case:

Suppose we are going to create anti-virus software, which will maintain a list of malicious sites and a list of known viruses. A naive approach is to maintain a data structure holding the details of all the malicious programs. The problem with this approach is that it may consume a considerable amount of memory: if you know of a million malicious programs, and each needs an average of 10 bytes to store, you need 10 megabytes of storage. That's definitely an overhead. Is there a more efficient way? Yes, there is.

Implementation Details:


We will do two things with a Bloom filter:
1. Insert an element into the filter.
2. Test whether an element is a member of the filter.

A Bloom filter is a probabilistic data structure, that is, it is not deterministic. We will come to know the reason in a while.

Let's take a bit array of size m and initialize each position to zero. The idea is that we choose k hash functions whose values fall within the range 0 to m-1, where k is a small constant, much smaller than m.

To add an element to the filter, simply pass that element to the k hash functions. We get k values between 0 and m-1, i.e. k array positions. Mark these positions as 1. Note that we are not putting the exact hash value in the array; we are simply marking those positions as 1.

To test for an element (whether it is in the set), feed it to each of the k hash functions to get k array positions. If any of the bits at these positions is 0, the element is definitely not in the set; if it were, all those bits would have been set to 1 when it was inserted. If all are 1, then either the element is in the set, or the bits happened to be set to 1 during the insertion of other elements, which results in a false positive.

Deletion of elements is not allowed, for an obvious reason. To delete an element we would need to set the k bit positions generated by the k hash functions for that element back to 0, but any of those bits may also have been set by other elements, so clearing them could introduce false negatives. As false negatives are not allowed in a Bloom filter, we can't delete elements from it.
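
To make the mechanics concrete, here is a minimal sketch in Java. The class name, the bit-mixing constant, and deriving the k positions from hashCode() are illustrative choices; a production filter would use stronger, independent hash functions.

import java.util.BitSet;

public class SimpleBloomFilter {
    private final BitSet bits;
    private final int m; // size of the bit array
    private final int k; // number of hash functions

    public SimpleBloomFilter(int m, int k) {
        this.bits = new BitSet(m);
        this.m = m;
        this.k = k;
    }

    // Derive k positions from the element; a stand-in for k independent hash functions.
    private int position(String element, int i) {
        int h = element.hashCode() + i * 0x9E3779B9; // mix with a per-function constant
        return Math.floorMod(h, m);
    }

    public void add(String element) {
        for (int i = 0; i < k; i++) {
            bits.set(position(element, i)); // mark the position, not the hash value
        }
    }

    // false => definitely not present; true => possibly present (false positives allowed)
    public boolean mightContain(String element) {
        for (int i = 0; i < k; i++) {
            if (!bits.get(position(element, i))) {
                return false;
            }
        }
        return true;
    }
}

Note that mightContain can never return a false negative: once add has set an element's bits, they stay set.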


Applications of Bloom Filter:


1. Cassandra uses Bloom filters to save IO when performing a key lookup: each SSTable has a Bloom filter associated with it, which Cassandra checks before doing any disk seeks.
2. The Google Chrome web browser used a Bloom filter to identify malicious URLs. Any URL was first checked against a local Bloom filter, and only if the Bloom filter returned a positive result was a full check of the URL performed (and the user warned, if that too returned a positive result).


Although the Bloom filter is a data structure, it is called a filter because it is often used as a first pass to filter out elements of a data set that don't match a certain criterion.

Please refer to Wikipedia for more applications and detailed explanations.

Wednesday, October 19, 2016

Least Recently Used (LRU) cache implementation in Java

I was working on a requirement to cache URLs coming continuously from a source. Our cache had a limited size; we couldn't afford to store all the URLs. So we decided to store the 5 lakh most recently used URLs in our cache. URLs that had not been in use for a while (least recently used URLs) would be evicted, and new URLs coming from the source would be added to the cache. If a new URL was already present in the cache, we would mark it as most recently used.
   
For more clarity on caches, please refer to the blog post on caches. Now let's decide which data structure to use. We can simply think of storing the URLs in a LinkedList of size 5 lakh. But there are some limitations of using a LinkedList.

Limitation 1:

What happens when we want to find the least recently used URL? This is a frequent requirement, since we want to evict the least recently used URL from the cache to make room for more recently used ones.

With a plain list, the complexity for this is of order n, i.e. O(n), which is not desired.

Limitation 2:

And what happens when a URL comes from the source and we want to check whether it already exists in the cache? We have to traverse the whole list.

Again the complexity is of order n, i.e. O(n), which is not desired.
   
From this we conclude that LinkedList is not the correct choice for this.



To solve the first problem we use a doubly linked list, where the least recently used element is available at the tail of the list and can be accessed in O(1) time, and the most recently used elements are at the head of the list.

To solve the second problem we use a hash map, so that we can check whether a URL is available in the cache in O(1) time.

So to create an LRU cache, we take the help of two data structures: a doubly linked list and a HashMap.

Please see the implementation below. It is straightforward.




package com.brainatjava.lru;

import java.util.HashMap;
import java.util.Map;

public class LRUCache {

    private DoublyLinkedList urlList;
    private Map<String, Node> urlMap;

    public LRUCache(int cacheSize) {
        urlList = new DoublyLinkedList(cacheSize);
        urlMap = new HashMap<>();
    }

    public void accessURL(String url) {
        Node pageNode = null;
        if (urlMap.containsKey(url)) {
            // If the URL is present in the cache, move its node to the head of the list
            pageNode = urlMap.get(url);
            urlList.takeURLToHead(pageNode);
        } else {
            // If the URL is not present in the cache, add it
            if (urlList.getCurrSize() == urlList.getSize()) {
                // If the cache is full, remove the tail from the cache
                // and remove it from the url map.
                urlMap.remove(urlList.getTail().getURL());
            }
            pageNode = urlList.addPageToList(url);
            urlMap.put(url, pageNode);
        }
    }

    public void printCacheState() {
        urlList.printList();
        System.out.println();
    }

    public static void main(String[] args) {
        int cacheSize = 4;
        LRUCache cache = new LRUCache(cacheSize);
        cache.accessURL("http://a");
        cache.printCacheState();
        cache.accessURL("http://b");
        cache.printCacheState();
        cache.accessURL("http://a");
        cache.printCacheState();
        cache.accessURL("http://a");
        cache.printCacheState();
        cache.accessURL("http://d");
        cache.printCacheState();
        cache.accessURL("http://c");
        cache.printCacheState();
        cache.accessURL("http://g");
        cache.printCacheState();
        cache.accessURL("http://h");
        cache.printCacheState();
        cache.accessURL("http://c");
        cache.printCacheState();
    }
}

class DoublyLinkedList {
     
    private final int size;
    private int currSize;
    private Node head;
    private Node tail;

    public DoublyLinkedList(int size) {
        this.size = size;
        currSize = 0;
    }

    public Node getTail() {
        return tail;
    }

    public void printList() {
        if(head == null) {
            return;
        }
        Node tmp = head;
        while(tmp != null) {
            System.out.print(tmp);
            tmp = tmp.getNext();
        }
    }

    public Node addPageToList(String url) {
        Node pageNode = new Node(url);       
        if(head == null) {
            head = pageNode;
            tail = pageNode; 
            currSize = 1;
            return pageNode;
        } else if(currSize < size) {
            currSize++;
        } else {
            tail = tail.getPrev();
            tail.setNext(null);
        }
        pageNode.setNext(head);
        head.setPrev(pageNode);
        head = pageNode;
        return pageNode;
    }

    public void takeURLToHead(Node node) {
        if(node == null || node == head) {
            return;
        }

        if(node == tail) {
            tail = tail.getPrev();
            tail.setNext(null);
        }
         
        Node prev = node.getPrev();
        Node next = node.getNext();
        prev.setNext(next);

        if(next != null) {
            next.setPrev(prev);
        }

        node.setPrev(null);
        node.setNext(head);
        head.setPrev(node);
        head = node;    
    }

    public int getCurrSize() {
        return currSize;
    }

    public void setCurrSize(int currSize) {
        this.currSize = currSize;
    }

    public Node getHead() {
        return head;
    }

    public void setHead(Node head) {
        this.head = head;
    }

    public int getSize() {
        return size;
    }   
}

class Node {
     
    private String url;
    private Node prev;
    private Node next;
     
    public Node(String url) {
        this.url = url;
    }

    public String getURL() {
        return url;
    }

    public void setURL(String url) {
        this.url = url;
    }
     
    public Node getPrev() {
        return prev;
    }

    public void setPrev(Node prev) {
        this.prev = prev;
    }

    public Node getNext() {
        return next;
    }

    public void setNext(Node next) {
        this.next = next;
    }
     
    public String toString() {
        return url + "  ";
    }
}
 

The approach used here combines a doubly linked list and a HashMap: the doubly linked list maintains access order and lets us find the tail (the eviction candidate) in O(1) time, and the HashMap lets us check whether a URL already exists in the cache in O(1) time.

But Java has a lesser known data structure known as LinkedHashMap, which provides the features of both a doubly linked list and a hash map.

Remember that by default the LinkedHashMap order is the insertion order, not access order. But there is a constructor that provides access order: LinkedHashMap(int initialCapacity, float loadFactor, boolean accessOrder) serves the purpose.
 
 // Straight from the Java doc
 A special constructor is provided to create a linked hash map whose order of iteration is the order in which its entries were last accessed, from least-recently accessed to most-recently (access-order). This kind of map is well-suited to building LRU caches. Invoking the put or get method results in an access to the corresponding entry (assuming it exists after the invocation completes). The putAll method generates one entry access for each mapping in the specified map, in the order that key-value mappings are provided by the specified map's entry set iterator. No other methods generate entry accesses. In particular, operations on collection-views do not affect the order of iteration of the backing map.
 link http://docs.oracle.com/javase/7/docs/api/java/util/LinkedHashMap.html

Let's see the implementation


package com.brainatjava.lru;

import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache<K, V> extends LinkedHashMap<K, V> {

    private final int size;

    private LRUCache(int size) {
        super(size, 0.75f, true); // accessOrder = true gives LRU iteration order
        this.size = size;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently used entry once the cache exceeds its size
        return size() > size;
    }

    public static <K, V> LRUCache<K, V> newInstance(int size) {
        return new LRUCache<>(size);
    }
}
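
A quick usage sketch of this LinkedHashMap-based cache (assuming it sits in the same package; the URLs mirror the earlier example):

public class LRUCacheDemo {
    public static void main(String[] args) {
        LRUCache<String, String> cache = LRUCache.newInstance(4);
        cache.put("http://a", "pageA");
        cache.put("http://b", "pageB");
        cache.get("http://a");          // marks http://a as most recently used
        cache.put("http://c", "pageC");
        cache.put("http://d", "pageD");
        cache.put("http://e", "pageE"); // evicts http://b, the least recently used
        System.out.println(cache.keySet());
    }
}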

Thursday, October 13, 2016

Caches

First of all, let's understand what a cache is. In plain computer science terms, a cache is a small buffer of pages the OS maintains in order to avoid more expensive main memory accesses.

A CPU cache is usually present on the CPU chip itself, while main memory (RAM) is placed on the motherboard and is connected to the CPU.
Because the cache is closer to the CPU, it is much faster than RAM: each read access to main memory has to travel to the CPU, while the CPU cache is right there.

Cache is more expensive than primary memory.

Why have another temporary memory when we already have cheap and large main memory?


It is mainly to improve speed.The cache is there to reduce the average memory access time for the CPU.

When the CPU needs some data from the memory, the cache is checked first and if data is available in the cache it gets it from there. There is no need to perform a memory read.

Caches are faster than main memory; however, they are smaller in size compared to main memory. Therefore, pages are likely to be swapped in and out of the cache. If a page is not found in the cache and main memory is accessed to fetch it, it's a cache miss. The page is then brought into the cache, and the next time it is accessed, it is served from the cache.

What if there is no space left in the cache when a cache miss occurs? The new page has to be swapped with one of the pages already in the cache. How do we decide which page goes out of the cache, so that the increase in cache misses is minimal? There are many approaches (eviction methods) to decide which page goes out of the cache to make space for a new page, like First In First Out, Least Recently Used, Least Frequently Used, etc.
What is least recently used cache ?

In the 'First In First Out' approach, the OS selects the page which is oldest in the cache and swaps it with the new page. In the 'Least Recently Used' approach, the OS selects the page which was not accessed for the longest period of time. In the 'Least Frequently Used' approach, the OS selects the page which has been accessed the least number of times up to a given point in time.

In this post, we would concentrate on Least Recently Used approach and implement it.

An LRU cache is similar to first in first out (FIFO) storage, where pages which came in first are evicted first. The difference between FIFO storage and an LRU cache is that when a page is accessed again, that page is moved to the top.

If a page entered the cache first, it is the first candidate to go out if it is not accessed again before the cache is full and a cache miss happens.

Here we will describe the implementation of LRU Cache in Java.

Friday, July 1, 2016

Determine the class name of an object

Sometimes it is required at runtime to get the class name of an object. Before we can do any introspection on an object, we need to find its java.lang.Class object. All types in Java, including object types, primitive types, and array types, have an associated java.lang.Class object. Please note that I am using an upper case C for java.lang.Class and a lower case c for class in general. If we know the name of a class at compile time, we can find the Class (java.lang.Class) object of that class with the syntax below.
 Class myclassObj = MyClass.class;
But at runtime, to find the Class (java.lang.Class) object of a given object, the syntax below is used.
 Class myclassObj = myobj.getClass();
Now, to find the String representation of the name of the class, we use the getName() method on the Class object.

String className=myClass.getName();
getName() gives us the fully qualified class name, i.e. the class name along with the package name.

If we want to get only the class name, without the package name prefixed, we can use the getSimpleName() method on the Class object.
Let's see a working example

package com.brainatjava.test;

import java.util.HashMap;
import java.util.Map;


public class ClassTest {
    public static void main(String[] args) throws InterruptedException {

        String s ="good";
        System.out.println("class name is: " + s.getClass().getSimpleName());

        Map<String, String> m = new HashMap<>();
        System.out.println("class name is: " + m.getClass().getName());        

        Boolean b = new Boolean(true);
        System.out.println("class name is: " + b.getClass().getName());

        StringBuilder sb = new StringBuilder();
        Class c = sb.getClass();
        System.out.println("class name is: " + c.getName());

        int[] a=new int[3];
        System.out.println("class name is: " + a.getClass().getName());

        Integer[] in=new Integer[3];
        System.out.println("class name is: " + in.getClass().getName());

        double[] du=new double[3];
        System.out.println("class name is: " + du.getClass().getName());

        Double[] d=new Double[3];
        System.out.println("class name is: " + d.getClass().getName());
       
    }
}

And we get the below output

class name is: String

class name is: java.util.HashMap

class name is: java.lang.Boolean

class name is: java.lang.StringBuilder

class name is: [I

class name is: [Ljava.lang.Integer;

class name is: [D

class name is: [Ljava.lang.Double;
If you are curious about the last four lines of the output, then please go through the explanation below.

What are [I, [D, [Ljava.lang.Integer; and [Ljava.lang.Double; ?
As we saw in the above paragraph we can get the java.lang.Class object by calling getClass() method on the object.

 If this class object represents a reference type that is not an array type then the binary name of the class is returned, as specified by The Java™ Language Specification.

If this class object represents a primitive type or void, then the name returned is a String equal to the Java language keyword corresponding to the primitive type or void.

If this class object represents a class of arrays, then the internal form of the name consists of the name of the element type preceded by one or more '[' characters representing the depth of the array nesting. The encoding of element type names is as follows:

    Element Type            Encoding
    boolean                 Z
    byte                    B
    char                    C
    class or interface      Lclassname;
    double                  D
    float                   F
    int                     I
    long                    J
    short                   S
For more details please refer to the official Oracle doc for the getName() method.

Monday, June 27, 2016

Operations on Java Streams -- continued

Filter Operation:

We can apply the filter operation on an input stream to produce another, filtered stream. Suppose we have a finite stream of natural numbers and we want only the even numbers; we can apply the filter operation here. Please note that, unlike the map operation, the elements in the filtered stream are of the same type as the elements in the input stream.

The filter operation takes the functional interface Predicate as its argument. Since the Predicate interface has a public method test which returns a boolean value, we can pass a lambda expression that evaluates to a boolean value as the argument to the filter operation.


The size of the output stream is less than or equal to the size of the input stream. Please refer to the example below.


Stream.of(1, 2, 3, 4, 5,6).filter(n->n%2==0).forEach(System.out::println);

Reduce Operation:

This combines all elements of a stream to generate a single result by applying a combining function repeatedly. Computing the sum, maximum, average, count, etc. are examples of the reduce operation.

The reduce operation takes two parameters: an initial value and an accumulator. The accumulator is the combining function. If the stream is empty, the initial value is the result.


The initial value and an element are passed to the accumulator, which returns a partial result. That partial result and the next element are then passed to the accumulator again, and this repeats until all elements in the stream have been consumed. The last value returned from the accumulator is the result of the reduce operation.

 The Stream interface contains a reduce() method to perform the reduce operation. The method has three overloaded versions:  

1.Let's take the example for the first one

T reduce(T identity, BinaryOperator<T> accumulator)


List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
int sum = numbers.stream()
                 .reduce(0, Integer::sum);
System.out.println(sum);

Here notice that 0 is the initial value and Integer::sum is the accumulator, i.e. the combining function.

2.Let's take example for the second one

<U> U reduce(U identity, BiFunction<U, ? super T, U> accumulator, BinaryOperator<U> combiner)

Note that the second argument, the accumulator, takes an argument whose type may be different from the type of the stream; it is used for accumulating the partial results. The third argument is used for combining the partial results when the reduce operation is performed in parallel: the results of the different threads are combined. If we are not running in parallel, the combiner is not used.


List<Employee> employees = getEmployees(); // hypothetical source of the employee list
int result = employees.stream()
        .reduce(0, (intermediateSum, employee) -> intermediateSum + employee.getSalary(), Integer::sum);
System.out.println(result);


The above code shows how to calculate the sum of the salaries of all the employees using the reduce operation. Here 0 is the initial value, the second argument is the accumulator, and Integer::sum is the combiner.

3.Let's take the example for the third one

Optional<T> reduce(BinaryOperator<T> accumulator)

Sometimes we cannot specify an initial value for a reduce operation. Let's assume we get a list of numbers on the fly: we have no idea whether the list is empty or has some elements, and we want to get the maximum integer value from the list of numbers. If the underlying stream is empty, we cannot initialize the maximum value; in such a case the result is not defined. This version of the reduce method therefore returns an Optional object that contains the result. If the stream contains only one element, that element is the result.

The following snippet of code computes the maximum of integers in a stream:


Optional<Integer> maxValue = Stream.of(1, 2, 3, 4, 5)
.reduce(Integer::max);
if (maxValue.isPresent()) {
System.out.println("max = " + maxValue.get());
}
else {
System.out.println("max is not available.");
}

Collect Operation:

We saw that in case of the reduce operation we get a single value as the result. But sometimes we want to collect a set of values as the result of the stream pipeline operations.

Let's take an example. We have a map of users with the user name as key and the user's account number as value. This map contains all users, both active and inactive. We also have a set of names which contains only the active users. Our requirement is to get all the active account numbers, so the result will itself be a list of active account numbers.


Set<String> activeUserList = new HashSet<>();
Map<String, String> completeUserMap = new HashMap<>();

List<String> activeAccountNumbers = completeUserMap.entrySet().stream()
    .filter(e -> activeUserList.contains(e.getKey()))
    .map(Map.Entry::getValue)
    .collect(Collectors.toList());


Here notice that first we filter completeUserMap by activeUserList, then use the map operation to get the account number from each map entry, and then collect the result into a List.

Now let's collect the same data into a map, i.e. we will collect a map of active users and their account numbers.


Map<String, String> activeUserAccounts = completeUserMap.entrySet().stream()
    .filter(e -> activeUserList.contains(e.getKey()))
    .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));


We will learn about Collectors, parallel streams, and operation reordering in detail in the next series.

Wednesday, June 15, 2016

Usage of Java streams

In part 1 of this series we saw the basics of Java streams. Now we discuss how to use streams, along with some important operations on them. Let's first see different ways to create streams.

Create streams from existing values:

There are two methods in the Stream interface to create a stream from a single value and from multiple values.

Stream<String> singleStream = Stream.of("test");
Stream<String> multiStream = Stream.of("test1", "test2", "test3", "test4");

Create empty stream :


Stream<String> stream = Stream.empty();

Create Stream from function:

We can generate an infinite stream from a function that can produce an infinite number of elements if required. There are two static methods, iterate and generate, in the Stream interface to produce infinite streams.

 static <T> Stream<T> iterate(T seed, UnaryOperator<T> f)
 static <T> Stream<T> generate(Supplier<T> s)
The iterate() method takes two arguments: a seed and a function. The seed is the first element of the stream. The second element is generated by applying the function to the first element, the third element by applying the function to the second element, and so on.

The below example creates an infinite stream of natural numbers starting with 1.


Stream<Integer> naturalNumbers = Stream.iterate(1, n -> n + 1);
The generate(Supplier<T> s) method uses the specified Supplier to generate an infinite sequential unordered stream. Here Supplier is a functional interface, so we can use lambda expressions. Let's see the example below, which
generates an infinite stream of random numbers. Here we use a method reference to generate the random numbers. Please follow the series on method references (the double colon operator) if you are not familiar with it.

Stream.generate(Math::random).limit(5).forEach(System.out::println);

Create Stream from Collections:

A Collection is the data source we usually use for creating streams. The Collection interface contains the stream() and parallelStream() methods, which create sequential and parallel streams from a Collection.

Example

Set<String> nameSet = new HashSet<>();

// add some elements to the set
nameSet.add("name1");
nameSet.add("name2");

// create a sequential stream from the nameSet
Stream<String> sequentialStream = nameSet.stream();

// create a parallel stream from the nameSet
Stream<String> parallelStream = nameSet.parallelStream();

Create Streams from Files:

Many methods were added to classes in the java.io and java.nio.file packages in Java 8 to facilitate I/O operations using streams. Let's see an example that reads the content of a file using a stream.

Path path = Paths.get(filePath);

Stream<String> lines = Files.lines(path);

lines.forEach(System.out::println);
The lines() method was added to the Files class in Java 1.8; it reads all lines from a file as a Stream.
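
Note that the stream returned by Files.lines holds an open file handle until the stream is closed, so it is best wrapped in a try-with-resources block. A small sketch (the file path is hypothetical):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class ReadFileWithStream {
    public static void main(String[] args) throws IOException {
        Path path = Paths.get("/tmp/demo.txt"); // hypothetical file
        // Files.lines keeps the file open until the stream is closed
        try (Stream<String> lines = Files.lines(path)) {
            lines.forEach(System.out::println);
        }
    }
}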

Stream Operations:

Now we will go through with some commonly used stream operations and their usage.
  1. distinct
  2. filter
  3. flatMap
  4. limit
  5. map
  6. skip
  7. peek
  8. sorted
  9. allMatch
  10. anyMatch
  11. findAny
  12. findFirst
  13. noneMatch
  14. forEach
  15. reduce
Operations 1 to 8 are intermediate operations and 9 to 15 are terminal operations. As some of the operations are self-explanatory, we discuss only those which are not trivial.

 

Map Operation:

                                                              

A map operation applies a function to each element of the input stream to produce another stream (the output stream). The number of elements in the input and output streams is the same, so this is a one-to-one mapping: the operation takes element e1 and applies the function f to it to get f(e1), and so on. But the type of elements in the output stream may be different from the type of elements in the input stream. Let's take an example.


Suppose we have 1000 keys with values in a Redis data store, and we want to fetch all the values of those keys and then perform some operation on them. We want to do it with Future objects, so how will we do it in parallel with a Java stream? We will use a thread pool service here to fetch the data from Redis. Suppose our uniqueItemIds list contains the list of keys.

HashOperations<String, String, String> redisHash = redisTemplate.opsForHash();

ExecutorService threadPoolService = Executors.newFixedThreadPool(10);

uniqueItemIds.stream()
    .parallel()
    // map: itemId -> Future holding the data the Callable fetches from Redis for that id
    .map(itemId -> threadPoolService.submit(
            (Callable<Map<String, String>>) () -> redisHash.entries(String.valueOf(itemId))))
    .forEach(future -> {
        try {
            // wait for and process each item's result
            System.out.println(future.get());
        } catch (Exception e) {
            e.printStackTrace();
        }
    });
Here the code in the Callable's call method fetches the data from Redis for the specified item id. As we know, submit returns a Future object, so the map operation here takes an itemId, which is of type Long, and returns an object of type Future. I am emphasizing the point that the type of elements in the output stream returned by the map operation may be different from the type of elements in the input stream.

flatMap Operation:     

Unlike the map operation, the Streams API supports one-to-many mapping through flatMap. The mapping function takes an element from the input stream and maps it to a stream; the type of the input element and of the elements in the mapped stream may be different. With map, this step would produce a stream of streams: if the input stream is a Stream<T>, the mapped stream would be a Stream<Stream<R>>, which is not what we want. Assume we have a map with the structure below.

Map<Integer, Map<Long, List<String>>> itemsMap = new ConcurrentHashMap<>();

//Now let's fill the map with some values.

itemsMap.put(2,  new ConcurrentHashMap<>());

itemsMap.put(3, new ConcurrentHashMap<>());

itemsMap.get(2).put(1L, Arrays.asList("abc","cde","def","rty"));

itemsMap.get(2).put(2L, Arrays.asList("2abc","2cde","2def","2rty"));

itemsMap.get(2).put(3L, Arrays.asList("3abc","3cde","3def","3rty"));

itemsMap.get(3).put(1L, Arrays.asList("abc3","cde3","def3","rty3"));

Now our aim is to get all the lists of strings in one stream. How can we achieve it?

An immediate solution that comes to mind is to write something like the below.

itemsMap.values().stream().parallel().map(m->m.values().stream()).forEach(System.out::println);

Now we get output like the following:

java.util.stream.ReferencePipeline$Head@4eec7777
java.util.stream.ReferencePipeline$Head@3b07d329

We expected to see the lists of strings in the output, but we don't. This is because inside the map a stream is produced for each inner map, so we hand a Stream<Stream<List<String>>> to forEach, which just prints each inner stream object.

Now our next attempt is like this:


itemsMap.values()
        .stream()
        .parallel()
        .map(m -> m.values().stream())
        .forEach(e -> e.forEach(System.out::println));

And the output is:

[abc, cde, def, rty]
[2abc, 2cde, 2def, 2rty]
[3abc, 3cde, 3def, 3rty]
[abc3, cde3, def3, rty3]

We manage to see all our strings together, but observe that we were still working with a Stream<Stream<List<String>>>; we just consumed it differently in the nested forEach.

The correct approach to our problem is


itemsMap.values()
        .stream()
        .parallel()
        .flatMap(m -> m.values().stream())
        .forEach(System.out::println);

So here flatMap comes to the rescue: it flattens the Stream<Stream<List<String>>> into a Stream<List<String>>. So make sure to use flatMap whenever you end up with a Stream<Stream<T>>.

We will discuss some other important operations in series 3.

Monday, June 13, 2016

Java URL Connection Timeout (http,ftp,scp etc) setting in system level

Sometimes we face issues where a thread trying to connect to a URL hangs for an infinite time. The connection may be over the http, ftp, or scp protocol, and it is really painful to debug the issue. But there are some system-level configuration properties provided by Java with which we can solve this problem.

So lets start with some simple definitions.

ConnectionTimeOut:


The timeout (in milliseconds) to establish the connection to the host. For http connections it is the timeout for establishing the connection to the http server; for ftp connections, to the ftp server; for scp connections, to the scp endpoint.

The property provided by sun for connectionTimeOut is

sun.net.client.defaultConnectTimeout (default: -1)

Note that here -1 means infinite timeout.

ReadTimeOut:


The timeout (in milliseconds) when reading from the input stream once a connection to a resource is established. It is the timeout between two consecutive packets from the socket.

The property provided by sun for readTimeOut is

sun.net.client.defaultReadTimeout (default: -1)


Retry If Post Fails:


It determines whether an unsuccessful HTTP POST request will be automatically resent to the server. Unsuccessful here means the server did not send a valid HTTP response or an IOException occurred. It defaults to true at the system level.

The property provided by sun for retrying failed posts is
sun.net.http.retryPost (default: true)
 
 
We can use these at the system level to configure a global connection timeout or read timeout. Alternatively, the timeout can be set in the client used to make the http call, for example Apache HttpClient. But it is important to note that these are Sun implementation-specific properties and may not be supported in future releases.

We can set the property like

-Dsun.net.client.defaultConnectTimeout=timeinmilliseconds
-Dsun.net.client.defaultReadTimeout=timeinmilliseconds
-Dsun.net.http.retryPost=false
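
The same properties can also be set programmatically, a minimal sketch below. Note they must be set before the first URL connection is made, since the defaults are read when the networking classes initialize; the timeout values here are illustrative.

public class TimeoutConfig {
    public static void main(String[] args) {
        // Set before any URL connection is created; values are in milliseconds.
        System.setProperty("sun.net.client.defaultConnectTimeout", "5000");
        System.setProperty("sun.net.client.defaultReadTimeout", "5000");
        System.setProperty("sun.net.http.retryPost", "false");

        // ... the rest of the application makes its http/ftp calls as usual ...
    }
}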

For more details please follow the oracle doc.

Sunday, June 12, 2016

An introduction to Java 8 Streams

Nowadays CPUs are getting cheaper and cheaper because of the huge amount of development on the hardware front. Java 8 exploits these multi-core CPU features and is continuously introducing more and more support for parallelism. As a result we are seeing features like the fork-join framework and Java streams. Here we discuss what a Java stream is and its usefulness in parallel processing.

There are 3 posts about Java streams in this blog, covering streams in detail with real world examples. I have also included some code segments the same way I used them in one of my projects. When I started to study Java streams I confused them with Java InputStream and OutputStream, but they are completely different.

In this series you will learn what a Java stream is, where to use it, and the different kinds of stream operations. We will learn about sequential and parallel streams and important operations like map, flatMap, reduce, collect, etc.

If you are not familiar with Java 8 lambda expressions, functional interfaces, and method references, I request you to visit my series on Java 8 lambda expressions first.
Let's start our discussion with java8 streams.

Aggregate operations:

But first let's define what aggregate operations are.
An aggregate operation computes a single value from a set of values. Common aggregate functions are:
  • Average() (i.e., arithmetic mean)
  • Count()
  • Maximum()
  • Median()
  • Minimum()
  • Mode()
  • Sum()
Here observe that these functions are applied on a set of values and give us a single value. The result of an aggregate function may be an object, a primitive type, or empty.

Stream:


A stream is a sequence of elements which supports sequential and parallel aggregate operations. Now let's discuss some of the features of streams. Here we use lambda expressions extensively, so if you have no idea about lambda expressions, please visit the Lambda Expression series.

Features Of Stream:


  • 1. A stream has no storage; it does not store elements. A stream pulls elements from a data source on demand (lazily) and passes them to an aggregate operation for processing. If you are wondering what the source of a stream is: it can be a collection, but not always. It can be any data source, like a generator function or an I/O channel or a data structure.

  • 2. A stream can represent a group of infinite elements. A stream pulls its elements from a data source that can be a collection, a function that generates data, an I/O channel, etc. Because a function can generate an infinite number of elements and a stream can pull data from it on demand, it is possible to have a stream representing a sequence of infinitely many data elements.

  • 3. A stream supports internal iteration, so there is no need to use a for-each loop or an iterator for accessing elements. External iteration, like a for-each loop or an iterator, usually gives the elements in sequential or insertion order, meaning only a single thread consumes the elements. But that is not always what we want, and streams are there to help us: they are designed to process their elements in parallel without our notice. This does not mean streams automatically decide on our behalf when to process elements sequentially or in parallel; we have to tell a stream that we want parallel processing, and the stream will take care of it. In the background a stream uses the Fork/Join framework to achieve parallel processing. Although streams support internal iteration, they still provide an iterator() method that returns an Iterator for external iteration of the elements if necessary, though it is rarely used.

  • 4. Two types of operations are supported by streams. We call them lazy (intermediate) operations and eager (terminal) operations. Only when an eager operation is called on the stream do the lazy operations process the elements of the stream. Let's take a problem to solve using a Java stream: suppose we have a list of 10 integers and we want the sum of the squares of the even integers. Let's see the diagram below; we write the code following it.

Datasource----------Stream---------Filter-----Stream -----map------Stream-------reduce

List<Integer> numberList = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
int sum = numberList.stream()
    .filter(n -> n % 2 == 0)
    .map(n -> n * n)
    .reduce(0, Integer::sum);


Here numberList is our data source. Calling stream() on it gives us a stream. We apply filter on it to keep only the even numbers; the filter operation gives us another stream, of even numbers. We then apply map on that stream, which gives us a stream of the squares of those numbers. Finally we apply reduce on this stream and find the sum of the numbers in it.
Here reduce is the terminal operation, and filter and map are intermediate operations. This process, in which one stream feeds another stream, which feeds the next stream and so on, is called stream pipelining.

  • 5. A stream does not remove the elements from the data source; it only reads them.

  • 6.A sequential stream can be transformed into a parallel stream by calling the parallel() method on the created stream.

  • 7. The stream-related interfaces and classes are in the java.util.stream package. Please follow the diagram below for the stream-related interfaces.

                 AutoCloseable
                       |
    BaseStream<T, S extends BaseStream<T, S>>
       |           |             |           |
   IntStream   LongStream   DoubleStream   Stream<T>

To work with elements of reference types the Stream interface is there, while to work with elements of primitive types the IntStream, LongStream, and DoubleStream interfaces are available.

Methods common to all types of streams, like sequential, parallel, isParallel, and unordered, are declared in the BaseStream interface. Methods related to intermediate and terminal operations are declared in the Stream interface.

In the second part of this series we will see the use of the stream operations with examples.
Saturday, June 4, 2016

NIO.2 Asynchronous file I/O with Future and CompletionHandler

Recently I had a requirement that involved a lot of I/O work and at the same time a lot of computation. The immediate solution that comes to mind is to start a configurable number of threads, with each thread doing its I/O independently and then starting its computation. But notice that my computation has very little to do with the I/O. In this design a thread does nothing useful while performing the I/O, and only after the I/O completes does it move on to the computation. It would be much better if the thread could initiate the I/O and, without waiting for it to complete, jump to the computation part, with someone informing the thread once the I/O completes. Until then the thread stays busy doing useful calculation.

So here we get two benefits: the thread is not waiting for I/O to complete, and at the same time it is doing some useful calculation. Note that the job of the thread is both I/O bound and CPU bound.

Asynchronous  I/O :


NIO.2 provides support for asynchronous I/O (connecting, reading, and writing). In synchronous I/O, the thread that requests the I/O operation waits until the I/O operation completes. In asynchronous I/O, the application asks the system for an I/O operation and the operation is performed by the system asynchronously. While the system is performing the I/O operation, the application continues doing other useful computation, and when the system finishes the I/O, it notifies the application about the completion of the operation.


These asynchronous channels were added in NIO.2 (Java 7) to the java.nio.channels package:

  •     AsynchronousSocketChannel
  •     AsynchronousServerSocketChannel
  •     AsynchronousFileChannel

(An AsynchronousDatagramChannel also appeared in early JDK 7 builds but was removed before the final release.)
       
Here we take AsynchronousFileChannel as our example and try to understand asynchronous I/O.

The AsynchronousFileChannel provides us two different ways for monitoring and controlling the initiated asynchronous operations.

The first is by returning a java.util.concurrent.Future object, which can be used to enquire about the operation's state and obtain the result. It follows a poll-type approach.

The second is by passing to the I/O operation an object of a new class, java.nio.channels.CompletionHandler, which defines handler methods that are executed after the operation completes. It follows a push-type approach.

Each method of the AsynchronousFileChannel class that supports an asynchronous file I/O operation has two versions: one for a Future object and another for a CompletionHandler object.

Example of poll approach using Future object:



package com.brainatjava.test;
import static java.nio.file.StandardOpenOption.CREATE;
import static java.nio.file.StandardOpenOption.WRITE;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.Future;

public class AsyncIOWithFuture {

    static String str = "write some meaningful text to file, which is desired for your applications.";

    public static void main(String[] args) {
        long startPosition = 0;
        Path path = Paths.get("/home/brainatjava/mytest");
        try (AsynchronousFileChannel asyncFileChannel =
                     AsynchronousFileChannel.open(path, WRITE, CREATE)) {
            ByteBuffer dataBuffer = ByteBuffer.wrap(str.getBytes());
            Future<Integer> result = asyncFileChannel.write(dataBuffer, startPosition);
            while (!result.isDone()) {
                try {
                    // In a real-life scenario the initiating thread would not sleep;
                    // it would do some useful work between polls.
                    System.out.println("Sleeping for one second before the next poll; we keep polling every second.");
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }

            System.out.println("Now the I/O operation is complete and we are going to get the result.");
            try {
                int bytesWritten = result.get();
                System.out.format("%s bytes written to %s%n",
                        bytesWritten, path.toAbsolutePath());
            } catch (Exception e) {
                e.printStackTrace();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

In the example above we first create an AsynchronousFileChannel for writing. Then we use the write method to write some data; it returns a Future object. Once we have the Future object, we use a polling approach to handle the result of the asynchronous file I/O: we keep calling the isDone() method of the Future object to check whether the I/O operation is finished. The rest of the code is self-explanatory. Note that while checking the result of the Future object we take a 1 second sleep, but in real life we would do some useful calculation there.

Example of push approach using CompletionHandler object:


This version of the write method of the AsynchronousFileChannel class allows us to pass a CompletionHandler object whose methods are called when the requested asynchronous I/O operation completes or fails.

CompletionHandler interface is defined in the java.nio.channels package.

The type parameters:

    V – The result type of the I/O operation
    A – The type of the object attached to the I/O operation

The CompletionHandler interface has two methods: completed() and failed(). The completed() method is called when the requested I/O operation completes successfully; the failed() method is called when the requested I/O operation fails. The API allows us to pass an object of any type to the completed() and failed() methods. Such an object is called an attachment. We may want to pass an attachment such as the ByteBuffer, a reference to the channel, or a reference to the I/O source to these methods, so that we can perform additional actions inside them. For example, we may want to close the AsynchronousFileChannel once the async I/O operation completes successfully or fails for any reason. We can also pass null as the attachment if we don't need it.
Let's create the Attachment class first.

public class Attachment {

    private Path path;
    private AsynchronousFileChannel asyncChannel;

    // getters and setters go here
}

Now let's define the CompletionHandler

private static class MyWriteCompletionHandler
        implements CompletionHandler<Integer, Attachment> {

    @Override
    public void completed(Integer result, Attachment attachment) {
        System.out.format("%s bytes written to %s%n",
                result, attachment.getPath().toAbsolutePath());
        try {
            attachment.getAsyncChannel().close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void failed(Throwable e, Attachment attachment) {
        System.out.format("I/O operation on %s file failed with error: %s",
                attachment.getPath(), e.getMessage());
        try {
            attachment.getAsyncChannel().close();
        } catch (IOException e1) {
            e1.printStackTrace();
        }
    }
}


public class ASyncIOWithCompletionHandler {

    static String str = "write some meaningful text to file, which is desired for your applications.";

    public static void main(String[] args) {
        Path path = Paths.get("/home/brainatjava/mytest");
        try {
            AsynchronousFileChannel asyncFileChannel =
                    AsynchronousFileChannel.open(path, WRITE, CREATE);
            MyWriteCompletionHandler handler = new MyWriteCompletionHandler();
            ByteBuffer dataBuffer = ByteBuffer.wrap(str.getBytes());
            Attachment attachment = new Attachment();
            attachment.setAsyncChannel(asyncFileChannel);
            attachment.setPath(path);
            asyncFileChannel.write(dataBuffer, 0, attachment, handler);

            try {
                System.out.println("Sleeping for 10 seconds...");
                Thread.sleep(10000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("Completed");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Here the main thread sleeps for 10 seconds, but in a real-life scenario the main thread would do some useful calculation rather than sleeping.

Sunday, May 8, 2016

Remote debugging Tomcat with Eclipse

Sometimes it is required to debug a remote application that is deployed in Tomcat on the local network. We can configure Eclipse to debug the remote application locally against a running Tomcat instance that is configured with JPDA support. JPDA is a multi-tiered debugging architecture; please visit the Oracle docs for more about JPDA.

First let's set up tomcat for remote debugging using JPDA

Setting Up Tomcat For Remote Debugging

 Tomcat can be configured to allow a program such as Eclipse to connect remotely using JPDA and see debugging information. We make the configuration change in catalina.sh. Open catalina.sh or catalina.bat, find the line with JPDA_TRANSPORT, and change the values of the parameters as follows:

JPDA_TRANSPORT=dt_socket
JPDA_ADDRESS="ipwhereyourapplicationisdeployed:5001" (port 5001 is not mandatory, you can give any unused port)

We could also write JPDA_ADDRESS="*:5001", but that has security implications, so it is good practice to specify the ip.

Then we start Tomcat with the command ./catalina.sh jpda run. And now we are ready to go.

Alternatively, without changing anything in catalina.sh or catalina.bat, we can export the same parameters in the shell before starting Tomcat:

In Unix
export JPDA_ADDRESS=ip:5001
export JPDA_TRANSPORT=dt_socket
bin/catalina.sh jpda run
In Windows
set JPDA_ADDRESS=ip:5001
set JPDA_TRANSPORT=dt_socket
bin/catalina.bat jpda start
Once we have Tomcat configured, we start it manually from the command line.


Now we configure our Eclipse to attach to the Tomcat instance running remotely.

Setting Up Eclipse:

1. Set a break point in Eclipse.
2. Then go to Debug As > Debug Configurations...
3. Then double click on the heading Remote Java Application; the configuration window will appear.
4. In the host field, write the ip address you mentioned in JPDA_ADDRESS, and in the port field write the port you mentioned in JPDA_ADDRESS. In our case it is 5001.
5. Now click Apply and then Debug.

Now Eclipse is ready for debugging.

The above scenario was tested on Linux Mint 17.1 Rebecca, STS 3.7.2, and apache-tomcat-8.0.24.

Saturday, April 23, 2016

Redis Pub Sub with Jedis

These days we were working on a project which relies heavily on Redis for its data sync process. To give more clarity, here is a brief of the scenario.

There are two parts to the system. One is a producer which builds a cache in Redis, and the other is a consumer which is subscribed to the same Redis. The subscription is through a channel: the consumer subscribes to the channel, and when there is a change, it is published on the channel and the consumer updates itself accordingly. Here we used Jedis as the Redis client.



In Redis, we can subscribe to multiple channels, and when someone publishes messages on those channels, Redis notifies us with the published messages. Jedis provides this functionality with the JedisPubSub abstract class. To handle pub/sub events, we need to extend the JedisPubSub class and implement the abstract methods.



package com.brainatjava.test;

import redis.clients.jedis.JedisPubSub;

public class XyzListener extends JedisPubSub {

    @Override
    public void onMessage(String channel, String message) {
        System.out.println(message);
    }
}
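
For completeness, the producer side publishes on the same channel. A minimal sketch with Jedis (the host, port, and message are placeholders):

import redis.clients.jedis.Jedis;

public class XyzPublisher {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Every subscriber of "channelName" receives this message via onMessage()
            jedis.publish("channelName", "cache updated at " + System.currentTimeMillis());
        }
    }
}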

Now we write the code for registering the listener in a different class.

public class TestProgram {

    // The fields below (redisTemplate, xyzListener, jedisConnectionExceptionFlag, logger)
    // are assumed to be declared and initialized elsewhere in the application.

    private void registerListeners() {
        if (!xyzListener.isSubscribed()) {
            Runnable task = () -> {
                try {
                    Jedis jedisConnection = (Jedis) redisTemplate.getConnectionFactory()
                            .getConnection().getNativeConnection();
                    jedisConnectionExceptionFlag = false;
                    // subscribe() is a blocking call; this thread waits for published messages
                    jedisConnection.subscribe(xyzListener, "channelName");
                } catch (JedisConnectionException jce) {
                    logger.error("got jedis connection exception " + jce.getMessage(), jce);
                    jedisConnectionExceptionFlag = true;
                } catch (RedisConnectionFailureException rce) {
                    logger.error("got RedisConnectionFailureException " + rce.getMessage(), rce);
                    jedisConnectionExceptionFlag = true;
                } catch (Exception e) {
                    logger.error("error in registerListeners " + xyzListener.isSubscribed(), e);
                }
            };

            Thread xyzUpdater = new Thread(task);
            xyzUpdater.setName("xyzUpdater");
            xyzUpdater.start();
        }
    }
}
If we look at the above code, we see that first we check whether xyzListener is subscribed to the required channel, and if not, we subscribe it. But observe that we are doing it in a somewhat hacky way: we get the native connection first and then subscribe to the channel. And subscribe is a blocking call; it acts in a wait-and-watch mode.

So when there is a change on the channel, the listener hears it and updates its cache accordingly. But there should always be a fail-safe mechanism in place, and we had one: if somehow the Jedis pub/sub is not working, we have a mechanism to update the cache manually.

The mechanism is a cron scheduler running every 15 minutes and checking the cache timestamps: if the timestamp of the latest cache and the timestamp of the consumer differ, we assume there is an issue with pub/sub and we update the consumer cache manually.

With this design everything was fine, and the 15-minute cron sat there without any use. After some days of smooth operation we got an alert that the 15-minute cron was running and manually updating the cache. It had no impact on our service, as the cache was getting updated manually with the help of the cron scheduler.

But why did this happen? After investigating for some time, we found that Redis had been restarted some days earlier, and this created the whole issue: the listener kept behaving as if it were subscribed even though the subscription was lost with the restart. Now you know the solution.

Friday, March 11, 2016

The try-with-resources Statement

It is always required to close resources like database connections and file handles (BufferedReader, BufferedWriter, etc.) after use; otherwise we face resource leaks. And sometimes we forget to close the resources after use. In Java 7 an interface named AutoCloseable was introduced, and resources like Connection, BufferedReader, and BufferedWriter implement AutoCloseable.
The try-with-resources statement is a try statement that declares one or more resources. The try-with-resources statement ensures that each resource is closed at the end of the try statement. Any object that implements java.lang.AutoCloseable,  can be used as a resource inside the try-with-resources statement.
The following example writes a line to a file. It uses an instance of BufferedWriter to write data to the file. BufferedWriter is a resource that must be closed after the program is finished with it:
static void writeALineToFile(String path) throws IOException {
    try (BufferedWriter bw =
                 new BufferedWriter(new FileWriter(new File(path)))) {
        bw.write("a line of text");
    }
}
In this example, the resource declared in the try-with-resources statement is a BufferedWriter. The declaration statement appears within parentheses immediately after the try keyword. The class BufferedWriter, in Java SE 7 and later, implements the interface java.lang.AutoCloseable. Because the BufferedWriter instance is declared in a try-with-resources statement, it will be closed regardless of whether the try statement completes normally or abruptly (for example, as a result of the method BufferedWriter.write throwing an IOException).

Let's make it clear that here try-with-resource statement and try block are two different things.

Prior to Java SE 7, we could use a finally block to ensure that a resource is closed regardless of whether the try statement completes normally or abruptly. The following example uses a finally block instead of a try-with-resources statement:

static void writeALineToFileWithFinally(String path) throws IOException {
    BufferedWriter bw = null;
    try {
        bw = new BufferedWriter(new FileWriter(new File(path)));
        bw.write("a line of text");
    } finally {
        if (bw != null) bw.close();
    }
}
However, in this example, if the methods write and close both throw exceptions, then the method writeALineToFileWithFinally throws the exception thrown from the finally block; the exception thrown from the try block is suppressed.

In contrast, in the writeALineToFile example, if exceptions are thrown from both the try block and the try-with-resources statement (when it closes the BufferedWriter), then the method throws the exception thrown from the try block; the exception thrown by the automatic close is suppressed.

We can retrieve the suppressed exceptions by calling the Throwable.getSuppressed method on the exception thrown by the try block.
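
For example, a caller can inspect them like this (a small sketch; it assumes the writeALineToFile method above is in scope, and the file path is illustrative):

try {
    writeALineToFile("/tmp/demo.txt");
} catch (IOException e) {
    System.out.println("primary: " + e);
    // Exceptions thrown while closing the resource are attached as suppressed
    for (Throwable suppressed : e.getSuppressed()) {
        System.out.println("suppressed: " + suppressed);
    }
}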


Note: A try-with-resources statement can have catch and finally blocks just like an ordinary try statement. In a try-with-resources statement, any catch or finally block is run after the resources declared have been closed.
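
For instance, a sketch of a try-with-resources statement with a catch block; the resource is already closed by the time the catch block runs:

static void writeQuietly(String path) {
    try (BufferedWriter bw = new BufferedWriter(new FileWriter(new File(path)))) {
        bw.write("a line of text");
    } catch (IOException e) {
        // Runs after bw has already been closed
        e.printStackTrace();
    }
}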