
Introduction to Nashorn

Java 8 introduced a new JavaScript engine named “Nashorn”. Nashorn is based on the Da Vinci Machine, a project whose aim is to add dynamic language support to the JVM. Nashorn is a nice milestone that makes building hybrid software easier than before: its features enable full-duplex communication between your Java code (or any other JVM language) and JavaScript.

The simplest way to use Nashorn is the command-line tool bundled with JDK 8 and OpenJDK 8, which you can find in “/bin”. Executing jjs brings up the jjs prompt, where you can work with Nashorn interactively; you can also pass JS files as arguments to jjs. You can find a basic example of using jjs below.

Consider the following simple.js file:

var name = "Nashorn";
print(name);

Now, by calling jjs simple.js, the text “Nashorn” will be printed on your screen.

I think that is enough jjs for an introduction; if you need more information, run jjs -help.

You can also use the Nashorn script engine from your Java code. Consider the following class:

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class Program {

    public static void main(String... args) throws ScriptException {
        ScriptEngineManager factory = new ScriptEngineManager();
        ScriptEngine nashornEngine = factory.getEngineByName("nashorn");
        nashornEngine.eval("print('hello world');");
    }
}

With this simple code a very nice “hello world” appears on your screen. You can also evaluate JS files in your script engine: the ScriptEngine interface has an eval overload that takes a Reader, so you can simply pass any object that is an instance of the Reader abstract class. Consider the following code:

script1.js content:

var version = 1;

function hello(name) {
      return "hello " + name;
}

Program.java content:

import java.io.InputStreamReader;
import javax.script.Invocable;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class Program {

    public static void main(String... args) throws ScriptException, NoSuchMethodException {
        ScriptEngineManager factory = new ScriptEngineManager();
        ScriptEngine nashornEngine = factory.getEngineByName("nashorn");
        nashornEngine.eval(new InputStreamReader(Program.class.getResourceAsStream("script1.js")));
        Invocable invocable = (Invocable) nashornEngine;
        Object result = invocable.invokeFunction("hello", "soroosh");
        System.out.println(result);
        System.out.println(nashornEngine.get("version")); // reads the "version" variable defined in script1.js
    }
}

The ScriptEngine interface also has a get method; as you noticed in the sample, you can call it to retrieve any variable or other state defined in your script engine. In the example above, “version” is a variable declared in the script1.js file.

Every script engine has its own implementation of the ScriptEngine interface, and there are some optional interfaces a script engine can implement to extend its functionality. If you check the source code of NashornScriptEngine, the class signature is:

public final class NashornScriptEngine extends javax.script.AbstractScriptEngine implements javax.script.Compilable, javax.script.Invocable


So the Nashorn script engine lets you use these two interfaces as well. In the example above, we used the Invocable interface to call functions declared in our script engine.

Note: a ScriptEngine is stateful, so if you call functions or eval code on your script engine, the state of its objects and variables can affect the results of later calls.


In this post I tried to introduce Nashorn in a very basic and practical way. In future posts I will demonstrate Java + Nashorn interoperability in more depth, along with its real-world usages.



Default methods: an approach to extending legacy code

As you know, the new version of Java was released on 18 March 2014, and I am going to write a series of posts demonstrating its new features; at some points I will also share my own opinions and criticisms of them.

The first feature I think is important is “default methods”. In all previous versions of the Java language, interfaces could contain only method declarations, not method implementations (method bodies). Java 8 adds a new feature that lets you declare methods together with their implementations in interfaces.

Thanks to this new feature, you can create an interface like:

public interface Dog {
    void bark();

    default void bite() {
        System.out.println("Biting Biting Biting");
    }
}

public class Husky implements Dog {
    public void bark() {
        // a Husky-specific bark goes here
    }

    public static void main(String... args) {
        Dog dog = new Husky();
        dog.bite(); // uses the default implementation from Dog
    }
}

It is completely self-explanatory: you can add behavior to your interfaces, and all implementing classes will get this behavior as the method’s default implementation, so they are not forced to implement default methods.

The reason for default methods

In one of the previous posts we had an introduction to the Open/Closed Principle. As a review: under this principle, classes should be closed for modification and open for extension. I think default methods do not follow this principle, but there are cases where we otherwise have no good way to extend our legacy code.

For example, Java 8 added a language feature that lets you use lambdas on collections. One way to use it is by calling the stream method of the Collection interface; if stream were just a method declaration, all existing code implementing Collection would break.

It has also happened to me that I needed to extend an interface, but because many clients were already using it, I had to find another solution, and unfortunately most of the time it was a messy one.

Some points about default methods

There are some points you should know when you want to use default methods, or use code that relies on them.

    • Extending interfaces that contain default methods:
      When you extend or implement an interface with default methods, you have three choices for each default method.

      • You can accept the default implementation and simply not redefine it.
      • You can redeclare it, which makes it abstract again.
      • You can override it by redefining it.


    • Multiple inheritance with default methods: 

      By using default methods you can build classes that mix in behavior from several interfaces, but there is an important point to notice.
      If the extended interfaces share a common method signature, you will get a compile-time error because of the ambiguity between the two implementations of the same signature. In this situation you must override the method and either implement it with your own code or select one of the default implementations.

public interface FirstInterface {
    default void doSomething() {
        System.out.println("Doing something from FirstInterface");
    }
}

public interface SecondInterface {
    default void doSomething() {
        System.out.println("Doing something from SecondInterface");
    }
}

public class FirstImplementation implements SecondInterface, FirstInterface {

    public void doSomething() {
        // resolve the ambiguity by selecting one of the defaults
        FirstInterface.super.doSomething();
    }

    public static void main(String... args) {
        new FirstImplementation().doSomething();
    }
}
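The three choices listed earlier for dealing with an inherited default method can be sketched in one compact example (all names here are hypothetical, made up for the illustration):

```java
interface Greeter {
    default String greet() { return "hello"; }
}

// Choice 1: accept the default implementation as-is.
class PlainGreeter implements Greeter { }

// Choice 2: redeclare the method, making it abstract again.
interface FormalGreeter extends Greeter {
    String greet(); // implementors of FormalGreeter must now provide a body
}

// Choice 3: override the default with a new implementation.
class LoudGreeter implements Greeter {
    @Override
    public String greet() { return "HELLO"; }
}

public class DefaultChoices {
    public static void main(String... args) {
        System.out.println(new PlainGreeter().greet()); // hello
        System.out.println(new LoudGreeter().greet());  // HELLO
    }
}
```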

Java Objects Memory Size Reference

Whether you are a well-grounded Java programmer or a newcomer, it is essential to know how memory consumption is calculated in Java. So in this article I am going to write about the memory consumption of objects, data types and collections, which are the most important pieces in Java.

Instances of an object on the Java heap take up memory for their actual fields plus housekeeping information, which consists of a record of the object’s class, its ID, and status flags such as whether the object is currently reachable, currently synchronization-locked, and so on.

Each object reference occupies 4 bytes if the Java heap is under 32 GB and -XX:+UseCompressedOops is turned on (it is on by default). Otherwise, object references occupy 8 bytes. If the number of bytes required by an object for its header and fields is not a multiple of 8, it is rounded up to the nearest multiple of 8 by padding.

Padding, or alignment: the JVM allocates memory in multiples of 8 bytes. For example, look at the following class:

class X {
    int a;
    byte b;
}

The JVM allocates 12 bytes for the object header + 4 bytes (the a variable) + 1 byte (the b variable) = 17 bytes, plus 7 bytes of padding to round up to 24, the nearest multiple of 8.
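The rounding rule itself is simple arithmetic; a minimal sketch of the calculation (the class and method names are my own):

```java
public class Padding {
    // Round a raw byte count up to the next multiple of 8,
    // mirroring the JVM's object-alignment rule.
    static int padded(int rawBytes) {
        return (rawBytes + 7) & ~7;
    }

    public static void main(String... args) {
        // 12-byte header + 4 bytes (int a) + 1 byte (byte b) = 17 raw bytes
        System.out.println(padded(17)); // 24
    }
}
```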
Primitive data types
All primitive data types occupy the following sizes:

  • byte, boolean: 1 byte.
  • short, char: 2 bytes.
  • int, float: 4 bytes.
  • long, double: 8 bytes.

Numeric wrappers
Numeric wrappers occupy a 12-byte header plus the size of the underlying type:

  • Byte, Boolean: 12 bytes + 1 byte data + 3 bytes alignment = 16 bytes.
  • Short, Character: 12 bytes + 2 bytes data + 2 bytes alignment = 16 bytes.
  • Integer, Float: 12 bytes + 4 bytes data = 16 bytes.
  • Long, Double: 12 bytes + 8 bytes data + 4 bytes alignment = 24 bytes.

HashMap, HashSet
HashMap is built on top of an array of “Map.Entry” objects. Each entry contains a key, a value, the hash of the key (an int) and a pointer to the next entry, which means an entry occupies 32 bytes (12 bytes header + 16 bytes data + 4 bytes padding). So a HashMap with size S spends 32 * S bytes on entry storage. Besides that, it uses 4 * C bytes for the entry array, where C is the map capacity. The memory consumption of a HashSet is identical to a HashMap’s.
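Those two terms can be captured in a small helper; this is just a sketch using the per-entry figures from the estimate above, and it ignores the memory of the key and value objects themselves:

```java
public class MapFootprint {
    // Estimated HashMap overhead: 32 bytes per stored entry
    // plus 4 bytes per slot of the internal entry array.
    static long hashMapBytes(long size, long capacity) {
        return 32 * size + 4 * capacity;
    }

    public static void main(String... args) {
        // 1000 entries in a map whose table has grown to 2048 slots
        System.out.println(hashMapBytes(1000, 2048)); // 40192
    }
}
```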

LinkedHashMap is not efficient and is famous as the most memory-hungry collection in the JDK. It extends HashMap by using “LinkedHashMap.Entry”, which adds two more pointers per entry, as the entry type in the internal array. This means that LinkedHashMap consumes 40 * SIZE + 4 * CAPACITY bytes.

TreeMap, TreeSet
A TreeMap contains one node per entry. Each tree node contains a key, a value, pointers to the left and right children, a pointer to the parent and a boolean ‘colour’ flag. So a node occupies:
12 bytes for the header
20 bytes for 5 reference fields (key, value, left child, right child, parent; 4 bytes each)
1 byte for the flag
which rounds up to 40 bytes. So the total memory consumption of a TreeMap is 40 * SIZE bytes, which is roughly the same as the memory consumption of a HashMap.
TreeSet is built on top of a TreeMap, so its memory consumption is identical: 40 * SIZE bytes.
LinkedList
Each LinkedList node contains references to the previous element, the next element and the data value. So each node costs a 12-byte header + 3 * 4 bytes of references = 24 bytes, which is 6 times more than an ArrayList in terms of per-node overhead.

ArrayList
An ArrayList has an Object[] array for storage plus an int field tracking the list size, so the per-element overhead is just 4 bytes (more if the ArrayList’s capacity is considerably larger than its size).

Collection Overview

JDK collection        Size

HashMap               32 * SIZE + 4 * CAPACITY bytes
HashSet               32 * SIZE + 4 * CAPACITY bytes
LinkedHashMap         40 * SIZE + 4 * CAPACITY bytes
TreeMap, TreeSet      40 * SIZE bytes

Note: This article is presented by Saeid Siavashi. Know more about him in:


CAP is not just for your head.

Today I’d like to write about an important theorem in distributed computer systems. I’m sure you noticed the subject of this post is the CAP theorem (also known as Brewer’s theorem). Eric Brewer is the man who proposed the CAP theorem in 2000.

CAP is an acronym of three words:

Consistency: All nodes must read the latest changed data; in other words, every node in our distributed system should read the same data. If a write operation occurred on one of the nodes, reading the same data from another node must return the latest write (once the system has received something newer, it must not return any of the older data items).

Availability: No request may be blocked by any node; every request must receive a response about its status.

Partition Tolerance: The system continues its normal tasks even if some messages are lost or some parts of the system fail.

The CAP theorem is about the impossibility of having all of these attributes together in one system. Every distributed system can have at most two of these three attributes. Most references present CAP as a triangle of which a distributed system can have just two of the corners.

CAP Triangle


Examples of Consistency + Availability are:

  • Single-node Databases
  • Cluster Databases
  • LDAP
  • xFS file system

Examples of Consistency + Partition Tolerance are:

  • Distributed Databases
  • Distributed Locking
  • Majority Locking

Examples of Availability + Partition Tolerance are:

  • Coda
  • Web caching
  • DNS

You can read the formal proof of the CAP theorem in “Brewer’s Conjecture and the Feasibility of Consistent, Available, Partition-Tolerant Web Services”. It is not very hard, and reading it can clear everything up for you.

But in this post I will justify the concept informally.

Assumption: In a simple distributed system we have two nodes, NODE A and NODE B. A client writes “DataItem1” to NODE A, and at the same time another client requests to read “DataItem1” from NODE B.

Assume we have a CA environment, so the data on all nodes is consistent and every node can execute every query. If all the messages between the nodes fail, then a query to NODE B cannot return the latest value of the data item. As you see, there are situations where we cannot have a “CA” environment together with “P”.

Assume we have a CP environment, so the data on all nodes is consistent and we have partition tolerance. Now if, before “DataItem1” is written to NODE A, the connection between the two nodes breaks, a request to NODE B cannot execute our query, so we have lost availability: NODE B wants to sync its data with NODE A, but the connection is broken, so a response cannot be made available.

At last, assume we have an AP environment. Every request to a node will get a response, and partition tolerance lets our system continue its tasks despite message and system failures. If a client writes “DataItem1” to NODE A while another client requests “DataItem1” from NODE B and the connection between the two nodes is broken, the second client will read an old version of “DataItem1”, so we have lost consistency.

Note: Delays in communication and synchronization between nodes are always possible; this is the most important reason an AP system cannot have full consistency. In these environments we have partial consistency between our nodes.

Singleton Pattern

There are many situations where you need just one instance of a kind of object; a thread pool, a cache and a registry are some examples. You may ask: “Why do I need a pattern for this problem? I can just create one object and make it a global static.” As you will see, with the Singleton pattern we want to be sure there is just one instance in the whole system. We need a solution that does not let the programmer create more than one instance, rather than leaving the programmer to manage the instance’s uniqueness.

Singleton is not very complex, but it has some trap points that we explain in this post. First, let me show you what Singleton is.

In the Singleton pattern we don’t let clients create the object, so our class does not have any public constructor (and a protected one is a trap).

public class SingletonObject {
    private SingletonObject() {
    }
}
Now no one outside your class can create an instance of it. Next, you need to add a public static method to retrieve your unique object.

public class SingletonObject {
    private static SingletonObject object;

    private SingletonObject() {
    }

    public static SingletonObject getInstance() {
        if (object == null) {
            object = new SingletonObject();
        }
        return object;
    }
}

That is all of the Singleton pattern; now you can be sure you have just one SingletonObject instance in your application. But there are two issues you need to consider. First, don’t use a protected constructor in your singleton class, because then anyone can extend your singleton class and add a new public constructor to it 🙂 so you could no longer restrict programmers to one object in their applications. A complementary solution is to use the final keyword in your class declaration.

The other issue arises in concurrent environments. When your object has not been initialized yet and two threads want to get an instance at the same time, it is possible to end up with two objects of your singleton, so you need to make your object creation thread safe.

The first solution is to synchronize your getInstance method.

public static synchronized SingletonObject getInstance() {
    if (object == null) {
        object = new SingletonObject();
    }
    return object;
}
A very easy and simple solution, but if getInstance is invoked very frequently, the synchronization overhead becomes significant. In this situation you can create your object eagerly; eager creation is not possible everywhere, but where it is possible it is a good choice. The last solution is “double-checked locking”: you check the object reference first and only synchronize for object creation, which reduces the overhead. Note that for double-checked locking to be safe, the object field must be declared volatile.

public static SingletonObject getInstance() {
    if (object == null) {
        synchronized (SingletonObject.class) {
            if (object == null) {
                object = new SingletonObject();
            }
        }
    }
    return object;
}
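The eager-creation option mentioned above can be sketched like this (a minimal sketch; the class name is hypothetical):

```java
public class EagerSingleton {
    // Created when the class is initialized; the JVM's class-loading
    // guarantees make this thread safe with no synchronization needed.
    private static final EagerSingleton INSTANCE = new EagerSingleton();

    private EagerSingleton() {
    }

    public static EagerSingleton getInstance() {
        return INSTANCE;
    }
}
```

The trade-off is that the instance is built even if it is never used.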

But since release 1.5, the best way to implement the Singleton pattern is using an enum; with this model of implementation you have no concerns about synchronization or the number of objects in your application.

public enum SingletonObject {
    INSTANCE;

    public void aMethodOfSingletonObject() {
        // do something
    }
}
What do you think about the enum approach? Can we use this model everywhere?

Greedy Boss – Adapter Pattern

You work for a company whose goal is to hold concerts all over the world. Your job is to model the seats of the concerts, and everything is clear: you know object orientation very well and this task is just child’s play 😉 So you start your modeling. In this domain you have some seats; some of them are first class and the others are business. An audience member can sit on a seat if the seat is empty.

So you start your task. Hmmm, maybe at first you’d like to model seats with an interface; that is a good choice, since preferring interfaces to concrete classes is generally not bad.

public interface Seat {
    void sit(Audience audience);

    boolean isEmpty();

    void release();

    Audience getAudience();
}

The Seat interface is very simple and understandable, and you follow the “Interface Segregation Principle” well. Now you can implement the business seat and the first-class seat very easily. Maybe there is some duplicated code you could factor out with an abstract class, since the duplicated code is identical across the classes, but that is not our case here.

Your boss wants to hold a concert in a green field! At first it just sounds strange, but then you wonder why your boss wants to do that. The boss found a green field with many stumps! HAHA, you are right: your boss wants to sell tickets for sitting on stumps too. He wants you to change your model to support stumps as seats as well. The Stump implementation was written before you joined, and you have to use it.

public class Stump {
    private boolean empty = true;

    public void sit() {
        empty = false;
    }

    public void release() {
        empty = true;
    }

    public boolean isEmpty() {
        return empty;
    }
}

A very, very simple implementation of a stump. But don’t forget: you must not change the Stump class to make it implement your Seat interface; that’s obvious. A stump is not a seat at all; you just want to use it as a seat in your current model. So what is your solution?

All you need is to extend Stump with the new Seat interface. For this purpose you can create a new class that implements Seat and holds a Stump by composition; then you write your adapter code in this new intermediate class.

public class StumpAdapter implements Seat {
    private final Stump stump;
    private Audience audience;

    public StumpAdapter(Stump stump) {
        this.stump = stump;
    }

    public void sit(Audience audience) {
        this.stump.sit();
        this.audience = audience;
    }

    public boolean isEmpty() {
        if (this.audience == null) {
            return this.stump.isEmpty();
        }
        return false;
    }

    public void release() {
        this.stump.release();
        this.audience = null;
    }

    public Audience getAudience() {
        return this.audience;
    }
}
In your new adapter you keep the state and add the behavior you need to adapt the legacy class to the required interface.

Now, to test your code, you can write an epic test:

public class ConcertTest {
    @Test
    public void concert_epic() {
        SimpleAudience audience1 = new SimpleAudience();
        Stump stump = new Stump();
        final Seat stumpAsSeat = new StumpAdapter(stump);
        stumpAsSeat.sit(audience1);
        Assert.assertEquals(audience1, stumpAsSeat.getAudience());
    }
}


Congratulations, you have adapted your legacy code to your new design. You can use this pattern any time you are not the owner of the code you must reuse (legacy code), and, most importantly, when you want to follow the “Open/Closed Principle”.

Introduction to Design Pattern

In this post I want to explain what design patterns are and how they can help us.

Many software engineers think the starting point of design patterns is the book “Design Patterns: Elements of Reusable Object-Oriented Software”, written by Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides, who are known as the Gang of Four (GoF). But patterns actually started with Christopher Alexander, a professor of architecture at Berkeley, so patterns were first stated in the field of architecture, not computer science. If you’d like to know more about his patterns and solutions, you can read these two books: The Timeless Way of Building and A Pattern Language.

There are many advantages to using design patterns; here are the most important reasons:

  • Common vocabulary: design patterns help your team, and all the software engineers you collaborate with, to share a vocabulary that improves the understanding of your problem and your solution. As an example, when you don’t know patterns, you might explain a solution like this:
    “I changed all of the constructors of my class to private and then created a static field of the same class; then I added a getter method that returns that static object every time a client calls it, so I can be sure I have just one object of my class in the whole system.” Everything is fine, but how much effort does it take to understand what your colleague means? Instead of this explanation, your colleague could just say: “I used the Singleton pattern for creating object x”.
  • Common problems have common solutions: many developers like to reason through every problem they face on their own, but don’t forget that many problems have common solutions; you are not alone with them and they are not just yours. If you use common solutions for your problems, your code will be cleaner and more understandable for other developers. On top of that, you can focus on your design and your domain-specific problems.
  • Transition from implementation to design: when you want to present a solution and you don’t have a common vocabulary, you have to explain the implementation details; as a consequence you lose your abstraction at design time and have to talk about objects, their relations and other implementation-time issues.

Design patterns in the object-oriented world can be divided into three categories:

  • Creational patterns: also known as factory patterns, these are the conventional patterns for object-creation problems and their common solutions.
  • Behavioral patterns: these patterns are about how we can change, enhance, extend, etc. the behavior of objects.
  • Structural patterns: these patterns present solutions for structural problems between objects.

Nowadays there are many more patterns in various domains of software engineering; Enterprise Integration Patterns, Service-Oriented Architecture Patterns and Software Architecture Patterns are just some examples.

I will explain design patterns one by one in my next posts. I hope you now have a clear understanding of design patterns; please contact me if anything is unclear.

SOLID principles

Object orientation has some principles; when you follow them, the non-functional attributes of your code are enhanced. SOLID is an acronym for five principles introduced by Robert C. Martin; the acronym itself was created by Michael Feathers.

  • Single Responsibility Principle: every class should have just one responsibility and encapsulate it entirely. The most important consequence of this principle is that there must be just one reason for a class to change. For example, when you want to read a config file for database connection information and then connect to the database, you must have two separate classes: one for reading the config file and the other for connecting to the database.
  • Open/Closed Principle: every class should be open for extension and closed for modification. When you write a class, new features and enhancements should be added by extending the class, not by modifying its source code, so your code must be extendable. In other words, modify your code only to fix errors; new features and enhancements need new classes that extend the current ones.
  • Liskov Substitution Principle: if S is a subtype of T, then objects of type T may be replaced with objects of type S without altering any desirable property of your program.
  • Interface Segregation Principle: no client should be forced to depend on methods it does not use. ISP means your interfaces must be small and specific to a goal.
  • Dependency Inversion Principle: high-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend upon details; details should depend upon abstractions. When you are designing an application, it is a bad idea to create dependencies between high-level and low-level modules; instead, try to express the dependencies as specifications, so your high-level modules depend only on the specification and you can substitute module implementations very easily. There are many solutions for dependency inversion; Plugin, Service Locator and Dependency Injection are some of these patterns.
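As an illustration of the last principle, here is a minimal dependency-inversion sketch; all the names are made up for the example:

```java
// The abstraction that both sides depend on.
interface MessageSender {
    void send(String message);
}

// Low-level module: one concrete detail behind the abstraction.
class ConsoleSender implements MessageSender {
    public void send(String message) {
        System.out.println("sending: " + message);
    }
}

// High-level module: depends only on the MessageSender abstraction,
// so implementations can be swapped without touching this class.
class Notifier {
    private final MessageSender sender;

    Notifier(MessageSender sender) {
        this.sender = sender;
    }

    void announce(String message) {
        sender.send(message);
    }
}

public class DipDemo {
    public static void main(String... args) {
        new Notifier(new ConsoleSender()).announce("hello");
    }
}
```

Because Notifier receives its MessageSender through the constructor, a test or a different deployment can hand it any other implementation.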


I want to start explaining some important design patterns in my next posts, and that is one of the reasons I explained the SOLID principles. There are some other principles you may like to know about as well (YAGNI, GRASP, KISS, …).

Introduction to JUnit – Part 2

In the previous part of Introduction to JUnit I explained how to add JUnit to a Java project and what the primary JUnit annotations are.
In this part I explain how we can test our application with a simple example. If you remember, I introduced an object factory; this object factory can build some objects and return them to clients.

Our object factory can be used in connection pools, load balancers and many other solutions.

Step 1:

We need to prepare our environment. In this tutorial I use Maven, so create a new Maven project and add the following dependency to your pom.xml:
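A typical JUnit 4 dependency entry looks like the following; the version number is an assumption, so use whichever 4.x release is current:

```xml
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
</dependency>
```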


Then create a test class named ObjectFactoryTest in the src/test/java directory of your Maven project.
Maybe it seems strange at first, but in Test-Driven Development we first create the test and then write code just to pass that test. If you are not familiar with this kind of development, read Test Before, Rest After!!

Step 2:

We need to write a test for our API and then write some code to pass it. So in this step we write our first test method.

private int NUM = 10;

@Test
public void when_ObjectFactory_is_created_its_capacity_must_be_set() {
    final ObjectFactory objectFactory = new ObjectFactory(NUM);
    Assert.assertEquals(NUM, objectFactory.getPoolSize());
}

Congratulations, you have written your first unit test. There are a couple of points to note about our test method.

First, the naming convention: there are two approaches to naming test methods. As you can see, in my test I did not even observe Java’s method naming conventions. In this approach you describe completely what you are testing, so you have good documentation for yourself and anyone who reads your code.

The other approach, which I don’t like as much 🙂, is the classical Java convention: the test method name is not declarative, just an ordinary method name. For our previous test, a good name in this model would be “testObjectFactoryCreation”.

It is your choice which one you prefer; overall, I use the first approach.

Second, assertions: I have not explained test method assertions before. When you write a test, it is obvious that you need to check something! In JUnit, assertions test your assumptions, so Assert.assertEquals(NUM, objectFactory.getPoolSize()) means that I expect the return value of getPoolSize to equal the NUM constant.

OK, now that we have written our first test, we need to run it. At this point we have some compile errors, because we have not created ObjectFactory and its getPoolSize method yet. So resolve just the compile errors and then execute: mvn clean test

The result is : Tests run: 1, Failures: 1, Errors: 0, Skipped: 0

Now you need to complete your ObjectFactory class to pass this test. Your ObjectFactory class will look something like this:

public class ObjectFactory {
    private int poolSize;

    public ObjectFactory(int poolSize) {
        this.poolSize = poolSize;
    }

    public int getPoolSize() {
        return poolSize;
    }
}

You should continue with test-code-refactor steps to complete your ObjectFactory code (in this post we don’t pay attention to refactoring).

Clients need to get an object from our ObjectFactory, so we need a getObject method.

@Test
public void getObject_should_not_return_null() {
    final ObjectFactory objectFactory = new ObjectFactory(NUM);
    final Object object = objectFactory.getObject();
    Assert.assertNotNull(object);
}

To pass this test, you just need to add the following method to your ObjectFactory class:

public Object getObject() {
    return new Object();
}

At first glance you will see that this code is not right for an object factory, but don’t forget: you must write code just to pass your tests. If the code is not OK, it is because your tests are not enough yet.
When we call getObject more times than poolSize, we should get an existing object, not a new one.

So we can write this test method:

@Test
public void getObject_should_return_one_of_existing_objects() {
    final ObjectFactory objectFactory = new ObjectFactory(1);
    final Object firstObject = objectFactory.getObject();
    final Object secondObject = objectFactory.getObject();

    Assert.assertSame("Two objects are not the same.", firstObject, secondObject);
}

To pass this unit test, you need to change your getObject code; the code below is one possible solution:

public Object getObject() {
    if (pool.size() < poolSize) {
        final Object object = new Object();
        pool.add(object);
        return object;
    }
    return pool.get(0);
}


The pool field is a List reference to an ArrayList object (you can see the complete code on GitHub).

But our solution is not complete yet: when the pool becomes full, our code just returns the first object!

So we can write this test method:

@Test
public void getObject_should_return_different_objects_after_pool_becomes_full() {
    final ObjectFactory objectFactory = new ObjectFactory(2);
    // fill the pool
    objectFactory.getObject();
    objectFactory.getObject();
    // both of these requests are served from the full pool
    final Object firstExistedObject = objectFactory.getObject();
    final Object secondExistedObject = objectFactory.getObject();

    Assert.assertNotSame("Two objects are the same.", firstExistedObject, secondExistedObject);
}

Our test method exposes the gap in our code. Try to resolve the problem yourself 🙂 I will upload the complete code to my GitHub, but please try to finish the code on your own; it is good practice.

If you have any opinions or problems, please give me feedback.

Introduction to JUnit

If you have never written a test, this must be a concern for you: “When can I be sure that my code works?” Most of the time you have to build many parts of your software before you can validate your code! But there is another way to validate the correctness of your code: you can write a simple client for your code and validate its behavior.
For example, suppose you want to create a special object factory. This object factory must create a specified number of objects, and on subsequent requests for an instance it must hand out objects using a round-robin algorithm.

As you see, this object factory is not very hard to write, and you will find that testing it is not hard either. But if you have a web project and you don’t want automated tests, you may need all of these extra pieces of software just to test:

  • A complete API without any client code (you need to think as a client while writing your API)
  • All the services that use your object factory
  • A web UI as a client of your code
  • Some otherwise unnecessary logging to watch the functionality
  • Ineffective debugging sessions to locate bugs

If you agree with me that these are problems, continue reading this post 😉 If not, continue too; maybe you will find an even simpler way to check your code’s correctness.

JUnit is the most popular testing framework in the Java world. The last major version of this framework is 4.x; one of the biggest changes in this version is using annotations instead of conventions. When you want to test code, you write a class (a test class) with test methods that assert the behavior of your code. You can see a simple test template in the following code:

public class ObjectFactoryTest {

    @Test
    public void ObjectFactory_should_create_with_specified_numbers_of_pool() {
        // Arrange code

        // Act code

        // Assert code
    }
}

So a test class is just a simple class with special annotations that the JUnit framework understands in order to run your tests. Some basic annotations that are worth knowing now:

  • @BeforeClass: a static method that runs before any test of your test class starts, so it runs just once per test class.
  • @AfterClass: a static method that runs after all test methods of your test class have finished, so it also runs just once per test class.
  • @Before: a method that runs before every test method of your test class.
  • @After: a method that runs after every test method of your test class.
  • @Test: this annotation marks your test methods.

If you would like to see how they run, try the following code (source code is from: JUnit 4 Tutorial 1 – Basic usage):

import org.junit.*;
import static org.junit.Assert.*;
import java.util.*;

/**
 * @author mkyong
 */
public class JunitTest1 {

    private Collection collection;

    @BeforeClass
    public static void oneTimeSetUp() {
        // one-time initialization code
        System.out.println("@BeforeClass - oneTimeSetUp");
    }

    @AfterClass
    public static void oneTimeTearDown() {
        // one-time cleanup code
        System.out.println("@AfterClass - oneTimeTearDown");
    }

    @Before
    public void setUp() {
        collection = new ArrayList();
        System.out.println("@Before - setUp");
    }

    @After
    public void tearDown() {
        System.out.println("@After - tearDown");
    }

    @Test
    public void testEmptyCollection() {
        assertTrue(collection.isEmpty());
        System.out.println("@Test - testEmptyCollection");
    }

    @Test
    public void testOneItemCollection() {
        collection.add("itemA");
        assertEquals(1, collection.size());
        System.out.println("@Test - testOneItemCollection");
    }
}
If you want to use JUnit with Maven, you need to add the following dependency to your pom.xml:
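A minimal JUnit 4 dependency block would look like this; the version is an assumption, so pick whatever 4.x release is current:

```xml
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
</dependency>
```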






Don’t forget that, as a convention, Maven runs test classes that have a Test suffix, so your test class names must follow the *Test format.

Now it’s your turn to use JUnit and execute some simple tests. You can try writing the object factory test-first. In the next tutorial I will explain one possible implementation of the object factory and its tests.
Please give me feedback if anything is ambiguous or you need any prerequisites for these tutorials.