Data can be loaded directly from any persistent store into Ignite caches. This example shows you how to load data from a MySQL database into an Ignite distributed cache. Here, I am assuming that you already have Apache Ignite installed on your system. If not, you can go through this tutorial first.


1.  Sample PERSON table

To start with, here is what the PERSON data in my database looks like:

2.  Model 

Here is a sample Person.java class corresponding to the PERSON table in the database.
public class Person {

    private long id;

    private long orgId;

    private String name;

    private int salary;

    // Constructor
    …
    // Getter and Setter methods
    …
}

3. Maven Dependency

I have specified the following dependencies in my project’s pom.xml:
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-core</artifactId>
    <version>1.5.0.final</version>
</dependency>

<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-spring</artifactId>
    <version>1.5.0.final</version>
</dependency>

<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.6</version>
</dependency>

4. Configure Read-Through

To load the data from the database, you need to enable the read-through mode and set the cacheStoreFactory property of CacheConfiguration. You can set these values either in your Spring XML configuration file or programmatically.
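As a sketch of the programmatic route (the cache name, key/value types, and the PersonStore class follow the code used later in this example; exact values are illustrative):

```java
import javax.cache.configuration.FactoryBuilder;

import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class CacheConfigSketch {
    public static IgniteConfiguration createConfig() {
        CacheConfiguration<Long, Person> cacheCfg = new CacheConfiguration<>("personCache");

        // Enable read-through: a cache miss triggers a load from the database.
        cacheCfg.setReadThrough(true);

        // Tell Ignite how to instantiate our CacheStore implementation.
        cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(PersonStore.class));

        // Needed for the SQL query used later in this example.
        cacheCfg.setIndexedTypes(Long.class, Person.class);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCacheConfiguration(cacheCfg);
        return cfg;
    }
}
```

The equivalent Spring XML sets the same readThrough and cacheStoreFactory properties on the CacheConfiguration bean.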


5. Implement CacheStore

Now that we have our model, Maven dependencies, and cache configuration in place, it’s time to implement the store. To load the data from the database, you need to implement the loadCache() and load() methods of the CacheStore interface.
public class PersonStore implements CacheStore<Long, Person> {

    @SpringResource(resourceName = "dataSource")
    private DataSource dataSource;
    
    // This method is called whenever IgniteCache.loadCache() method is called.
    @Override
    public void loadCache(IgniteBiInClosure<Long, Person> clo, @Nullable Object... objects) throws CacheLoaderException {
        System.out.println(">> Loading cache from store...");

        try (Connection conn = dataSource.getConnection()) {
            try (PreparedStatement st = conn.prepareStatement("select * from PERSON")) {
                try (ResultSet rs = st.executeQuery()) {
                    while (rs.next()) {
                        Person person = new Person(rs.getLong(1), rs.getLong(2), rs.getString(3), rs.getInt(4));

                        clo.apply(person.getId(), person);
                    }
                }
            }
        }
        catch (SQLException e) {
            throw new CacheLoaderException("Failed to load values from cache store.", e);
        }
    }
  
    // This method is called whenever IgniteCache.get() method is called.
    @Override
    public Person load(Long key) throws CacheLoaderException {
        System.out.println(">> Loading person from store...");

        try (Connection conn = dataSource.getConnection()) {
            try (PreparedStatement st = conn.prepareStatement("select * from PERSON where id = ?")) {
                // Bind the numeric key directly rather than converting it to a string.
                st.setLong(1, key);

                try (ResultSet rs = st.executeQuery()) {
                    return rs.next() ? new Person(rs.getLong(1), rs.getLong(2), rs.getString(3), rs.getInt(4)) : null;
                }
            }
        }
        catch (SQLException e) {
            throw new CacheLoaderException("Failed to load values from cache store.", e);
        }
    }

    // Other CacheStore method implementations.
    …
}

For convenience, Ignite also provides the CacheStoreAdapter class, which supplies default implementations for some of the CacheStore methods: loadAll(), writeAll(), and deleteAll().
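For instance, a store extending CacheStoreAdapter only needs to supply the single-entry operations. This skeleton is a sketch, with the JDBC bodies elided as in the example above:

```java
import javax.cache.Cache;

import org.apache.ignite.cache.store.CacheStoreAdapter;

// Extending CacheStoreAdapter instead of implementing CacheStore directly,
// so loadAll(), writeAll(), and deleteAll() fall back to the adapter's
// default implementations.
public class PersonStoreAdapter extends CacheStoreAdapter<Long, Person> {
    @Override public Person load(Long key) {
        // ... same JDBC lookup as PersonStore.load() ...
        return null;
    }

    @Override public void write(Cache.Entry<? extends Long, ? extends Person> entry) {
        // ... INSERT or UPDATE the PERSON row for entry.getKey() ...
    }

    @Override public void delete(Object key) {
        // ... DELETE the PERSON row for the given key ...
    }
}
```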

6.  Load Cache

Here is a sample PersonStoreExample.java class that calls the IgniteCache.loadCache() method, which internally delegates to the CacheStore.loadCache() method we implemented in the previous step.
public class PersonStoreExample {
    public static void main(String[] args) throws IgniteException {
        Ignition.setClientMode(true);

        try (Ignite ignite = Ignition.start("config/cluster-config.xml")) {
            try (IgniteCache<Long, Person> cache = ignite.getOrCreateCache("personCache")) {
                // Load cache with data from the database.
                cache.loadCache(null);

                // Execute query on cache; close the cursor when done.
                try (QueryCursor<List<?>> cursor = cache.query(new SqlFieldsQuery(
                        "select id, name from Person"))) {
                    System.out.println(cursor.getAll());
                }
            }
        }
    }
}

7. Start Ignite Cluster

From the command shell, navigate to the Ignite installation folder and start the server nodes using the following command:
$ bin/ignite.sh <path-to-Spring-XML-configuration-file>
Make sure that PersonStore is on the classpath of Ignite. To do so, you can either set the USER_LIBS environment variable or drop the project jar into the libs folder of your Ignite installation.
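For example (the jar path here is illustrative):

```shell
# Option 1: point USER_LIBS at the jar containing PersonStore.
export USER_LIBS=/path/to/my-ignite-project.jar
bin/ignite.sh <path-to-Spring-XML-configuration-file>

# Option 2: drop the jar into Ignite's libs folder instead.
cp /path/to/my-ignite-project.jar $IGNITE_HOME/libs/
bin/ignite.sh <path-to-Spring-XML-configuration-file>
```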

8. Output

From your IDE, run PersonStoreExample.java.

For more information, documentation, and screencasts, visit the Apache Ignite website.




  1. In a multi-user environment, concurrent transactions on the same set of cache entries can cause deadlocks – an unfavorable situation that adversely affects the performance of any application. Once a system enters a heavy deadlock state, recovery may require a complete cluster restart. With this in mind, Apache Ignite introduced ACID-compliant deadlock-free transactions that prevent deadlocks and enhance application performance. Before looking into this feature in more detail, let’s briefly go through the basics.

    What is a Deadlock?

    A deadlock is a situation where a transaction T1 waits indefinitely for a resource R2 that is held by another transaction T2; and T2 waits for a resource R1 that is held by T1. T1 wouldn’t release the lock on R1 until it acquires the lock on R2, and T2 wouldn’t release the lock on R2 until it acquires the lock on R1. 



    Deadlocks happen when concurrent transactions try to acquire locks on the same objects in a different order. A safer approach is to always acquire locks in the same order, but this may not be feasible every time.

    Preventing Deadlocks in Ignite

    When transactions in Ignite are performed with the OPTIMISTIC concurrency mode and the SERIALIZABLE isolation level, locks are acquired during transaction commit with an additional check that allows Ignite to avoid deadlocks. This also prevents cache entries from being locked for extended periods and avoids “freezing” of the whole cluster, thus providing high throughput. Furthermore, during commit, if Ignite detects a read/write conflict or a lock conflict between multiple transactions, only one transaction is allowed to commit. All other conflicting transactions are rolled back and an exception is thrown, as explained in the section below.

    How it Works

    Ignite assigns version numbers to every transaction and cache entry. These version numbers help decide whether a transaction will be committed or rolled back. Ignite will fail an OPTIMISTIC SERIALIZABLE transaction (T2) with a TransactionOptimisticException if:

    1. There exists an ongoing PESSIMISTIC transaction, or an OPTIMISTIC transaction with isolation level READ_COMMITTED or REPEATABLE_READ (T1), holding a lock on a cache entry requested by T2.

    2. There exists another ongoing OPTIMISTIC SERIALIZABLE transaction (T1) whose version is greater than that of T2, and T1 holds a lock on a cache entry requested by T2.

    3. By the time T2 acquires all required locks, there exists a cache entry with the current version different from the observed version. This is because another transaction T1 has committed and changed the version of the cache entry.

    Example

    import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
    import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
    import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;
    import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.IgniteException;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.transactions.Transaction;

    public class DeadlockExample {
    
        private static final String ENTRY1 = "entry1";
        private static final String ENTRY2 = "entry2";
    
        public static void main(String[] args) throws IgniteException {
            Ignite ignite = Ignition.start("/myexamples/config/cluster-config.xml");
    
            // Create cache with given name, if it does not exist.
            final IgniteCache<String, String> cache = ignite.getOrCreateCache("myCache");
    
            // populate
            int i = 0;
            cache.put(ENTRY1, Integer.toString(i++));
            cache.put(ENTRY2, Integer.toString(i++));
    
            new Thread(() -> {
                try (Transaction t1 = Ignition.ignite().transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
    
                    String val1 = cache.get(ENTRY1);
                    cache.put(ENTRY1, val1 + "b");
    
                    String val2 = cache.get(ENTRY2);
                    cache.put(ENTRY2, val2 + "b");
    
                    t1.commit();
    
                    System.out.println("t1: " + cache.get(ENTRY1));
                    System.out.println("t1: " + cache.get(ENTRY2));
                }
            }, "t1-Thread").start();
    
            new Thread(() -> {
                try (Transaction t2 = Ignition.ignite().transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
    
                    String val2 = cache.get(ENTRY2);
                    cache.put(ENTRY2, val2 + "c");
    
                    String val1 = cache.get(ENTRY1);
                    cache.put(ENTRY1, val1 + "c");
    
                    t2.commit();
    
                    System.out.println("t2: " + cache.get(ENTRY1));
                    System.out.println("t2: " + cache.get(ENTRY2));
                }
            }, "t2-Thread").start();
        }
    }

    Output


    The output shows that transaction t2 had a lock conflict with t1. Thus, t1 was allowed to commit and t2 was rolled back with an exception.

    Conclusion

    In a highly concurrent environment, optimistic locking can lead to a high rate of transaction failures. Even so, it is advantageous over pessimistic locking, where the possibility of deadlock is high. Optimistic serializable transactions in Ignite are much faster than pessimistic transactions and can provide a significant performance improvement to any application. Transactions in Ignite are ACID compliant, ensuring guaranteed consistency of data throughout the cluster at all times.





  3. For systems where low latency is critical, there is nothing better than caching the data in memory in a distributed cluster. While storing data in memory provides fast data access, distributing it on a cluster of nodes increases application performance and scalability. And Apache Ignite helps you achieve exactly that.

    Ignite data grid is a distributed in-memory key-value store. It can also be viewed as a partitioned hash map that enables caching data in-memory over multiple server nodes. You can store as much data as you want, in memory, by adding new nodes to the Ignite cluster at any time. As a key-value store, Ignite data grid supports:

    Partitioning & Replication
    Ignite can be configured to either replicate or partition the data in memory. In a replicated cache, data is fully replicated on all the nodes in the cluster. In a partitioned cache, the data is evenly split and distributed cluster-wide so that each node contains only a subset of the total data.
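    A sketch of configuring each mode (cache names are illustrative):

```java
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class CacheModeSketch {
    public static CacheConfiguration<Integer, String> partitioned() {
        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("partitionedCache");
        cfg.setCacheMode(CacheMode.PARTITIONED); // each node stores only a subset of the data
        return cfg;
    }

    public static CacheConfiguration<Integer, String> replicated() {
        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("replicatedCache");
        cfg.setCacheMode(CacheMode.REPLICATED);  // every node stores a full copy of the data
        return cfg;
    }
}
```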

    Redundancy
    Ignite allows you to store multiple backup copies of cached data within your cluster. Configuring backup prevents data loss in case of a node crash. You can configure as many backups as the total number of nodes in the cluster.
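    A sketch of configuring one backup copy (the cache name is illustrative):

```java
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class BackupSketch {
    public static CacheConfiguration<Integer, String> withBackups() {
        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("myCache");
        cfg.setCacheMode(CacheMode.PARTITIONED);
        cfg.setBackups(1); // one backup copy per partition -> survives any single node crash
        return cfg;
    }
}
```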

    Consistency
    Ignite supports atomic and transactional modes for cache operations. In atomic mode, multiple atomic operations are performed individually. In transactional mode, multiple cache operations are grouped and executed in a single transaction. The transactional mode in Ignite is fully ACID compliant, and the data always remains consistent, regardless of any node failures.
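    A sketch of the transactional mode (the cache name and balance values are illustrative; transactions require the cache's atomicity mode to be TRANSACTIONAL):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;

public class TxSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Transactional mode must be enabled on the cache itself.
            CacheConfiguration<String, Integer> ccfg = new CacheConfiguration<>("accounts");
            ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);

            IgniteCache<String, Integer> cache = ignite.getOrCreateCache(ccfg);
            cache.put("accountA", 500);
            cache.put("accountB", 100);

            // Both updates commit together, or neither does.
            try (Transaction tx = ignite.transactions().txStart()) {
                cache.put("accountA", cache.get("accountA") - 100);
                cache.put("accountB", cache.get("accountB") + 100);
                tx.commit();
            }
        }
    }
}
```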

    Data Locality
    Ignite determines data locality using a pluggable hashing algorithm. Every client can determine which node a key belongs to by plugging it into a hashing function. This eliminates the need for any special mapping servers or name nodes that can potentially be a single point of failure. 
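    To illustrate the idea in plain Java (this is not Ignite's actual affinity function, just a sketch of rendezvous hashing): any client can compute a key's owner from the key, the node list, and a hash function alone, with no mapping server involved.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class KeyToNode {
    // Rendezvous (highest-random-weight) hashing: the owner of a key is the
    // node whose combined hash with the key is largest. Every client computes
    // the same answer independently.
    static String nodeFor(String key, List<String> nodes) {
        return nodes.stream()
            .max(Comparator.comparingInt((String n) -> (key + "@" + n).hashCode()))
            .orElseThrow(IllegalStateException::new);
    }

    public static void main(String[] args) {
        List<String> nodes = Arrays.asList("node1", "node2", "node3");
        // Any two clients agree on the owner of the same key.
        System.out.println(nodeFor("person:42", nodes));
    }
}
```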

    SQL Queries
    To query the Ignite cache, you can simply use standard SQL syntax (ANSI 99). Ignite lets you use any SQL function, aggregation, or grouping. It also supports distributed SQL joins. Here is an example of how to execute an SQL query in Ignite: 
    // ‘Select’ query to concatenate the first and last name of all persons
    // and associate them with organization names.
    SqlFieldsQuery sql = new SqlFieldsQuery(
      "select concat(p.firstName, ' ', p.lastName), o.name " +
      "from Person p, Organization o " +
      "where p.orgId = o.id");
    
    // Execute the query on Ignite cache and print the result.
    try (QueryCursor<List<?>> cursor = cache.query(sql)) {
      System.out.println("Persons and Organizations: " + cursor.getAll());
    }

    Example

    Here is an example of some basic cache operations in Ignite:
    // Store keys in cache.
    cache.put("Hello", "World");
    
    // Retrieve values from cache.
    String val = cache.get("Hello");
    
    // Replace-if-exists operation.
    boolean success = cache.replace("Hello", "Again");
    
    // Remove-if-exists operation.
    success = cache.remove("Hello");

    Screencast

    If you prefer to watch a running example, here is a short screencast.




    Conclusion

    Apache Ignite stores data in distributed caches across multiple nodes, providing fast data access. Caches can be configured to store data in either a partitioned or replicated manner. Ignite clusters are resilient to node failures, and data on Ignite nodes is guaranteed to be consistent as long as the cluster is alive. Ignite is easy to set up and use, which helps developers get started in no time.

    For more information, documentation, and screencasts, visit the Apache Ignite website.



  4. This tutorial shows you how to create a simple ‘Hello World’ example in Apache Ignite.

    The following technologies were used in this example:
    1. Java Development Kit (JDK) 1.8
    2. Apache Ignite 1.5.0-b1
    3. Maven 3.1.1
    4. IntelliJ IDEA 15 CE
    Note: JDK 1.7 or above is required.

    1.  Download and Install Ignite

    Download the latest binary distribution from the Apache Ignite website and extract the resulting .zip file to a location of your choice:
    $ unzip apache-ignite-fabric-1.5.0-b1-bin.zip
    $ cd apache-ignite-fabric-1.5.0-b1-bin

    2.  Set Environment Variable (this step is optional)

    Set the IGNITE_HOME environment variable to point to the installation folder, and make sure there is no trailing / in the path. On my Mac, I have set this environment variable in the .bash_profile file, like so:
    export IGNITE_HOME=<path-to-ignite-installation-folder>

    3.  Start Ignite Cluster

    Start a node using bin/ignite.sh command and specify an example configuration file provided in the Ignite installation:
    $ bin/ignite.sh examples/config/example-ignite.xml
    If the installation was successful, your Ignite node startup message should look like this:

    Click on the image to view full size.

    I have started one more node in another terminal, by repeating the above command (in step 3). 
    Click on the image to view full size.

    I now have an Ignite cluster set up with two server nodes running. You can start as many nodes as you like; Ignite will automatically discover all the nodes.

    4.  Add Ignite Dependency

    Add the following Ignite dependencies in your project’s pom.xml file:
    <dependency>
        <groupId>org.apache.ignite</groupId>
        <artifactId>ignite-core</artifactId>
        <version>1.5.0-b1</version>
    </dependency>
    
    <dependency>
        <groupId>org.apache.ignite</groupId>
        <artifactId>ignite-spring</artifactId>
        <version>1.5.0-b1</version>
    </dependency>

    5.  HelloWorld.java

    Here is a sample HelloWorld.java file that prints ‘Hello World’ on all the nodes in the cluster.
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.IgniteException;
    import org.apache.ignite.Ignition;
    
    public class HelloWorld {
      public static void main(String[] args) throws IgniteException {
        try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
          // Put values in cache.
          IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");
          
          cache.put(1, "Hello");
          cache.put(2, "World!");
    
          // Get values from cache and
          // broadcast 'Hello World' on all the nodes in the cluster.
          ignite.compute().broadcast(() -> {
            String hello = cache.get(1);
            String world = cache.get(2);
    
            System.out.println(hello + " " + world);
          });
        }
      }
    }

    6.  Project Structure

    Review project directory structure.
    Click on the image to view full size.

    7.  Set VM Options in IDEA

    Go to Run --> Edit Configurations --> VM options (under Configuration tab) and enter:
    -DIGNITE_HOME=<path-to-Ignite-installation-folder>
    This step is required only because we are trying to provide a relative path to the configuration file in our code (line #7). You can skip this step and provide an absolute path instead.

    8.  Output

    Run HelloWorld.java. You will see ‘Hello World!’ printed on all three nodes. 

    On IDEA console
    Click on the image to view full size.


    On both terminals
    Click on the image to view full size.

    Screencast

    If you prefer to watch a running example, here is a short screencast.




    For more information, documentation, and screencasts, visit the Apache Ignite website.




  5. Businesses are accumulating data at enormous rates requiring huge amounts of storage. Managing large data is hard, but processing it is even more challenging. With terabytes of data to store and process, it is often the case that developers find themselves in a quandary about how to strike the right balance between speed, scalability, and cost. 

    Storing data in cache can significantly enhance the speed of your application. It reduces the network overhead caused by frequent data movement between an application and the database.

    Apache Ignite allows you to store the most frequently accessed data in memory. It evenly distributes the data across a cluster of computers in either a partitioned or replicated manner. Ignite allows you to access the data from any underlying data store – RDBMS, NoSQL, or HDFS.


    You may ask - Once a cluster is formed with n nodes, what if the size of the data set increases? In that case, you can dynamically add nodes to the Ignite cluster without restarting the entire cluster. Ignite has virtually unlimited scale.

    Ignite database caching provides the following configurable options: 

    Write-Through and Read-Through 
    In write-through mode, when data is updated in cache, it is also updated in the underlying database. In case of read-through mode, when the requested data is not available in cache, it is automatically loaded from the database.
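    A sketch of enabling both modes on a cache configuration (names are illustrative; a cacheStoreFactory pointing at a CacheStore implementation must also be set):

```java
import org.apache.ignite.configuration.CacheConfiguration;

public class ReadWriteThroughSketch {
    public static CacheConfiguration<Long, Object> configure() {
        CacheConfiguration<Long, Object> cfg = new CacheConfiguration<>("personCache");
        cfg.setReadThrough(true);   // cache miss -> value is loaded from the database
        cfg.setWriteThrough(true);  // cache update -> value is written to the database
        return cfg;
    }
}
```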

    Write-Behind Caching
    Ignite provides an option to asynchronously perform updates to the database. By default, each update in a write-through mode involves a corresponding request to the underlying database. With write-behind caching enabled, cache data updates are accumulated and sent to the database in batches. For applications where put and remove operations are frequent, write-behind caching can provide a performance boost.
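    A sketch of enabling write-behind (the thresholds shown mirror Ignite's defaults and are illustrative):

```java
import org.apache.ignite.configuration.CacheConfiguration;

public class WriteBehindSketch {
    public static CacheConfiguration<Long, Object> configure() {
        CacheConfiguration<Long, Object> cfg = new CacheConfiguration<>("personCache");
        cfg.setWriteThrough(true);
        cfg.setWriteBehindEnabled(true);          // accumulate updates, flush asynchronously
        cfg.setWriteBehindFlushSize(10240);       // flush once this many entries accumulate
        cfg.setWriteBehindFlushFrequency(5000);   // ...or at least every 5 seconds (ms)
        cfg.setWriteBehindBatchSize(512);         // max entries per database batch
        return cfg;
    }
}
```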

    Automatic Persistence
    Ignite ships with its own user-friendly database schema-mapping wizard that provides automatic support for integrating with persistent stores. This utility automatically connects to the underlying database and generates all the required XML OR-mapping configuration and Java domain model POJOs.

    SQL Queries
    To query the Ignite cache, you can simply use the standard SQL syntax (ANSI 99). Ignite lets you use any SQL function, aggregation, or grouping. It also supports distributed SQL joins. Here is an example of how to execute an SQL query in Ignite:

    IgniteCache<Long, Person> cache = ignite.cache("mycache");
    
    // ‘Select’ query to concatenate the first and last name of all persons.
    SqlFieldsQuery sql = new SqlFieldsQuery(
      "select concat(firstName, ' ', lastName) from Person");
    
    // Execute the query on Ignite cache and print the result.
    try (QueryCursor<List<?>> cursor = cache.query(sql)) {
      for (List<?> row : cursor)
        System.out.println("Full name: " + row.get(0));
    }

    Conclusion

    Apache Ignite is an open source project focused on distributed in-memory computing. Ignite stores data in memory, distributed across multiple nodes, providing fast data access. The option to asynchronously propagate data to the persistence layer is an added advantage. Additionally, the ability to integrate with a variety of databases makes Ignite an easy choice for database caching.

    For more information, documentation, and screencasts, visit the Apache Ignite website.



  6. In a distributed environment, deploying a user-defined data structure as a cluster-wide service can provide access to all its operations from anywhere in the grid. For example, you can have your own implementation of a distributed SecureMap, which auto-encrypts its values and is deployed as a service on all the cluster nodes.

    In GridGain, you can implement your own custom data structures and deploy them as distributed services on the grid. You can also access them from clients via service proxies, which connect to remotely deployed distributed services. This lets you invoke your own services from anywhere, regardless of the deployment topology, be that a cluster singleton, a node singleton, or any other deployment.

    As an example, let's define a simple counter service as the MyCounterService interface.

    public interface MyCounterService {
        /**
         * Increment counter value and return the new value.
         */
        int increment() throws GridException;
         
        /**
         * Get current counter value.
         */
        int get() throws GridException;
    }
    

    An implementation of a distributed service has to implement both the GridService and MyCounterService interfaces.

    The easiest way to implement our counter service is to store the counter value in a GridGain distributed cache. The key for this counter value is the name of the service. This allows us to create multiple instances of this counter service with different names, hence having different counters within the grid.

    public class MyCounterServiceImpl implements GridService, MyCounterService {
        private static final long serialVersionUID = 0L;
        
        // Auto-inject grid instance. 
        @GridInstanceResource
        private Grid grid;
        
        /** Instance of distributed cache. */
        private GridCache<String, Integer> cache;
     
        /** Service name. */
        private String name;
     
        /**
         * Service initialization.
         */
        @Override public void init(GridServiceContext ctx) throws Exception {      
            cache = grid.cache("myCounterCache");
     
            name = ctx.name();
             
            System.out.println("Service was initialized: " + ctx.name());
        }
     
        /**
         * Cancel this service.
         */
        @Override public void cancel(GridServiceContext ctx) {
            System.out.println("Service was cancelled: " + ctx.name());
        }
        
        /**
         * Start service execution.
         */
        @Override public void execute(GridServiceContext ctx) throws Exception {
            // Our counter service does not need to have any execution logic
            // of its own, as it is only accessed via MyCounterService public API.
            System.out.println("Executing distributed service: " + ctx.name());
        }
      
        /**
         * Get current counter value.
         */
        @Override public int get() throws GridException {
            Integer i = cache.get(name);
    
            // If there is no counter, then we return 0.  
            return i == null ? 0 : i;
        }
     
        /**
         * Increment our counter. 
         */
        @Override public int increment() throws GridException {
            // Since the cache is partitioned, 'transformAndCompute(...)' method
            // ensures that the closure will execute on the cluster member where 
            // GridGain caches our counter value.
            return cache.transformAndCompute(name, new MyTransformClosure());
        }
     
        /**
         * GridGain transform closure which increments the value
         * currently stored in cache.
         */
        private static class MyTransformClosure implements GridClosure<Integer, GridBiTuple<Integer, Integer>> {
            @Override public GridBiTuple<Integer, Integer> apply(Integer i) {
                int newVal = i == null ? 1 : i + 1;
     
                // First value in the tuple is the new value to store in cache,
                // and the 2nd value is to be returned from 'transformAndCompute(...)' call.
                // In our case, both values are the same.
                return new GridBiTuple<>(newVal, newVal);
            }      
        }
    }
    

    We can now create a service proxy and invoke our distributed service.

    try (Grid g = GridGain.start("examples/config/example-cache.xml")) {
        // Get an instance of GridServices for remote nodes.
        GridServices svcs = g.forRemotes().services();
        try {
             // Deploy node singleton. An instance of the service
             // will be deployed on every remote cluster node.
             svcs.deployNodeSingleton("myCounterService", new MyCounterServiceImpl()).get();
    
             // Get service proxy for remotely deployed service.
             // Since service was deployed on all remote nodes, our 
             // proxy is *not sticky* and will load-balance service 
             // invocations across all remote nodes in the cluster.
             MyCounterService cntrSvc = g.services().
                serviceProxy("myCounterService", MyCounterService.class, /*not-sticky*/false);
    
             // Invoke a remote distributed counter service.
             cntrSvc.increment();
    
             // Print latest counter value from a remote counter service.
             System.out.println("Incremented value : " + cntrSvc.get());
        }
        finally {
            // Undeploy all services.
            g.services().cancelAll();
        }
    }
    

    In the above example, cntrSvc is a proxy for the remotely deployed service, myCounterService. You can find more information about GridGain distributed services here.



  7. When storing data in a distributed cache, Map is the most obvious data structure. But there are times when applications need to process data in the order it is received. GridGain In-Memory Data Fabric, in addition to providing standard key-value map-like storage, also provides a fast Distributed Blocking Queue.

    As an implementation of java.util.concurrent.BlockingQueue, GridGain Distributed Queue also supports all operations from the java.util.Collection interface. Distributed queues can be created in either collocated or non-collocated mode.
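    Because GridGain's queue implements java.util.concurrent.BlockingQueue, the standard JDK queue contract applies to it. The sketch below illustrates that contract using the JDK's own LinkedBlockingQueue as a stand-in (not the GridGain API): offer(…) is rejected once a bounded queue is full, while put(…) would block instead.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BlockingQueueDemo {

    // Fills a bounded queue to capacity, then drains it, returning the
    // number of elements read.
    public static int fillAndDrain(int capacity) {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(capacity);

        // offer(...) returns false (rather than blocking, as put(...)
        // would) once capacity is reached; within capacity it succeeds.
        for (int i = 0; i < capacity; i++)
            queue.offer(Integer.toString(i));

        // The queue is full now, so this offer is rejected.
        boolean accepted = queue.offer("overflow");
        System.out.println("Extra element accepted: " + accepted);

        // Drain with poll(); take() would block on an empty queue
        // instead of returning null.
        int drained = 0;
        while (queue.poll() != null)
            drained++;
        return drained;
    }

    public static void main(String[] args) {
        System.out.println("Drained: " + fillAndDrain(20)); // prints "Drained: 20"
    }
}
```

    The same put/take/offer/poll idioms carry over directly to GridCacheQueue, with the difference that the elements live in the distributed cache rather than in local memory.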

    Collocated queues are best suited when you have many small-sized queues. In this mode, you can have many queues, with all elements for each queue cached on the same node, making contains(…), get(…), and iterate(…) operations fast. The only constraint is that data should fit in memory allocated for a single node unless you configure GridGain to evict data to off-heap memory or disk.

    Non-collocated queues, on the other hand, are useful when you have large unbounded queues. Queue elements are distributed across all nodes in the cluster, allowing you to utilize the memory available across all nodes for queue entries. However, certain operations, like iterate(…), can be slower since they require going through multiple cluster nodes.

    Here is a simple example of how to create a queue in GridGain:

    try (Grid g = GridGain.start("examples/config/example-cache.xml")) {  
        // Initialize new FIFO queue.
        GridCacheQueue<String> queue = g.cache("partitioned_tx").dataStructures().queue(
                    "myQueue",     // Queue name.
                    20,            // Queue capacity. 0 for unbounded queue.
                    true,          // Collocated.
                    true           // Create if it does not exist.
                 );
     
        // Put items in queue.
        for (int i = 0; i < queue.capacity(); i++)
            queue.put(Integer.toString(i));
     
        // Read items from the queue. Note that take() removes the head
        // element, so we drain until the queue is empty rather than
        // looping against the shrinking size().
        while (!queue.isEmpty())
            System.out.println("Queue item read from queue head: " + queue.take());
    }
    

    For more information on Distributed Queues, you can refer to the GridGain examples and documentation.



  8. Failing over web sessions is problematic when you run multiple application servers. It is not uncommon for web applications to run in a cluster to distribute the load of a high volume of web requests. But what if one of the application servers crashes? The load balancer will simply route the web request to another available application server, but all of the user's session data is lost. In simple words, you may be filling your shopping cart with your favorite items, but if the application server serving your request crashes, you will end up with an empty cart.

    A feasible solution here would be to cache all your web sessions in a GridGain cache. GridGain In-Memory Data Fabric WebSessions Cache is a distributed cache that maintains a copy of all web sessions' data in memory.


    So, when an application server fails, web requests get routed to some other application server that simply fetches the web session from GridGain cache.

    This process happens in the background and is so seamless that it does not affect the users’ experience. Not only that, GridGain also ensures fault tolerance by either replicating or partitioning the data, which is easily configurable, across all grid nodes in the cluster. And so, no session data is lost.

    Moreover, a web request can now be sent to any active application server, since each server can access the session data from the GridGain cluster; you may therefore choose to turn off the load balancer's Sticky Connections support.
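    The failover behavior described above can be sketched in plain Java. The ConcurrentHashMap here is a hypothetical stand-in for the replicated GridGain cache, and AppServer for an application server; neither is part of the actual WebSessions API. The point is only that any server reading from the shared store sees the session, regardless of which server wrote it.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SessionFailoverSketch {
    // Stand-in for the replicated session cache shared by all "servers".
    static final Map<String, Map<String, String>> sessionCache = new ConcurrentHashMap<>();

    // A minimal "application server" that reads and writes session
    // attributes through the shared cache.
    static class AppServer {
        String handle(String sessionId, String key, String value) {
            Map<String, String> session =
                sessionCache.computeIfAbsent(sessionId, id -> new ConcurrentHashMap<>());
            if (value != null)
                session.put(key, value);
            return session.get(key);
        }
    }

    public static void main(String[] args) {
        AppServer serverA = new AppServer();
        AppServer serverB = new AppServer();

        // User adds an item to the cart while routed to server A.
        serverA.handle("session-42", "cart", "book");

        // Server A "crashes"; the load balancer routes to server B,
        // which still sees the session via the shared cache.
        serverA = null;
        System.out.println("Cart after failover: "
            + serverB.handle("session-42", "cart", null)); // prints "Cart after failover: book"
    }
}
```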

    With just a few simple steps you can enable web sessions caching with GridGain in your application. All you need to do is:

    1.     Download GridGain and add the following jars to your application’s classpath:
    ·       gridgain.jar
    ·       gridgain-web.jar
    ·       gridgain-log4j.jar
    ·       gridgain-spring.jar

    Or, if you have a Maven-based project, add the following to your application's pom.xml:

    <dependency>
          <groupId>org.gridgain</groupId>
          <artifactId>gridgain-fabric</artifactId>
          <version>${gridgain.version}</version>
          <type>pom</type>
    </dependency>

    <dependency>
        <groupId>org.gridgain</groupId>
        <artifactId>gridgain-web</artifactId>
        <version>${gridgain.version}</version>
    </dependency>

    <dependency>
        <groupId>org.gridgain</groupId>
        <artifactId>gridgain-log4j</artifactId>
        <version>${gridgain.version}</version>
    </dependency>
    Make sure to replace ${gridgain.version} with the actual GridGain version.
         
    2.     Configure the GridGain cache in either PARTITIONED mode

    <bean class="org.gridgain.grid.cache.GridCacheConfiguration">
        <!-- Cache name. -->
        <property name="name" value="partitioned"/>
       
        <!-- Cache mode. -->
        <property name="cacheMode" value="PARTITIONED"/>
        <property name="backups" value="1"/>
        ...
    </bean>

    or REPLICATED mode

    <bean class="org.gridgain.grid.cache.GridCacheConfiguration">
        <!-- Cache name. -->
        <property name="name" value="replicated"/>
       
        <!-- Cache mode. -->
        <property name="cacheMode" value="REPLICATED"/>
        ...
    </bean>

    You can also choose to use the default cache configuration, specified in GRIDGAIN_HOME/config/default-config.xml, shipped with GridGain installation.

    3.     Declare a context listener in the application’s web.xml.
    ...

    <listener>
       <listener-class>org.gridgain.grid.startup.servlet.GridServletContextListenerStartup</listener-class>
    </listener>

    <filter>
       <filter-name>GridGainWebSessionsFilter</filter-name>
       <filter-class>org.gridgain.grid.cache.websession.GridWebSessionFilter</filter-class>
    </filter>

    <!-- You can also specify a custom URL pattern. -->
    <filter-mapping>
       <filter-name>GridGainWebSessionsFilter</filter-name>
       <url-pattern>/*</url-pattern>
    </filter-mapping>

    <!-- Specify GridGain configuration (relative to META-INF folder or GRIDGAIN_HOME). -->
    <context-param>
       <param-name>GridGainConfigurationFilePath</param-name>
       <param-value>config/default-config.xml</param-value>
    </context-param>

    <!-- Specify the name of GridGain cache for web sessions. -->
    <context-param>
       <param-name>GridGainWebSessionsCacheName</param-name>
       <param-value>partitioned</param-value>
    </context-param>

    ...

    4.     Optional: set an eviction policy for stale web session data in the cache.

    <bean class="org.gridgain.grid.cache.GridCacheConfiguration">
        <!-- Cache name. -->
        <property name="name" value="session-cache"/>

        <!-- Set up LRU eviction policy with 10000 sessions limit. -->
        <property name="evictionPolicy">
            <bean class="org.gridgain.grid.cache.eviction.lru.GridCacheLruEvictionPolicy">
                <property name="maxSize" value="10000"/>
            </bean>
        </property>
        ...
    </bean> 


    Conclusion

    The main advantage of GridGain web sessions caching is that it ensures that the user session data is always available no matter which application server the user’s web request is routed to. Sticky Connections support is also not required since the web sessions’ data is now available to all application servers.

    Another advantage is that GridGain always keeps the session data in memory, rather than maintaining a copy of the sessions' data on disk. Therefore, your application's performance is not compromised while recovering the session data owned by the failed application server.
