1. In a multi-user environment, concurrent transactions on the same set of cache entries can cause deadlocks – an unfavorable situation that adversely affects the performance of any application. Once a system enters a heavy deadlock state, recovery may require a complete cluster restart. With this in mind, Apache Ignite introduced ACID-compliant Deadlock-Free Transactions that prevent deadlocks and enhance application performance. Before looking into this feature in more detail, let’s briefly go through the basics.

    What is a Deadlock?

    A deadlock is a situation where a transaction T1 waits indefinitely for a resource R2 that is held by another transaction T2; and T2 waits for a resource R1 that is held by T1. T1 wouldn’t release the lock on R1 until it acquires the lock on R2, and T2 wouldn’t release the lock on R2 until it acquires the lock on R1. 



    Deadlocks happen when concurrent transactions try to acquire locks on the same objects in a different order. A safer approach is to always acquire locks in the same order, but this may not be feasible every time.

    Preventing Deadlocks in Ignite

    When transactions in Ignite are performed with the OPTIMISTIC concurrency mode and SERIALIZABLE isolation level, locks are acquired during transaction commit with an additional check that allows Ignite to avoid deadlocks. This also prevents cache entries from being locked for extended periods and avoids “freezing” of the whole cluster, thus providing high throughput. Furthermore, if Ignite detects a read/write conflict or a lock conflict between multiple transactions during commit, only one of them is allowed to commit. All other conflicting transactions are rolled back with an exception, as explained in the section below.

    How it Works

    Ignite assigns version numbers to every transaction and cache entry. These version numbers decide whether a transaction will be committed or rolled back. Ignite will fail an OPTIMISTIC SERIALIZABLE transaction (T2), with a TransactionOptimisticException, if:

    1. There exists an ongoing PESSIMISTIC transaction, or an OPTIMISTIC transaction with the READ_COMMITTED or REPEATABLE_READ isolation level (T1), holding a lock on a cache entry requested by T2.

    2. There exists another ongoing OPTIMISTIC SERIALIZABLE transaction (T1) whose version is greater than that of T2, and T1 holds a lock on a cache entry requested by T2.

    3. By the time T2 acquires all required locks, there exists a cache entry whose current version differs from the version T2 observed, because another transaction T1 has committed in the meantime and changed the version of the cache entry.
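    In all three cases, the application can simply retry the failed transaction. A minimal sketch of such a retry loop (the cache name, key, and retry count are illustrative, not from the original post):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionOptimisticException;

import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;

public class RetryExample {
    // Attempts the update up to 'retries' times, retrying on optimistic conflicts.
    static void updateWithRetry(Ignite ignite, IgniteCache<String, String> cache, int retries) {
        while (retries-- > 0) {
            try (Transaction tx = ignite.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
                String val = cache.get("entry1");
                cache.put("entry1", val + "x");

                // Conflict detection happens here; a losing transaction
                // is rolled back with TransactionOptimisticException.
                tx.commit();

                return;
            }
            catch (TransactionOptimisticException e) {
                // Another transaction won the conflict; loop and retry.
            }
        }
    }
}
```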

    Example

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.IgniteException;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.transactions.Transaction;
    
    import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
    import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
    import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;
    import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;
    
    public class DeadlockExample {
    
        private static final String ENTRY1 = "entry1";
        private static final String ENTRY2 = "entry2";
    
        public static void main(String[] args) throws IgniteException {
            Ignite ignite = Ignition.start("/myexamples/config/cluster-config.xml");
    
            // Create cache with given name, if it does not exist.
            final IgniteCache<String, String> cache = ignite.getOrCreateCache("myCache");
    
            // populate
            int i = 0;
            cache.put(ENTRY1, Integer.toString(i++));
            cache.put(ENTRY2, Integer.toString(i++));
    
            new Thread(() -> {
                try (Transaction t1 = Ignition.ignite().transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
    
                    String val1 = cache.get(ENTRY1);
                    cache.put(ENTRY1, val1 + "b");
    
                    String val2 = cache.get(ENTRY2);
                    cache.put(ENTRY2, val2 + "b");
    
                    t1.commit();
    
                    System.out.println("t1: " + cache.get(ENTRY1));
                    System.out.println("t1: " + cache.get(ENTRY2));
                }
            }, "t1-Thread").start();
    
            new Thread(() -> {
                try (Transaction t2 = Ignition.ignite().transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
    
                    String val2 = cache.get(ENTRY2);
                    cache.put(ENTRY2, val2 + "c");
    
                    String val1 = cache.get(ENTRY1);
                    cache.put(ENTRY1, val1 + "c");
    
                    t2.commit();
    
                    System.out.println("t2: " + cache.get(ENTRY1));
                    System.out.println("t2: " + cache.get(ENTRY2));
                }
            }, "t2-Thread").start();
        }
    }

    Output


    The output shows that transaction t2 had a lock conflict with t1. Thus, t1 was allowed to commit and t2 was rolled back with an exception.

    Conclusion

    In a highly concurrent environment, optimistic locking can lead to a high rate of transaction failures. Still, this is preferable to pessimistic locking, where the possibility of a deadlock is high. Optimistic SERIALIZABLE transactions in Ignite are much faster than pessimistic transactions and can provide a significant performance improvement to any application. Transactions in Ignite are ACID compliant, ensuring guaranteed consistency of data throughout the cluster at all times.




  2. Data can be loaded directly from any persistent store into Ignite caches. This example shows you how to load data from a MySQL database into an Ignite distributed cache. Here, I am assuming that you already have Apache Ignite installed on your system. If not, you can go through this tutorial first.


    1.  Sample PERSON table

    To start with, here is what the PERSON data in my database looks like:
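    A PERSON table matching the model in the next step might be created and populated as follows (the names and values here are illustrative):

```sql
CREATE TABLE PERSON (
  id     BIGINT PRIMARY KEY,
  orgId  BIGINT,
  name   VARCHAR(64),
  salary INT
);

INSERT INTO PERSON (id, orgId, name, salary) VALUES
  (1, 1, 'John Doe', 2000),
  (2, 1, 'Jane Doe', 1000);
```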

    2.  Model 

    Here is a sample Person.java class corresponding to the PERSON table in the database.
    public class Person {
    
        private long id;
    
        private long orgId;
    
        private String name;
    
        private int salary;
    
        // Constructor
        …
        // Getter and Setter methods
        …
    }

    3. Maven Dependency

    I have specified the following dependencies in my project’s pom.xml:
    <dependency>
        <groupId>org.apache.ignite</groupId>
        <artifactId>ignite-core</artifactId>
        <version>1.5.0.final</version>
    </dependency>
    
    <dependency>
        <groupId>org.apache.ignite</groupId>
        <artifactId>ignite-spring</artifactId>
        <version>1.5.0.final</version>
    </dependency>
    
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.6</version>
    </dependency>

    4. Configure Read-Through

    To load the data from the database, you need to enable the read-through mode and set the cacheStoreFactory property of CacheConfiguration. You can set these values either in your Spring XML configuration file or programmatically. For example, in Spring XML (the cache name, package names, and connection settings below are illustrative):
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="cacheConfiguration">
            <bean class="org.apache.ignite.configuration.CacheConfiguration">
                <!-- Cache name. -->
                <property name="name" value="personCache"/>
    
                <!-- Enable read-through mode. -->
                <property name="readThrough" value="true"/>
    
                <!-- Create the cache store via a factory. -->
                <property name="cacheStoreFactory">
                    <bean class="javax.cache.configuration.FactoryBuilder" factory-method="factoryOf">
                        <constructor-arg value="com.example.store.PersonStore"/>
                    </bean>
                </property>
            </bean>
        </property>
        ...
    </bean>
    
    <!-- MySQL data source, injected into PersonStore via @SpringResource. -->
    <bean id="dataSource" class="com.mysql.jdbc.jdbc2.optional.MysqlDataSource">
        <property name="URL" value="jdbc:mysql://localhost:3306/mydb"/>
        <property name="user" value="root"/>
        <property name="password" value="root"/>
    </bean>

    5. Implement CacheStore

    Now that we have our model, maven dependencies, and cache configurations in place, it’s time to implement the store. To load the data from the database, loadCache() and load() methods of the CacheStore interface should be implemented.
    public class PersonStore implements CacheStore<Long, Person> {
    
        @SpringResource(resourceName = "dataSource")
        private DataSource dataSource;
        
        // This method is called whenever IgniteCache.loadCache() method is called.
        @Override
        public void loadCache(IgniteBiInClosure<Long, Person> clo, @Nullable Object... objects) throws CacheLoaderException {
            System.out.println(">> Loading cache from store...");
    
            try (Connection conn = dataSource.getConnection()) {
                try (PreparedStatement st = conn.prepareStatement("select * from PERSON")) {
                    try (ResultSet rs = st.executeQuery()) {
                        while (rs.next()) {
                            Person person = new Person(rs.getLong(1), rs.getLong(2), rs.getString(3), rs.getInt(4));
    
                            clo.apply(person.getId(), person);
                        }
                    }
                }
            }
            catch (SQLException e) {
                throw new CacheLoaderException("Failed to load values from cache store.", e);
            }
        }
      
        // This method is called whenever IgniteCache.get() method is called.
        @Override
        public Person load(Long key) throws CacheLoaderException {
            System.out.println(">> Loading person from store...");
    
            try (Connection conn = dataSource.getConnection()) {
                try (PreparedStatement st = conn.prepareStatement("select * from PERSON where id = ?")) {
                    st.setLong(1, key);
    
                    ResultSet rs = st.executeQuery();
    
                    return rs.next() ? new Person(rs.getLong(1), rs.getLong(2), rs.getString(3), rs.getInt(4)) : null;
                }
            }
            catch (SQLException e) {
                throw new CacheLoaderException("Failed to load values from cache store.", e);
            }
        }
    
        // Other CacheStore method implementations.
        …
    }

    For convenience, Ignite also provides the CacheStoreAdapter class, which supplies default implementations for some of the CacheStore methods: loadAll(), writeAll(), and deleteAll().
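    With the adapter, only the single-entry operations need to be implemented; the bulk methods delegate to them by default. A minimal sketch (the class name is hypothetical, and the method bodies are placeholders):

```java
import javax.cache.Cache;
import org.apache.ignite.cache.store.CacheStoreAdapter;

// Sketch: extending CacheStoreAdapter instead of implementing CacheStore
// directly means only load(), write(), and delete() must be provided;
// loadAll(), writeAll(), and deleteAll() fall back to per-entry defaults.
public class PersonStoreAdapter extends CacheStoreAdapter<Long, Person> {
    @Override public Person load(Long key) {
        // Same JDBC lookup as in PersonStore.load() above.
        return null; // placeholder
    }

    @Override public void write(Cache.Entry<? extends Long, ? extends Person> entry) {
        // e.g. INSERT ... ON DUPLICATE KEY UPDATE against the PERSON table.
    }

    @Override public void delete(Object key) {
        // e.g. DELETE FROM PERSON WHERE id = ?
    }
}
```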

    6.  Load Cache

    Here is a sample PersonStoreExample.java class that calls the IgniteCache.loadCache() method, which internally delegates to the CacheStore.loadCache() method we implemented in the previous step.
    public class PersonStoreExample {
        public static void main(String[] args) throws IgniteException {
            Ignition.setClientMode(true);
    
            try (Ignite ignite = Ignition.start("config/cluster-config.xml")) {
                try (IgniteCache<Long, Person> cache = ignite.getOrCreateCache("personCache")) {
                    // Load cache with data from the database.
                    cache.loadCache(null);
    
                    // Execute query on cache.
                    QueryCursor<List<?>> cursor = cache.query(new SqlFieldsQuery(
                            "select id, name from Person"));
    
                    System.out.println(cursor.getAll());
                }
            }
        }
    }

    7. Start Ignite Cluster

    From the command shell, navigate to the Ignite installation folder and start the server nodes using the following command:
    $ bin/ignite.sh <path-to-Spring-XML-configuration-file>
    Make sure that the compiled PersonStore class is on Ignite’s classpath. To do so, you can set the USER_LIBS environment variable, or drop your project jar into the libs folder of your Ignite installation.

    8. Output

    From your IDE, run PersonStoreExample.java.

    For more information, documentation, and screencasts, visit the Apache Ignite website.



  3. For systems where low latency is critical, there is nothing better than caching the data in memory in a distributed cluster. While storing data in memory provides fast data access, distributing it on a cluster of nodes increases application performance and scalability. And Apache Ignite helps you achieve exactly that.

    Ignite data grid is a distributed in-memory key-value store. It can also be viewed as a partitioned hash map that enables caching data in-memory over multiple server nodes. You can store as much data as you want, in memory, by adding new nodes to the Ignite cluster at any time. As a key-value store, Ignite data grid supports:

    Partitioning & Replication
    Ignite can be configured to either replicate or partition the data in memory. In a replicated cache, data is fully replicated on all the nodes in the cluster. In a partitioned cache, the data is evenly split and distributed cluster-wide so that each node contains only a subset of the total data.

    Redundancy
    Ignite allows you to store multiple backup copies of cached data within your cluster. Configuring backup prevents data loss in case of a node crash. You can configure as many backups as the total number of nodes in the cluster.
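    Both the cache mode and the number of backups are set on CacheConfiguration. A minimal sketch (the cache name and backup count are illustrative):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class CacheModeExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<String, String> cfg = new CacheConfiguration<>("myCache");

            // Split the data evenly across the cluster...
            cfg.setCacheMode(CacheMode.PARTITIONED);

            // ...keeping one backup copy of each entry on another node.
            cfg.setBackups(1);

            IgniteCache<String, String> cache = ignite.getOrCreateCache(cfg);
        }
    }
}
```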

    Consistency
    Ignite supports atomic and transactional modes for cache operations. In atomic mode, each cache operation executes independently and atomically. In transactional mode, multiple cache operations are grouped and executed in a single transaction. The transactional mode in Ignite is fully ACID compliant, and the data always remains consistent, regardless of any node failures.

    Data Locality
    Ignite determines data locality using a pluggable hashing algorithm. Every client can determine which node a key belongs to by plugging it into a hashing function. This eliminates the need for any special mapping servers or name nodes that can potentially be a single point of failure. 
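    The affinity function is exposed through the Affinity API, so any node can compute a key's location locally (the cache name and key are illustrative):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.cluster.ClusterNode;

public class AffinityExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            Affinity<String> affinity = ignite.affinity("myCache");

            // Determine the primary node for a key by hashing it locally,
            // without consulting any mapping server.
            ClusterNode primary = affinity.mapKeyToNode("Hello");

            System.out.println("Primary node: " + primary.id());
        }
    }
}
```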

    SQL Queries
    To query the Ignite cache, you can simply use standard SQL syntax (ANSI 99). Ignite lets you use any SQL function, aggregation, or grouping. It also supports distributed SQL joins. Here is an example of how to execute an SQL query in Ignite: 
    // 'Select' query to concatenate the first and last name of all persons
    // and associate them with organization names.
    SqlFieldsQuery sql = new SqlFieldsQuery(
      "select concat(p.firstName, ' ', p.lastName), o.name " +
      "from Person p, Organization o " +
      "where p.orgId = o.id");
    
    // Execute the query on Ignite cache and print the result.
    try (QueryCursor<List<?>> cursor = cache.query(sql)) {
      System.out.println("Persons and Organizations: " + cursor.getAll());
    }

    Example

    Here is an example of some basic cache operations in Ignite:
    // Store keys in cache.
    cache.put("Hello", "World");
    
    // Retrieve values from cache.
    String val = cache.get("Hello");
    
    // Replace-if-exists operation.
    boolean success = cache.replace("Hello", "Again");
    
    // Remove-if-exists operation.
    success = cache.remove("Hello");

    Screencast

    If you prefer to watch a running example, here is a short screencast.




    Conclusion

    Apache Ignite stores data in distributed caches across multiple nodes, providing fast data access. Caches can be configured to store data in either a partitioned or replicated manner. Ignite clusters are resilient to node failures, and data on Ignite nodes is guaranteed to be consistent as long as the cluster is alive. Ignite is easy to set up and use, which helps developers get started in no time.

    For more information, documentation, and screencasts, visit the Apache Ignite website.
