When storing data in a distributed cache, a map is the most obvious data structure. There are times, however, when applications need to process data in the order it is received. For this purpose, GridGain In-Memory Data Fabric, in addition to providing standard key-value map-like storage, includes a fast implementation of a Distributed Blocking Queue.
As an implementation of java.util.concurrent.BlockingQueue, GridGain Distributed Queue also supports all operations of the java.util.Collection interface.
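For instance, take() blocks until an element becomes available, which makes a distributed producer/consumer hand-off straightforward. Below is a minimal sketch, assuming the same "partitioned_tx" cache used in the full example at the end of this section, an illustrative queue name, and an enclosing method that declares throws Exception.

try (Grid g = GridGain.start("examples/config/example-cache.xml")) {
    // Unbounded, collocated queue (names are illustrative).
    final GridCacheQueue<String> queue = g.cache("partitioned_tx").dataStructures().queue(
        "handoffQueue", // Queue name.
        0,              // 0 for unbounded queue.
        true,           // Collocated.
        true            // Create if it does not exist.
    );

    // Consumer: take() blocks until a producer adds an element.
    Thread consumer = new Thread(new Runnable() {
        @Override public void run() {
            try {
                System.out.println("Consumed: " + queue.take());
            }
            catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    });

    consumer.start();

    // Producer: unblocks the waiting consumer.
    queue.put("hello");

    consumer.join();
}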
Distributed Queues can be created in either collocated or non-collocated mode. Collocated queues are best suited for cases where you have many small queues. In this mode you can have many queues, with all elements of each queue cached on the same node, which makes operations such as contains(…) and iteration over the queue fast. The only constraint is that the data should fit in the memory allocated for a single node, unless you configure GridGain to evict data to off-heap memory or disk.
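As a quick illustration of the local reads this enables, here is a minimal sketch (queue and cache names are illustrative, and the enclosing method is assumed to declare throws Exception) that checks membership and iterates over a collocated queue without removing anything:

try (Grid g = GridGain.start("examples/config/example-cache.xml")) {
    GridCacheQueue<String> queue = g.cache("partitioned_tx").dataStructures().queue(
        "smallQueue", // Queue name.
        0,            // 0 for unbounded queue.
        true,         // Collocated: all elements of this queue live on one node.
        true          // Create if it does not exist.
    );

    queue.put("a");
    queue.put("b");

    // Collection-style reads are served from the single node holding the queue.
    System.out.println("Contains 'a': " + queue.contains("a"));
    System.out.println("Size: " + queue.size());

    // Iteration reads elements without removing them (unlike take()/poll()).
    for (String item : queue)
        System.out.println("Item: " + item);
}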
Non-collocated queues, on the other hand, are useful when you have large, unbounded queues. Queue elements are distributed across all nodes in the cluster, allowing you to utilize the memory available on all nodes for queue entries. However, certain operations, such as iteration over the queue, can be slower because they may need to visit multiple cluster nodes.
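Creating a non-collocated queue only differs in the collocation flag. A minimal sketch (again with illustrative names, assuming the enclosing method declares throws Exception):

try (Grid g = GridGain.start("examples/config/example-cache.xml")) {
    // Elements of a non-collocated queue are partitioned across all cluster nodes.
    GridCacheQueue<String> queue = g.cache("partitioned_tx").dataStructures().queue(
        "bigQueue", // Queue name.
        0,          // 0 for unbounded queue.
        false,      // Non-collocated.
        true        // Create if it does not exist.
    );

    for (int i = 0; i < 1000; i++)
        queue.put(Integer.toString(i));

    // Iteration may have to visit several nodes, so it is slower than in collocated mode.
    for (String item : queue)
        System.out.println("Item: " + item);
}

The complete example below creates a bounded, collocated FIFO queue, fills it to capacity, and then drains it from the head: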
try (Grid g = GridGain.start("examples/config/example-cache.xml")) {
    // Initialize new FIFO queue.
    GridCacheQueue<String> queue = g.cache("partitioned_tx").dataStructures().queue(
        "myQueue", // Queue name.
        20,        // Queue capacity. 0 for unbounded queue.
        true,      // Collocated.
        true       // Create if it does not exist.
    );

    // Put items in queue.
    for (int i = 0; i < queue.capacity(); i++)
        queue.put(Integer.toString(i));

    // Read items from queue. Capture the size first, since every take() shrinks the queue.
    int size = queue.size();

    for (int i = 0; i < size; i++)
        System.out.println("Queue item read from queue head: " + queue.take());
}
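Note that this queue is created with a capacity of 20, so put(…) would block once the queue is full, just as take() blocks on an empty queue; the try-with-resources block also stops the local GridGain node when the example completes.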
For more information on Distributed Queues, refer to the GridGain examples and documentation.