Stack buffers are ideal when you want the amount of control offered by queue buffers, but you also have strict low-latency requirements.
To use a stack as a buffer, you’ll need two processes, just like you would with queue buffers, but a list[20] will be used instead of a queue data structure.
The reason the stack buffer is particularly good for low latency is related to issues similar to bufferbloat[21]. If you get behind on a few messages being buffered in a queue, every message behind them gets slowed down and picks up milliseconds of wait time.
Eventually, they all end up too old and the entire buffer needs to be discarded.
A stack, on the other hand, only keeps a restricted number of elements waiting, while the newer ones keep making it to the server to be processed in a timely manner.
Whenever you see the stack grow beyond a certain size, or notice that an element in it is too old for your QoS requirements, you can just drop the rest of the stack and keep going from there. PO Box also offers such a buffer implementation.
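To make the mechanics concrete, here’s a minimal hand-rolled sketch of such a buffer process, not PO Box’s actual API: the module name (stack_buf), the message protocol, and the MaxSize/MaxAgeMs limits are all made up for illustration. Producers push messages on top of a plain list, the consumer pops the newest one, and the whole stack is thrown away whenever it grows too large or its top element is already stale.

```erlang
-module(stack_buf).
-export([start_link/2, push/2, pop/1]).

%% MaxSize: how many messages may sit in the stack before old ones get dropped.
%% MaxAgeMs: how old (in milliseconds) a message may be when it gets popped.
start_link(MaxSize, MaxAgeMs) ->
    {ok, spawn_link(fun() -> loop([], 0, MaxSize, MaxAgeMs) end)}.

%% Producer side: push a timestamped message on top of the stack.
push(Buf, Msg) ->
    Buf ! {push, erlang:monotonic_time(millisecond), Msg},
    ok.

%% Consumer side: ask for the most recent message, if a fresh one exists.
pop(Buf) ->
    Buf ! {pop, self()},
    receive
        {Buf, Reply} -> Reply
    after 5000 ->
        timeout
    end.

loop(Stack, Size, MaxSize, MaxAgeMs) ->
    receive
        {push, T, Msg} when Size >= MaxSize ->
            %% Too many elements waiting: drop the older ones and keep
            %% only the newest, as described above.
            loop([{T, Msg}], 1, MaxSize, MaxAgeMs);
        {push, T, Msg} ->
            loop([{T, Msg} | Stack], Size + 1, MaxSize, MaxAgeMs);
        {pop, From} ->
            Now = erlang:monotonic_time(millisecond),
            case Stack of
                [{T, Msg} | Rest] when Now - T =< MaxAgeMs ->
                    %% The top of the stack is the newest message; serve it.
                    From ! {self(), {ok, Msg}},
                    loop(Rest, Size - 1, MaxSize, MaxAgeMs);
                _ ->
                    %% Either the stack is empty, or even its newest element
                    %% is too old (so everything under it is older still):
                    %% discard the whole thing and keep going from there.
                    From ! {self(), empty},
                    loop([], 0, MaxSize, MaxAgeMs)
            end
    end.
```

The consumer is expected to call stack_buf:pop(Buf) in its loop and treat empty as a signal to wait for more work; a production version (like PO Box) would also track how many messages were dropped so you can tell when you’re shedding load.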
A major downside of stack buffers is that messages are not necessarily going to be processed in the order they were submitted — they’re nicer for independent tasks, but will ruin your day if you expect a sequence of events to be respected.
[20] Erlang lists are stacks, for all we care: they provide push and pop operations that run in O(1) time and are very fast.
[21] http://queue.acm.org/detail.cfm?id=2071893