Relaxed Priority Queue & Evaluation of Locks
Master's thesis
Computer science – algorithms, languages and logic (MPALG), MSc
We present a new lock-free, concurrent priority queue, building on ideas from the work by Rukundo et al., that relaxes the traditional sequential semantics of the delete_min operation to achieve better scalability and performance. The relaxation allows delete_min operations to be k-out-of-order compared to sequential semantics, where k has a well-defined upper bound. We experimentally compare our priority queue to established concurrent priority queues, with both relaxed and non-relaxed semantics, and find that ours performs well. Furthermore, we conduct tests with locks using the data structures of Rukundo et al., comparing the performance of designs that use a mix of atomic instructions and locks to that of the original design, which utilizes atomic instructions only. The use of locks allowed us to vary the underlying data structure, and three lock-based data structures were tested: a linked list, a dynamic array, and a list of small arrays. We find that in some instances this increases cache locality in data access patterns, and therefore overall performance. For example, a speedup of up to 7.6 times over the original was observed for the largest relaxation of the queue when using an array, with large improvements for the other data structures as well.
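To illustrate the semantics only, the following is a minimal, sequential Python sketch of a k-out-of-order relaxed priority queue: delete_min may return any of the k smallest elements. The class name, parameter names, and the pop-and-push-back strategy are our own illustrative choices; the thesis's actual data structure is concurrent, lock-free, and avoids this cost entirely.

```python
import heapq
import random

class KRelaxedPriorityQueue:
    """Sequential sketch of k-relaxed semantics: delete_min may return
    any of the k smallest elements, so answers can be up to k-out-of-order
    relative to strict sequential semantics. Illustrative only; not the
    concurrent, lock-free design described in the thesis."""

    def __init__(self, k):
        self.k = k      # relaxation bound on how out-of-order delete_min may be
        self.heap = []  # ordinary binary heap as the backing store

    def insert(self, item):
        heapq.heappush(self.heap, item)

    def delete_min(self):
        if not self.heap:
            raise IndexError("delete_min from empty queue")
        # Pop up to k smallest candidates, return one of them at random,
        # and push the rest back. This only demonstrates the permitted
        # semantics; a real relaxed queue gains performance instead.
        candidates = [heapq.heappop(self.heap)
                      for _ in range(min(self.k, len(self.heap)))]
        chosen = random.choice(candidates)
        candidates.remove(chosen)
        for c in candidates:
            heapq.heappush(self.heap, c)
        return chosen
```

For example, with k = 3 and the elements 1..10 inserted, a single delete_min is allowed to return 1, 2, or 3, whereas a strict priority queue must return 1.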
Concurrency, Data Structures, Algorithms, Priority Queue, Semantic Relaxation, Lock-free, Scalability