Distributed Training for Deep Reinforcement Learning Decoders on the Toric Code

Type

Master's thesis

Abstract

We distribute the training of the deep reinforcement learning-based toric code decoder developed by Fitzek et al. [9]. Reinforcement learning agents asynchronously step through multiple environments in parallel and store transitions in a prioritized experience replay buffer. A separate learner process samples the replay buffer and performs backpropagation on a policy network. With this setup, we improved wall-clock training times by a factor of 12 for toric code sizes d = 5 and d = 7. For d = 9 we were unable to reach optimal performance, but we improved the decoder's success rate using a network with 20 times fewer parameters. We argue that these results pave the way for reinforcement learning-based decoders that correct errors close to the theoretical limit for toric code sizes d ≤ 9. The complete code for the training procedure and the toric code environment can be found in the repositories https://github.com/Lindeby/toric-RL-decoder and https://github.com/Lindeby/gym_ToricCode.
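The abstract describes an Ape-X-style split between many parallel actors and one central learner. The Python sketch below illustrates only that general shape under simplifying assumptions: several actor processes step through their own environments and push (transition, priority) pairs to a single learner, which keeps a prioritized replay buffer and samples from it for gradient updates. All names here (ToyEnv, PrioritizedReplay, actor, learner) and the toy priority rule are illustrative stand-ins, not code from the linked repositories; the actual decoder uses the toric code environment and a deep policy network.

    # Minimal sketch of an Ape-X-style actor/learner split (illustrative only).
    import random
    import multiprocessing as mp

    class ToyEnv:
        """Hypothetical stand-in for the toric code environment."""
        def reset(self):
            return 0
        def step(self, action):
            next_state = random.randint(0, 9)
            reward = 1.0 if action == next_state % 2 else 0.0
            done = random.random() < 0.1
            return next_state, reward, done

    class PrioritizedReplay:
        """Simplified proportional prioritized replay (list-based, no sum-tree)."""
        def __init__(self, capacity=10_000):
            self.capacity = capacity
            self.data, self.priorities = [], []
        def add(self, transition, priority):
            if len(self.data) >= self.capacity:
                self.data.pop(0)
                self.priorities.pop(0)
            self.data.append(transition)
            self.priorities.append(priority)
        def sample(self, batch_size):
            return random.choices(self.data, weights=self.priorities, k=batch_size)

    def actor(actor_id, queue, steps=1_000):
        """Each actor steps through its own environment and ships transitions."""
        random.seed(actor_id)                        # different exploration per actor
        env = ToyEnv()
        state = env.reset()
        for _ in range(steps):
            action = random.randint(0, 1)            # placeholder for an eps-greedy policy
            next_state, reward, done = env.step(action)
            priority = abs(reward) + 1e-3            # placeholder for a TD-error priority
            queue.put(((state, action, reward, next_state, done), priority))
            state = env.reset() if done else next_state

    def learner(queue, total_transitions, batch_size=32):
        """Single learner: drain transitions, sample by priority, take gradient steps."""
        buffer = PrioritizedReplay()
        for i in range(total_transitions):
            transition, priority = queue.get()
            buffer.add(transition, priority)
            if len(buffer.data) >= batch_size and i % 100 == 0:
                batch = buffer.sample(batch_size)
                # backpropagation on the policy network would happen here
                _ = batch

    if __name__ == "__main__":
        n_actors, steps = 4, 1_000
        q = mp.Queue()
        actors = [mp.Process(target=actor, args=(i, q)) for i in range(n_actors)]
        for p in actors:
            p.start()
        learner(q, n_actors * steps)
        for p in actors:
            p.join()

In this sketch the learner runs in the main process and consumes every transition; in an Ape-X-style setup the learner would instead run continuously on a GPU while periodically broadcasting updated network weights back to the actors.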

Subject / keywords

Deep Reinforcement Learning, Distributed, Toric Code, Quantum Error Correction, Ape-X
