
Real-time calibration of coherent-state receivers: Learning by trial and error

Bilkis, M.; Rosati, M.; Yepes, R.M.; Calsamiglia, J.
2020-01-01

Abstract

The optimal discrimination of coherent states of light with current technology is a key problem in classical and quantum communication, whose solution would enable the realization of efficient receivers for long-distance communications in free-space and optical-fiber channels. In this paper, we show that reinforcement-learning (RL) protocols allow an agent to learn near-optimal coherent-state receivers made of passive linear optics, photodetectors, and classical adaptive control. Each agent is trained and tested in real time over several runs of independent discrimination experiments and has no knowledge of the energy of the states, the receiver setup, or the quantum-mechanical laws governing the experiments. Based exclusively on the observed photodetector outcomes, the agent adaptively chooses among a set of ∼3×10³ possible receiver setups and obtains a reward at the end of each experiment if its guess is correct. At variance with previous applications of RL in quantum physics, the information gathered at each run is intrinsically stochastic and thus insufficient to evaluate exactly the performance of the chosen receiver. Nevertheless, we present families of agents that (i) discover a receiver beating the best Gaussian receiver after ∼3×10² experiments, (ii) surpass the cumulative reward of the best Gaussian receiver after ∼10³ experiments, and (iii) simultaneously discover a near-optimal receiver and attain its cumulative reward after ∼10⁵ experiments. Our results show that RL techniques are suitable for online control of quantum receivers and can be employed for long-distance communications over potentially unknown channels.
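For context, the "best Gaussian receiver" and "near-optimal" benchmarks quoted in the abstract can be read against the standard figures of merit for discriminating two equiprobable coherent states |±α⟩. The expressions below are the textbook error probabilities for this binary case, stated here as background rather than taken from the paper itself:

    P_{\mathrm{hom}} = \tfrac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{2}\,|\alpha|\right) \quad \text{(homodyne, the best Gaussian receiver)},
    P_{\mathrm{Kennedy}} = \tfrac{1}{2}\,e^{-4|\alpha|^{2}} \quad \text{(fixed displacement + on/off photodetection)},
    P_{\mathrm{Helstrom}} = \tfrac{1}{2}\left(1 - \sqrt{1 - e^{-4|\alpha|^{2}}}\right) \quad \text{(quantum-optimal bound)}.

Adaptive displacement-plus-photodetection receivers of the kind learned here can close part of the gap between the Gaussian benchmark and the Helstrom bound, which is the margin the trained agents exploit.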
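The learning loop described in the abstract is a bandit problem: the agent picks a receiver setting, observes a stochastic click record, makes a guess, and receives only a binary reward. The following is a minimal sketch of such a loop, assuming the simplest one-layer case (a single displacement followed by an ideal on/off detector); the amplitude alpha, the displacement grid betas, and the epsilon-greedy update rule are illustrative assumptions, not the authors' exact choices.

    import numpy as np

    rng = np.random.default_rng(0)

    alpha = 0.5                          # signal amplitude, hidden from the agent (assumed value)
    betas = np.linspace(-1.0, 1.0, 21)   # hypothetical grid of displacement settings ("arms")
    eps = 0.05                           # epsilon-greedy exploration rate
    n_experiments = 200_000

    # Value tables: one estimate per displacement, and one per
    # (displacement, detector outcome, guess) triple for the guessing rule.
    q_beta = np.zeros(len(betas))
    n_beta = np.zeros(len(betas))
    q_guess = np.zeros((len(betas), 2, 2))
    n_guess = np.zeros((len(betas), 2, 2))

    def detector_click(sign, beta):
        """Ideal on/off detector on the displaced state |sign*alpha + beta>:
        it stays dark with probability exp(-|sign*alpha + beta|^2)."""
        return int(rng.random() < 1.0 - np.exp(-(sign * alpha + beta) ** 2))

    for _ in range(n_experiments):
        # Epsilon-greedy choice of the receiver setting.
        b = rng.integers(len(betas)) if rng.random() < eps else int(np.argmax(q_beta))
        bit = int(rng.integers(2))                 # hidden message bit: 0 -> -alpha, 1 -> +alpha
        outcome = detector_click(2 * bit - 1, betas[b])
        # Epsilon-greedy guess, conditioned on the chosen setting and the outcome.
        g = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(q_guess[b, outcome]))
        reward = float(g == bit)                   # the reward only reveals success or failure
        # Incremental running-average updates of both value tables.
        n_guess[b, outcome, g] += 1
        q_guess[b, outcome, g] += (reward - q_guess[b, outcome, g]) / n_guess[b, outcome, g]
        n_beta[b] += 1
        q_beta[b] += (reward - q_beta[b]) / n_beta[b]

    best = int(np.argmax(q_beta))
    print(f"learned displacement {betas[best]:+.2f} with estimated success rate {q_beta[best]:.3f}")

Like the agents in the paper, this sketch never evaluates the error probability directly: each update uses a single noisy reward, so the value tables converge only statistically, which is why the abstract counts progress in numbers of experiments rather than in exact optimization steps.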
Bilkis, M., Rosati, M., Yepes, R.M., & Calsamiglia, J. (2020). Real-time calibration of coherent-state receivers: Learning by trial and error. Physical Review Research, 2(3), 033295. https://doi.org/10.1103/PhysRevResearch.2.033295
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11590/470618
Citations
  • PMC: ND
  • Scopus: 12
  • Web of Science: 12