Language Models for Quantum Design
Georg Rieger (rieger@phas.ubc.ca) and Brett Gladman (gladman@astro.ubc.ca)
All are welcome to this event!
Abstract:
In the last several years, generative language models like GPT have scaled to the point where they routinely display emergent behaviour. I will show how language models can be trained on the qubit measurement data produced by today's quantum devices, and discuss how large models could help scale quantum computers in the future.
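For readers curious what "training a language model on qubit measurement data" can look like in practice, here is a minimal sketch, not the speaker's actual method: a small autoregressive GRU is fit to measurement bitstrings so that it models the distribution p(s_1, ..., s_N) over outcomes. The GHZ-like synthetic data, model size, and hyperparameters are all illustrative assumptions standing in for real device shots.

```python
import torch
import torch.nn as nn

N_QUBITS = 8
N_SHOTS = 2000

# Synthetic stand-in for device data: bitstrings from a toy GHZ-like
# mixture of all-zeros and all-ones, with 5% random bit flips.
base = torch.randint(0, 2, (N_SHOTS, 1)).repeat(1, N_QUBITS)
flips = (torch.rand(N_SHOTS, N_QUBITS) < 0.05).long()
data = (base + flips) % 2  # shape (N_SHOTS, N_QUBITS), values in {0, 1}

class BitstringLM(nn.Module):
    """Autoregressive model over outcomes: p(s) = prod_i p(s_i | s_<i)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(3, hidden)  # tokens: 0, 1, start token (2)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)      # logits for next outcome

    def forward(self, bits):
        # Prepend a start token; predict each qubit from the ones before it.
        start = torch.full((bits.size(0), 1), 2, dtype=torch.long)
        x = self.embed(torch.cat([start, bits[:, :-1]], dim=1))
        h, _ = self.gru(x)
        return self.head(h)  # (batch, N_QUBITS, 2)

model = BitstringLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    logits = model(data)
    loss = loss_fn(logits.reshape(-1, 2), data.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final negative log-likelihood per qubit: {loss.item():.3f}")
```

Once trained, sampling the model one qubit at a time reproduces the measurement distribution, which is the sense in which a language model can "learn" a quantum state from shot data.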
Bio:
Roger Melko is a professor at the University of Waterloo and an associate faculty member at the Perimeter Institute for Theoretical Physics. He received his PhD from the University of California, Santa Barbara in 2005 and spent two years as a Wigner Fellow at Oak Ridge National Laboratory before returning to Canada. His research involves the development of computational strategies for the theoretical study of quantum materials, atomic matter, quantum information systems, and artificial intelligence. He received the 2016 CAP Herzberg Medal and the 2021 CAP/DCMMP Brockhouse Medal.
Learn More:
- His faculty page at the University of Waterloo: Roger Melko | Physics and Astronomy | University of Waterloo
- His GitHub page: rgmelko.github.io
- His profile at the Perimeter Institute: Roger Melko | Perimeter Institute
Links:
- What is quantum computing? Quantum computing - Wikipedia
- Read about generative AI and large language models in this article from the Center for Security and Emerging Technology (CSET): "What Are Generative AI, Large Language Models, and Foundation Models?"
- Read this explainer from MIT Technology Review: "What is a quantum computer?"