Atom Computing Wins the Race to 1000 Qubits
www.hpcwire.com
Atom Computing, a developer of neutral atom-based quantum computers, today announced it has built a 1,225-site atomic array that contains 1,180 qubits. This follows building a 100-qubit system, named Phoenix, in 2021. While there are no beta users for the new device, which won't be broadly available until sometime in 2024, Atom Computing becomes the first quantum company to reach the 1000-qubit milestone in a gate-based system. The new device passes the 433-qubit mark set by IBM's Osprey QPU, which was announced late last year and deployed in May.
Founded in 2018, Atom Computing touts the new device as evidence of neutral atoms' inherent strength and rapid advance in the intensifying competition among diverse qubit types. IBM, which is developing superconducting qubits, is also on the cusp of breaching the 1000-qubit mark.
(Note: Long-time quantum pioneer D-Wave is an outlier in that it has a 5000-qubit system (Advantage) but it is not a gate-based system; it is generally agreed that gate-based approaches are needed to get the full power of fault-tolerant quantum computing, and recently D-Wave began developing gate-based technology.)
"This order-of-magnitude leap, from 100 to 1,000-plus qubits within a generation, shows our atomic array systems are quickly gaining ground on more mature qubit modalities," said Rob Hays, Atom Computing CEO, in the announcement. "Scaling to large numbers of qubits is critical for fault-tolerant quantum computing, which is why it has been our focus from the beginning. We are working closely with partners to explore near-term applications that can take advantage of these larger scale systems."
In a briefing with HPCwire, Hays noted, "We were the first to build nuclear spin qubits. We were the first to build a gate-based, neutral atom-based system. There's others that have built analog-based systems that are out there as well. We also build proprietary control systems and the software stack, so we basically build everything from the qubit up to the API, which is Qiskit and QASM (OpenQASM), so that you can program [our device] just like you'd program with IBM or an IonQ quantum system. That's what we did with the prototype (the 100-qubit Phoenix)."
"With this system, we [have demonstrated] the promise of neutral atom technology, that it scales not just faster, but also more cost effectively and more energy efficiently than we believe the other technologies will scale. They'll have to prove us right or wrong."
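As a concrete illustration of the Qiskit/OpenQASM programming interface Hays mentions, here is a minimal, hardware-agnostic sketch: a two-qubit Bell-state circuit built in Qiskit and exported to OpenQASM 3. This is a generic Qiskit workflow, not code from Atom Computing's stack, and no vendor-specific backend is named because none has been published.

```python
# Minimal, generic Qiskit sketch: build a Bell-state circuit and dump it to OpenQASM 3.
# Illustrates the kind of gate-level program a Qiskit/OpenQASM-compatible machine accepts;
# it is not Atom Computing's own code or backend interface.
from qiskit import QuantumCircuit, qasm3

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into superposition
qc.cx(0, 1)                  # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])   # read both qubits out into classical bits

print(qc.draw())             # ASCII circuit diagram
print(qasm3.dumps(qc))       # the same circuit as an OpenQASM 3 program
```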
Crossing the 1000-qubit threshold is a big step. A few years ago, it looked like superconducting qubits and trapped ion qubits were the most promising near-term candidates, with photonic approaches also highly touted but perhaps more distant. Now there's a plethora of qubit modalities: quantum dots, nitrogen-vacancy centers, topological qubits, cat qubits, and more to come. Most observers say it's still too early to know which qubit modality (or modalities) will win out. One research group has even used an atomic microscope as a quantum computer.
Superconducting and trapped ion qubits are still considered the most mature, and their developers (IBM, Google, IonQ, Quantinuum, etc.) have also played important roles in developing tools and stimulating build-out of the broader quantum ecosystem. Others are now reaping some of those benefits.
How to effectively scale up quantum computers is an active area of research. It's thought that thousands to millions of qubits will be necessary to achieve full fault-tolerant quantum computing. Currently, error rates remain so high that many physical qubits are required to deliver a single error-corrected logical qubit. Whether it will be best to build giant single devices (think a single chip, for example) or link many quantum devices together to reach practical size is an ongoing question.
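To make the "many physical qubits per logical qubit" point concrete, here is a rough back-of-envelope sketch using a standard surface-code scaling model. The formula, threshold, and prefactor are generic textbook-style assumptions, not figures from Atom Computing or IBM; the numbers only show why overheads of hundreds to thousands of physical qubits per logical qubit are commonly projected.

```python
# Illustrative surface-code overhead estimate; not any vendor's actual numbers.
# Assumed model: logical error rate p_L ~= A * (p / p_th) ** ((d + 1) / 2),
# with physical qubits per logical qubit ~= 2 * d**2 - 1 (rotated surface code).
# A ~ 0.1 and p_th ~ 1e-2 are rough literature-style constants chosen for illustration.

def required_distance(p_phys, p_logical_target, p_th=1e-2, prefactor=0.1):
    """Smallest odd code distance d whose modeled logical error rate meets the target."""
    if p_phys >= p_th:
        raise ValueError("physical error rate must be below the assumed threshold")
    d = 3
    while prefactor * (p_phys / p_th) ** ((d + 1) / 2) > p_logical_target:
        d += 2  # surface-code distances are odd
    return d

def physical_per_logical(d):
    return 2 * d * d - 1

for p_phys in (1e-3, 5e-3):
    d = required_distance(p_phys, p_logical_target=1e-12)
    print(f"physical error rate {p_phys:.0e}: distance {d}, "
          f"~{physical_per_logical(d)} physical qubits per logical qubit")
```

Under these assumed constants, a physical error rate of 1e-3 already implies roughly a thousand physical qubits per logical qubit, so an algorithm needing a few hundred logical qubits quickly lands in the hundreds of thousands to millions of physical qubits.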
IBM, which has laid out the most detailed development roadmap, is scheduled to deliver a 1000-plus qubit device (Condor) this year, as well as a smaller device (Heron) intended to demonstrate a modular approach to scaling.
Jerry Chow, IBM Fellow and Director, Quantum Systems & Runtime Technology, told HPCwire last week, "We are on track to reach the IBM Quantum Development Roadmap's 2023 milestones, including the deployment of the 133-qubit Heron processor, and the demonstration of the experimental 1,121-qubit Condor processor. Already this year, we've published research indicating the evidence of quantum utility on our 127-qubit Eagle processor, a point at which quantum computers serve as scientific tools to explore new problems that challenge even the most advanced approximate classical computation methods."
So far, Atom Computing hasn't provided much granular detail about its two systems. Broadly, the neutral atoms (in this case strontium-87) are held in an evacuated chamber. Lasers shined into the chamber create a 2D lattice pattern, and "sticky spots" that form at lattice intersections trap the atoms in position. The atoms are the qubits, and their nuclear spins are manipulated with lasers to encode states. Entanglement is accomplished by exciting selected qubits to a Rydberg state, which causes the outermost electron's orbit to expand so the atom interacts with, and becomes entangled with, its neighbors.
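A tiny sketch of the sites-versus-qubits distinction in the announcement (1,225 trap sites holding 1,180 atoms) is below. The square 35 x 35 geometry and the random loading are illustrative assumptions; Atom Computing has not published the array's actual layout or loading procedure.

```python
# Illustrative only: model a 1,225-site optical-trap array as a 35 x 35 grid (an assumption)
# and randomly mark 1,180 sites as holding an atom, mirroring the sites-vs-qubits numbers
# in the announcement. Real loading physics and geometry are not public.
import random

ROWS, COLS = 35, 35          # 35 * 35 = 1,225 trap sites
TARGET_QUBITS = 1_180        # atoms (qubits) reported in the announcement

sites = [(r, c) for r in range(ROWS) for c in range(COLS)]
loaded = set(random.sample(sites, TARGET_QUBITS))   # which traps hold an atom

print(f"{len(loaded)} qubits across {len(sites)} sites "
      f"({len(loaded) / len(sites):.1%} occupancy)")
```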
The company has a paper (Assembly and coherent control of a register of nuclear spin qubits) explaining the approach. Other companies, e.g. QuEra and Pasqal, also use neutral atoms, although, at least at present, their systems are used for analog computing schemes rather than the gate-based approach used by Atom Computing. Government labs are also exploring neutral atoms for use in quantum computing; one example is work at Sandia National Laboratories.
Here are some neutral atom advantages singled out in Atom Computing's announcement:
- Long coherence times. The company has achieved record coherence times by demonstrating its qubits can store quantum information for 40 seconds.
- Mid-circuit measurement. Atom demonstrated the ability to measure the quantum state of specific qubits during computation and detect certain types of errors without disturbing other qubits (a generic circuit sketch illustrating this follows the list).
- High fidelities. Being able to control qubits consistently and accurately to reduce the number of errors that occur during a computation.
- Error correction. The ability to correct errors in real time.
- Logical qubits. Implementing algorithms and controls to combine large numbers of physical qubits into a "logical qubit" designed to yield correct results even if some of the underlying physical qubits suffer errors.
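For readers unfamiliar with mid-circuit measurement, the sketch below shows the generic idea in Qiskit: an ancilla qubit measures the ZZ parity of two data qubits partway through a circuit, flagging a bit-flip error without collapsing the data qubits' encoded state. This is a textbook-style illustration, not Atom Computing's implementation or control code.

```python
# Generic mid-circuit-measurement sketch in Qiskit (not Atom Computing's stack):
# an ancilla reads out the ZZ parity of two data qubits mid-circuit, so an error
# flag is available while the data qubits keep evolving undisturbed.
from qiskit import ClassicalRegister, QuantumCircuit, QuantumRegister

data = QuantumRegister(2, "data")
anc = QuantumRegister(1, "ancilla")
flag = ClassicalRegister(1, "flag")
out = ClassicalRegister(2, "out")
qc = QuantumCircuit(data, anc, flag, out)

qc.h(data[0])
qc.cx(data[0], data[1])        # prepare a Bell pair on the data qubits

qc.cx(data[0], anc[0])         # copy the ZZ parity of the data pair onto the ancilla
qc.cx(data[1], anc[0])
qc.measure(anc[0], flag[0])    # mid-circuit measurement: flag = 1 signals a bit-flip
qc.reset(anc[0])               # the ancilla can be reused later in the circuit

qc.measure(data, out)          # final readout of the (undisturbed) data qubits
print(qc.draw())
```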
"That's correct," said Hays, referencing a photo (on the right). "This is the picture of the vacuum system itself; it's actually a multistage vacuum. The qubits in the computational chamber sit in the bottom of that image. There's a little black dot; that's actually a window. That's where the lasers go in and the camera reads out through one of many windows in that system. That vacuum is about the size of a grapefruit, then there's some stacks and some stuff; it's about 30 centimeters tall, total.
"That gives you an idea of kind of what's at the heart of the system. That's all on an optical table, with modules of optics and lasers and things that rack and stack around it. You'll see one of the photos (image at top of the page) is the complete system with the shroud around it, so basically the chassis around it. That's what the complete system looks like. It's sitting in a room-temperature room. There's no dilution refrigerators or helium or anything like that going on inside the system," he said.
Unlike some of its rivals, Atom Computing seems to have concentrated more on quickly achieving scale than on showcasing early collaborations.
Said Hays, "We don't see a lot of advantage right now in talking about our customer interactions publicly when we don't have a product that someone could buy. But you can imagine that as soon as we go put a product out there in the public, we would want to have lots of customer testimonials and use cases and proofs of concept that are already in the bag that we can use for marketing and sales purposes. We're building that now in the background."
One exception is Atom Computing's collaboration with the Department of Energy's National Renewable Energy Laboratory (NREL), announced in July. That program will explore using Atom Computing's neutral atom platform for a variety of energy grid management applications. Today's announcement also included testimonials from Vodafone. (See press release)
Scaling up the new system, said Hays, leveraged technology from the 100-qubit prototype. "We needed a combination of more laser power, maintaining very clean lasers, meaning not a lot of noise or anything that's interjected into the system, and very precise control of amplitude, phase, and frequency of the light in free space, in three dimensions. A lot of the technology was based around the control systems in vacuum optics to make that happen. As we scale forward we'll continue with technology advances, mostly in the optical realm."
Hays noted that while Atom Computing is focused on gate-based development, the neutral atom modality gives it the flexibility to pursue both gate-based and analog approaches.
"You can actually have the flexibility with this modality to do either path. This is the big difference between the analog systems that QuEra and Pasqal have developed and the gate-based systems that we have. In an analog system, you basically take the problem, like the network optimization problem or whatever you're trying to work on, and you actually arrange the atoms to map to the problem.
"We actually do it the opposite way. We chose to have a fixed lattice of atoms, and then figure out logically, through gates and algorithmic mapping, how to run that circuit in the most optimal way on the hardware, on the fixed architecture that we've built. There's pros and cons to both [approaches]. One of the pros of having a fixed architecture is speed. Because if you're going to rearrange atoms, anytime you move something physically, it takes a lot longer than if you just keep it fixed," said Hays.
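The "algorithmic mapping onto a fixed architecture" step Hays describes is essentially what circuit compilers do today. Below is a generic Qiskit sketch that routes an abstract circuit onto a fixed 2D grid of qubit connections; the 5x5 grid is a stand-in chosen for illustration and is not Atom Computing's actual lattice or native gate set.

```python
# Illustrative only: compile an abstract circuit onto a fixed 2-D coupling grid,
# mimicking the "fixed lattice + algorithmic mapping" approach described above.
# The 5x5 grid is an assumption for demonstration, not Atom Computing's layout.
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

coupling = CouplingMap.from_grid(5, 5)   # stand-in for a fixed square lattice of qubit sites

qc = QuantumCircuit(8)
qc.h(0)
for i in range(7):
    qc.cx(i, i + 1)                      # GHZ-style chain of entangling gates
qc.measure_all()

# The transpiler picks a layout on the grid and inserts swaps so every two-qubit
# gate acts on neighboring sites, rather than rearranging the "atoms" themselves.
compiled = transpile(qc, coupling_map=coupling, optimization_level=2)
print("ops after mapping:", dict(compiled.count_ops()))
print("depth after mapping:", compiled.depth())
```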
As noted earlier, the best way to scale up quantum computers is still being determined. IBM seems to be migrating towards a more modular strategy. Building giant qubit devices and getting signals in and out of them is just plain hard. IBM's 433-qubit Osprey, announced late in 2022, debuted in May with 413 of its qubits actually accessible. Hays tends to agree that modular is likely the best way to go.
"Eventually, I think, we want to connect stuff together, because bigger is always better. But that's actually one of the distinguishing things about our approach. We can get a million or a few million qubits in one module that's the size of a mouse, because the atoms are so tiny and they're so close together. So, we don't have to do module interconnects for a long time, but that said, there are obviously advantages," said Hays.
"If I can get a million qubits in a module in a given year, well, why don't I want 10 million or 20 million or 40 million if I can connect some of those together? I think that's a problem that we still would like to see solved in the industry, and we're not alone. If we can figure out how to get an entangled photon out of our system, and in and out of our module, and someone else can solve a networking problem, how to move the photons over fiber or something like that, or through a switch, that's great. We can use that technology. That technology could exist for multiple modalities and multiple companies."
Most quantum watchers would agree that lately, efforts to tackle error correction and mitigation have gained equal footing with efforts to build bigger quantum computers. "We want 100 good qubits, not 1000 noisy ones," is the user community refrain. Hays knows this. Scale and effective error correction are both required.
He said, "I think what's important about the scaling announcement is not just the disclosure of what these commercial systems are going to look like, but really starting to set the pace for scaling towards fault tolerance. To get to fault tolerance, you're going to need hundreds of thousands to millions of qubits, because it takes that many to get mapped with error correction and all of that. The quality of the qubits is absolutely very important, and we will be competitive on all those major metrics, but whoever scales the fastest and the cheapest is likely ultimately to win."