đŸ„‡ Atom Computing Wins the Race to 1000 Qubits [1,180 so far]

Atom Computing, a developer of neutral atom-based quantum computers, today announced it has built a 1,225-site atomic array that contains 1,180 qubits. This follows the 100-qubit Phoenix system built in 2021. While there are no beta users for the new device, which won’t be broadly available until sometime in 2024, Atom Computing becomes the first quantum company to reach the 1,000-qubit milestone in a gate-based system. The new device surpasses the 433-qubit mark set by IBM’s Osprey QPU, which was announced late last year and deployed in May.

Founded in 2018, Atom Computing touts the new device as evidence of neutral atoms’ inherent strength and rapid advance in the intensifying competition among diverse qubit types. IBM, which is developing superconducting qubits, is also on the cusp of breaching the 1000-qubit mark.

(Note: Long-time quantum pioneer D-Wave is an outlier in that it has a 5000-qubit system (Advantage) but it is not a gate-based system; it is generally agreed that gate-based approaches are needed to get the full power of fault-tolerant quantum computing, and recently D-Wave began developing gate-based technology.)

“This order-of-magnitude leap – from 100 to 1,000-plus qubits within a generation – shows our atomic array systems are quickly gaining ground on more mature qubit modalities,” said Rob Hays, Atom Computing CEO, in the announcement. “Scaling to large numbers of qubits is critical for fault-tolerant quantum computing, which is why it has been our focus from the beginning. We are working closely with partners to explore near-term applications that can take advantage of these larger scale systems.”

In a briefing with HPCwire, Hays noted, “We were the first to build nuclear spin qubits. We were the first to build a gate-based, neutral atom-based system. There’s others that have built analog-based systems that are out there as well. We also build proprietary control systems and the software stack, so we basically build everything from the qubit up to the API, which is Qiskit and QASM (OpenQASM), so that you can program [our device] just like you’d program with IBM or an IonQ quantum system. That’s what we did with the prototype (100-qubit, Phoenix).”
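
The practical upshot of that API claim is portability: a gate-model circuit written in plain Qiskit, or exported as OpenQASM, is hardware-agnostic. A minimal sketch using the public Qiskit 1.x API follows; this is generic code, not Atom Computing’s backend integration, which the company hasn’t detailed.

```python
# Minimal gate-model circuit in Qiskit. Illustrative only; Atom
# Computing's actual backend plumbing is not publicly documented here.
from qiskit import QuantumCircuit
from qiskit.qasm2 import dumps  # OpenQASM 2 serializer in Qiskit 1.x

# Two-qubit Bell-state circuit; the same abstraction can target
# superconducting, trapped-ion, or neutral-atom backends.
qc = QuantumCircuit(2, 2)
qc.h(0)                      # Hadamard puts qubit 0 in superposition
qc.cx(0, 1)                  # CNOT entangles qubits 0 and 1
qc.measure([0, 1], [0, 1])   # read both qubits out

print(dumps(qc))             # emit the circuit as an OpenQASM 2 program
```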

“With this system, we [have demonstrated] the promise of neutral atom technology, that it scales not just faster, but also more cost effectively and more energy efficiently than we believe the other technologies will scale. They’ll have to prove us right or wrong.”

Crossing the 1000-qubit threshold is a big step. A few years ago, it looked like superconducting qubits and trapped ion qubits were the most promising near-term candidates, with photonic approaches also highly touted but perhaps more distant. Now there’s a plethora of qubit modalities – quantum dots, nitrogen-vacancy centers, topological qubits, cat qubits, and more to come. Most observers say it’s still too early to know which qubit modality (or modalities) will win out. One research group has even used an atomic microscope as a quantum computer.

Superconducting and trapped ion qubits are still considered the most mature and their developers – IBM, Google, IonQ, Quantinuum, etc. – have also played important roles in developing tools and stimulating build-out of the broader quantum ecosystem. Others are now reaping some of those benefits.

How to effectively scale up quantum computers is an active area of research. It’s thought that thousands-to-millions of qubits will be necessary to achieve full fault-tolerant quantum computing. Currently, error rates remain so high that many physical qubits are required to deliver a single error-corrected logical qubit. Whether it will be best to build giant single devices – think a single chip for example – or link many quantum devices together to reach practical size is an ongoing question.
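
A back-of-envelope calculation shows where the “thousands-to-millions” figure comes from. Assuming a surface-code-style overhead of roughly 2d² physical qubits per distance-d logical qubit (an illustrative assumption; the article doesn’t name a particular code), the totals climb quickly:

```python
# Rough physical-to-logical overhead under a surface-code-style layout.
# The code distances and machine size below are illustrative assumptions,
# not figures from Atom Computing or IBM.

def physical_per_logical(d: int) -> int:
    """Approximate physical qubits for one distance-d surface-code logical qubit."""
    return 2 * d**2 - 1  # d^2 data qubits + (d^2 - 1) syndrome qubits

for d in (11, 17, 25):                 # plausible code distances
    per_logical = physical_per_logical(d)
    total = 1_000 * per_logical        # a hypothetical 1,000-logical-qubit machine
    print(f"d={d}: {per_logical} physical per logical, "
          f"{total:,} physical qubits in total")
```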

IBM, which has laid out the most detailed development roadmap, is scheduled to deliver a 1000-plus qubit device (Condor) this year, as well as a smaller device (Heron) intended to demonstrate a modular approach to scaling.

Jerry Chow, IBM Fellow and Director, Quantum Systems & Runtime Technology, told HPCwire last week, “We are on track to reach the IBM Quantum Development Roadmap’s 2023 milestones, including the deployment of the 133-qubit Heron processor, and the demonstration of the experimental 1,121-qubit Condor processor. Already this year, we’ve published research indicating the evidence of quantum utility on our 127-qubit Eagle processor—a point at which quantum computers serve as scientific tools to explore new problems that challenge even the most advanced approximate classical computation methods.”

So far, Atom Computing hasn’t provided much granular detail about its two systems. Broadly, the neutral atoms – in this case strontium-87 – are placed in an evacuated chamber. Lasers shone into the chamber create a 2D lattice pattern. “Sticky spots” form at lattice intersections and trap the atoms in position. The atoms are the qubits, and their nuclear spins are manipulated with lasers to encode states. Entanglement is accomplished by inducing a Rydberg state in selected qubits, causing the outer shell of electrons to expand and become entangled with neighbors.
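
The Rydberg mechanism can be made semi-quantitative: two atoms closer than the blockade radius, roughly (C6/Ω)^(1/6) for a van der Waals interaction C6/R⁶ and drive strength Ω, cannot both be excited, and that conditional behavior is what mediates the entangling gate. The coefficient and Rabi frequency below are assumed order-of-magnitude values from the neutral-atom literature, not Atom Computing’s parameters:

```python
# Order-of-magnitude Rydberg blockade radius. C6 and the Rabi frequency
# are assumptions, not disclosed specs.
C6_over_h = 500e9 * 1e-36   # assumed C6/h ~ 500 GHz*um^6, converted to Hz*m^6
rabi_hz = 1e6               # assumed Rabi frequency ~ 1 MHz

# Blockade radius: where the interaction C6/R^6 equals the drive strength
R_b = (C6_over_h / rabi_hz) ** (1 / 6)
print(f"blockade radius ~ {R_b * 1e6:.0f} micrometers")  # ~9 um with these values
```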

The company has a paper (Assembly and coherent control of a register of nuclear spin qubits) explaining the approach. Other companies – e.g. QuEra and Pasqal – also use neutral atoms although, at least at present, their systems are used for analog computing schemes rather than the gate-based approach used by Atom Computing. Government labs are also exploring neutral atoms for use in quantum computing – one example is work at Sandia National Laboratories.

Here are some neutral atom advantages singled out in Atom Computing’s announcement:

  • Long coherence times. The company has achieved record coherence times by demonstrating its qubits can store quantum information for 40 seconds.
  ‱ Mid-circuit measurement. Atom demonstrated the ability to measure the quantum state of specific qubits during computation and detect certain types of errors without disturbing other qubits (see the sketch after this list).
  • High fidelities. Being able to control qubits consistently and accurately to reduce the number of errors that occur during a computation.
  • Error correction. The ability to correct errors in real time.
  ‱ Logical qubits. Implementing algorithms and controls to combine large numbers of physical qubits into a “logical qubit” designed to yield correct results even if some of the underlying physical qubits suffer errors.
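
For the mid-circuit measurement item above, here is a hedged sketch of the concept in Qiskit: an ancilla qubit is measured partway through the circuit to check a parity (a stabilizer) of two data qubits, flagging certain errors without disturbing the encoded state. This is a generic illustration, not Atom Computing’s control stack.

```python
# Generic mid-circuit parity check in Qiskit (illustrative only).
from qiskit import QuantumCircuit

qc = QuantumCircuit(3, 3)    # qubits 0,1 = data, qubit 2 = ancilla
qc.h(0)
qc.cx(0, 1)                  # prepare an entangled data pair
qc.cx(0, 2)                  # map the Z*Z parity of the data pair...
qc.cx(1, 2)                  # ...onto the ancilla
qc.measure(2, 0)             # mid-circuit: read only the ancilla
qc.reset(2)                  # the ancilla can be reused afterwards
qc.measure([0, 1], [1, 2])   # the data qubits read out later, undisturbed
```
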
Another advantage is that neutral-atom-based systems don’t require exotic cold-temperature environments. The cooling, if you will, is done by the lasers that control the motion of the atoms.

“That’s correct,” said Hays, referencing a photo (on the right). “This is the picture of the vacuum system itself, it’s actually a multistage vacuum. The qubits in the computational chamber sit in the bottom of that image, there’s a little black dot, that’s actually a window. That’s where the lasers go in and the camera reads out through one of many windows in that system. That vacuum is about the size of a grapefruit, then there’s some stacks and some stuff – it’s about 30 centimeters tall, total.

“That gives you an idea of kind of what’s at the heart of the system. That’s all on an optical table, with modules of optics and lasers and things that rack and stack around it. You’ll see one of the photos (image at top of the page) is the complete system with the shroud around it, so basically the chassis around it. That’s what the complete system looks like. It’s sitting in a room-temperature room. There’s no dilution refrigerators or helium or anything like that going on inside the system,” he said.
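
A short calculation shows why lasers alone suffice: the Doppler cooling limit, T = ħΓ/(2k_B), already reaches sub-millikelvin temperatures with no cryostat. The linewidth below is a textbook-style value for a broad strontium cooling transition, assumed for illustration rather than taken from Atom Computing:

```python
# Doppler cooling limit T = hbar * Gamma / (2 * kB); illustrative numbers.
import math

hbar = 1.0546e-34            # reduced Planck constant (J*s)
kB = 1.381e-23               # Boltzmann constant (J/K)
gamma = 2 * math.pi * 30e6   # assumed natural linewidth, ~2*pi*30 MHz (rad/s)

T_doppler = hbar * gamma / (2 * kB)
print(f"Doppler limit ~ {T_doppler * 1e6:.0f} microkelvin")  # ~720 uK
```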

Unlike some of its rivals, Atom Computing seems to have concentrated more on quickly achieving scale than on showcasing early collaborations.

Said Hays, “We don’t see a lot of advantage right now in talking about our customer interactions publicly when we don’t have a product that someone could buy. But you can imagine that as soon as we go put a product out there in the public, we would want to have lots of customer testimonials and use cases and proofs of concept that are already in the bag that we can use for marketing and sales purposes. We’re building that now in the background.”

One exception is Atom Computing’s collaboration with the Department of Energy’s National Renewable Energy Laboratory (NREL), announced in July. That program will explore using Atom Computing’s neutral atom platform for a variety of energy grid management applications. Today’s announcement also included testimonials from Vodafone. (See press release)

Scaling up the new system, said Hays, leveraged technology from the 100-qubit prototype. “We needed a combination of more laser power, maintaining very clean lasers, meaning not a lot of noise or anything that’s interjected to the system, and very precise control of amplitude and phase frequency of the light in free space, in three dimensions. A lot of the technology was based around the control systems in vacuum optics to make that happen. As we scale forward we’ll continue with technology advances, mostly in the optical realm.”

Hays noted that while Atom Computing is focused on gate-based development, the neutral atom modality gives it the flexibility to pursue either gate-based or analog approaches.

“You can actually have the flexibility with this modality to do either path. This is the big difference between the analog systems that QuEra and Pasqal have developed and the gate-based systems that we have. In an analog system, you basically take the problem, like the network optimization problem or whatever you’re trying to work on, and you actually arrange the atoms to map to the problem.

“We actually do it the opposite way. We chose to have a fixed lattice of atoms, and then figure out logically, through gates and algorithmic mapping, how to run that circuit in the most optimal way on the hardware, on the fixed architecture that we’ve built. There’s pros and cons to both [approaches]. One of the pros of having a fixed architecture is speed. Because if you’re going to rearrange atoms, anytime you move something physically, it takes a lot longer than if you just keep it fixed,” said Hays.
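
The fixed-lattice approach Hays describes corresponds to a standard compiler flow: the circuit is routed onto fixed hardware connectivity rather than the hardware being rearranged to fit the circuit. A sketch using Qiskit’s transpiler with an assumed 5x5 grid (an illustrative stand-in, not Atom Computing’s actual layout):

```python
# Routing a circuit onto a fixed 2D lattice with Qiskit's transpiler.
# The 5x5 grid is an assumption for illustration.
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

coupling = CouplingMap.from_grid(5, 5)   # fixed 25-qubit square lattice

qc = QuantumCircuit(25)
qc.h(0)
for i in range(24):
    qc.cx(i, i + 1)                      # a long entangling chain

# The transpiler inserts SWAPs and reorders gates so every two-qubit
# gate lands on physically adjacent lattice sites.
mapped = transpile(qc, coupling_map=coupling, optimization_level=1)
print(mapped.count_ops())
```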

As noted earlier, the best way to scale up quantum computers is still being determined. IBM seems to be migrating toward a more modular strategy. Building giant qubit devices and getting signals in and out of them is just plain hard. IBM’s 433-qubit Osprey, announced late in 2022, debuted in May with 413 of its qubits actually accessible. Hays tends to agree that modular is likely the best way to go.

“Eventually, I think, we want to connect stuff together, because bigger is always better. But that’s actually one of the distinguishing things about our approach. We can get a million or a few million qubits in one module that’s the size of a mouse, because the atoms are so tiny and they’re so close together. So, we don’t have to do module interconnects for a long time, but that said, there are obviously advantages,” said Hays.
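
The “size of a mouse” claim survives a napkin check: at an optical-tweezer spacing of a few micrometers (our assumption, since Atom Computing hasn’t published its lattice pitch), a million-atom plane is only millimeters across:

```python
# Footprint of a 1,000 x 1,000 atom array at an assumed 5 um spacing.
spacing_um = 5.0
n_side = 1_000
side_mm = n_side * spacing_um / 1_000    # array edge length in millimeters
print(f"{n_side**2:,} atoms span about {side_mm:.0f} mm x {side_mm:.0f} mm")
# -> 1,000,000 atoms in roughly a 5 mm x 5 mm plane
```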

“If I can get a million qubits in a module in a given year, well, why don’t I want 10 million or 20 million or 40 million if I can connect some of those together. I think that’s a problem that we still would like to see solved in the industry and we’re not alone. If we can figure out how to get an entangled photon out of our system, and in and out of our module, and someone else can solve a networking problem, how to move the photons over fiber or something like that, or through a switch. That’s great. We can use that technology. That technology could exist for multiple modalities and multiple companies.”

Most quantum watchers would agree that lately, the efforts to tackle error correction and mitigation have gained equal footing with efforts to build bigger quantum computers. ‘We want 100 good qubits, not 1000 noisy ones,’ is the user community refrain. Hays knows this. Scale and effective error correction are both required.

He said, “I think what’s important about the scaling announcement is not just the disclosure of what these commercial systems are going to look like, but really starting to set the pace for scaling towards fault tolerance. To get to fault tolerance, you’re going to need hundreds of thousands to millions of qubits, because it takes that many to get mapped with error correction and all of that. The quality of the qubits is absolutely very important, and we will be competitive on all those major metrics, but whoever scales the fastest and the cheapest is likely ultimately to win.”
