November 18, 2023
[Image: a regular grid of lighter dots on a dark blue background. Caption: Qubits of the new hardware, an array of individual atoms. Credit: Atom Computing]

Today, a startup called Atom Computing announced that it has completed internal testing of a 1,180-qubit quantum computer and will make the system available to customers next year. It represents a major step forward for the company, whose only previous machine, also based on neutral-atom qubits, operated with just 100 qubits.

The error rate of individual qubit operations is high enough that it won't be possible to run an algorithm that relies on the full qubit count without a failure. But the new machine backs up the company's claim that its technology can scale quickly, and it provides a testbed for work on quantum error correction. For smaller algorithms, the company says it will simply run multiple instances in parallel to increase the chance of returning the correct answer.

Computing with atoms

Atom Computing, as its name suggests, has chosen neutral atoms as its qubits (other companies work with ions instead). These systems rely on a set of lasers that creates an array of sites where it is energetically favorable for atoms to sit. Left on their own, atoms tend to fall into these sites and stay there until a stray gas molecule bumps into them and knocks them out.

Since the location of the atoms is set by the configuration of the lasers, each one can be addressed individually. Quantum information is stored in the nuclear spin, which is relatively well insulated from environmental influences. While some other types of qubits have coherence times of only a fraction of a second, neutral atoms will often hold their state for tens of seconds. And because the nuclear spin does not readily interact with its surroundings, the atoms can be packed tightly together, allowing a relatively dense system.

It is still possible, however, to manipulate the atoms so that they interact and become entangled. This works through what's called the Rydberg blockade, which limits interactions unless two atoms are within a set distance of each other and both are in the Rydberg state, where their outermost electron is only loosely bound and orbits far from the nucleus. By placing the right pairs of atoms in the Rydberg state (which can also be done with lasers), it is possible to entangle them. And since the lasers allow control over the location of individual atoms, it is possible to entangle any two.
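The two-part condition described above can be sketched as a toy geometric check. This is a purely classical illustration, not real physics; the blockade radius and atom positions below are made-up numbers, and `can_entangle` is a hypothetical helper, not anything in Atom Computing's stack:

```python
# Toy model of the Rydberg blockade condition: two atoms can only
# interact (and so become entangled) when both are excited to the
# Rydberg state AND sit within the blockade radius of each other.
from math import dist

BLOCKADE_RADIUS_UM = 5.0  # hypothetical blockade radius, in microns

def can_entangle(pos_a, pos_b, a_rydberg, b_rydberg):
    """Both atoms must be in the Rydberg state and close enough."""
    return a_rydberg and b_rydberg and dist(pos_a, pos_b) <= BLOCKADE_RADIUS_UM

# Atoms far apart: no interaction even if both are excited.
print(can_entangle((0.0, 0.0), (20.0, 0.0), True, True))  # False

# Move one atom next to the other (the lasers allow this), then excite both.
print(can_entangle((0.0, 0.0), (3.0, 0.0), True, True))   # True
```

Because the lasers can reposition atoms at will, the "close enough" condition can be satisfied for any pair, which is why arbitrary two-qubit entanglement is possible in such systems.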

Because this system allows atoms to be packed relatively close together, Atom Computing argues it is well positioned to scale quickly. Unlike systems such as transmons, where small differences in device fabrication produce qubits with small variations in performance, every trapped atom is guaranteed to behave identically. And since the atoms do not experience crosstalk unless deliberately manipulated, many of them can fit into a relatively small space.

These two factors, company executives argue, make neutral atoms well positioned to scale up to large numbers of qubits. The company's original system, which went online in 2021, was a 10×10 grid of atoms (although three-dimensional arrangements are also possible). When executives spoke to Ars a year ago, they said they hoped to scale the next-generation system up by an order of magnitude, though they wouldn't say when they expected it to be ready.

It’s almost done

Atom Computing is now using the system internally and plans to open it to the public next year. The hardware has grown from a 10×10 grid to a 35×35 grid, bringing the number of potential atom sites to 1,225. So far, testing has taken place with up to 1,180 atoms present, making it the largest publicly acknowledged machine yet (at least in terms of qubit count).

The qubits are housed in a 12×5-foot box that contains the lasers and optics, along with the vacuum system and some unused space; Atom CEO Rob Hayes said "there's a lot of air inside that box." It does not, however, contain the computing hardware that controls the system and its operations. The grid of atoms the box creates is only about 100 microns on a side, so continuing to increase the number of qubits won't strain the hardware.


Some of the changes relative to Atom's first machine focused on managing the transition from a research system, most useful for people learning to work with atom-based quantum computing, to one with the stability needed by customers who are more interested in the algorithms they can run on it. "We've also added technology around uptime and availability to make this a real product, a real cloud service," Hayes said.

This is an added challenge for atom-based systems because of the inevitability of collisions between the trapped atoms and stray gas molecules in the vacuum chamber. Ben Bloom, Atom's founder and CTO, said an array of atoms can typically be maintained for over 100 seconds. That's enough for many calculations but still means the system as a whole needs to be reset regularly.

As mentioned earlier, however, customers won't be able to use all of these qubits for a single calculation; an error would inevitably occur. So for now, the emphasis is on running algorithms that require fewer qubits and operations. This keeps things below the error threshold while letting companies develop algorithms that will become useful as quantum computers improve, or possibly find individual cases where existing hardware is already sufficient to produce useful results.

These types of calculations are often run multiple times to give confidence in the results and get a sense of the error rate. And here the high qubit count can also be useful. “We’re actually just going to use all these qubits, because they’re all identical, to actually parallelize the computation,” Bloom said. “So if someone gives us a 50-qubit algorithm, we’ll do that 50-qubit algorithm on all of our qubits, and then we’ll give you the results faster.”
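Bloom's parallelization idea can be sketched classically: run the same small algorithm on many independent groups of qubits at once, then take the most common outcome. In the sketch below, `run_once` is a stand-in for one noisy execution, and the bitstring, error rate, and copy count are invented for illustration:

```python
# Sketch: repeat a noisy computation on many qubit groups in parallel
# and take the majority answer. All numbers here are illustrative.
import random
from collections import Counter

CORRECT = "101"   # hypothetical correct answer bitstring
ERROR_RATE = 0.3  # hypothetical chance a single run returns garbage

def run_once(rng):
    """One simulated noisy run: usually correct, sometimes random."""
    if rng.random() < ERROR_RATE:
        return format(rng.getrandbits(3), "03b")  # a random (likely wrong) answer
    return CORRECT

def run_parallel(n_copies, seed=0):
    """Run n_copies instances 'in parallel' and majority-vote the results."""
    rng = random.Random(seed)
    results = [run_once(rng) for _ in range(n_copies)]
    answer, count = Counter(results).most_common(1)[0]
    return answer, count / n_copies

answer, agreement = run_parallel(n_copies=20)
print(answer, f"{agreement:.0%} of runs agreed")
```

The repeated runs serve double duty: the majority vote boosts confidence in the answer, and the spread of disagreeing results gives a rough empirical sense of the error rate.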

A question of scale

But the main focus has simply been on scaling the qubit count so that quantum error correction becomes possible. Error-correction schemes typically involve spreading a single logical qubit across multiple hardware qubits, and therefore demand much more of that hardware. "Our goal is to get a single system to have a useful number of qubits," Bloom told Ars. "And for us, that probably means hundreds of thousands to millions of qubits in a single system."
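The core trade-off (many physical qubits buying one reliable logical qubit) can be illustrated with the simplest classical analogue, a repetition code with majority-vote decoding. Real quantum codes are far more involved and are not what Atom Computing is described as running here; the flip probability and code size below are made up:

```python
# Classical sketch of the error-correction idea: encode one logical bit
# across n physical bits, apply random flips, and recover by majority vote.
import random

def encode(bit, n=5):
    return [bit] * n  # one logical bit -> n physical copies

def noisy(bits, flip_prob, rng):
    """Flip each bit independently with probability flip_prob."""
    return [b ^ (rng.random() < flip_prob) for b in bits]

def decode(physical_bits):
    return int(sum(physical_bits) > len(physical_bits) / 2)

rng = random.Random(1)
flip_prob = 0.1  # illustrative per-bit error rate
trials = 10_000
raw_errors = sum(noisy([1], flip_prob, rng)[0] != 1 for _ in range(trials))
enc_errors = sum(decode(noisy(encode(1), flip_prob, rng)) != 1 for _ in range(trials))
print(raw_errors / trials, enc_errors / trials)
```

With a 10 percent per-bit error rate, the five-bit code only fails when three or more bits flip at once, so the logical error rate lands around 1 percent: better reliability, bought with five times the hardware. That multiplier is why Bloom's target runs to hundreds of thousands of qubits.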

One of the capabilities needed for error correction has already been demonstrated on Atom Computing's hardware: non-destructive measurements of the atoms in the middle of a calculation, something necessary for recognizing and correcting errors.

But other pieces are still being worked out. In the previous version of the system, connections between qubits were made by moving individual atoms next to each other to entangle them. The process of moving them, though, could become a bottleneck as the number of qubits continues to grow. "Moving right now is slower than our [qubit operations]," Bloom told Ars. "And so if you go into a world where you're doing error correction, I think you're going to have to have a huge advantage to offset the costs of moving."

An error-corrected qubit can also take different shapes, based on different configurations of the underlying hardware qubits. Early efforts have generally been tested on hardware with two-dimensional arrays of qubits. But three-dimensional schemes are also possible, and with the right configuration of lasers, Atom's hardware can support 3D arrays. "In general, 3D has a lot of advantages," Bloom said. "It's again a matter of carefully mapping time-to-solution for fault-tolerant algorithms and understanding whether the trade-offs are worth the complexity."

One of the goals of the new system is to begin understanding these issues. At the same time, the company is working to ensure the architecture can continue scaling to ever-higher qubit counts. On that front, it received some good news in the form of three papers published in Nature last week, all showing similar systems operating with high fidelity. And Bloom said that, for the first time, the residual noise was not due to the lasers that make the system work.

"What has held back neutral atoms, until these papers were published, has just been all the classical stuff we use to control the neutral atoms," Bloom said. "And what they've essentially shown is that if you can work on the classical side — work with engineering firms, work with laser manufacturers (which is something we do) — you can actually push all that noise down. And now all of a sudden you're left with this incredibly, incredibly pure quantum system."

For Atom itself, the step up from 100 to 1,000 qubits was made without significantly increasing the required laser power, which will make it easier to keep raising the qubit count. And, Bloom added, "We think the amount of challenges we had to face to go from 100 to 1,000 is probably significantly higher than the amount of challenges we're going to face when we go to what we want to go to next — 10,000, 100,000."

Correction: The original article misspelled Atom’s founder’s last name and had the wrong time for public availability.

