Podcast | Logical Qubits Arrive - with QuEra

Quantum computing needs error-corrected logical qubits to exit the noisy intermediate-scale quantum (NISQ) era and bring real advantage to practical business and other use cases. A recent experiment at Harvard succeeded at creating 48 logical qubits on a neutral-atom platform, and the techniques will be implemented in production systems in the future. We may have 100 logical qubits by 2026! Join host Konstantinos Karagiannis for a chat with Alex Keesling from QuEra about this vastly accelerated timeline and what it means for the industry, and find out how soon you can start using logical qubits in the cloud.

Guest: Alex Keesling from QuEra

Alex Keesling: At 30 logical qubits, we expect to see a lot of the early applications people have been ideating for the last several years translating to logical qubits, seeing that this is performing correctly, and then porting it over to, in the fourth generation, the larger number of qubits — the 100 logical qubits — where you can just no longer have a classical computer predict exactly what the quantum computer will do.

Konstantinos Karagiannis: Quantum computing needs error-corrected logical qubits to exit the noisy intermediate-scale quantum, or NISQ, era and bring real advantage to practical business and other use cases. A recent Harvard experiment created 48 logical qubits using a neutral-atom platform, and the techniques will be implemented in QuEra production systems going forward. We might have 100 logical qubits by 2026. Learn what this vastly accelerated timeline means for the industry, and find out how soon you can start using logical qubits in the cloud in this episode of The Post-Quantum World. I'm your host, Konstantinos Karagiannis. I lead Quantum Computing Services at Protiviti, where we're helping companies prepare for the benefits and threats of this exploding field. I hope you'll join each episode as we explore the technology and business impacts of this post-quantum era.

We have a repeat guest today: the CEO of QuEra Computing, Alex Keesling. Welcome back to the show.

Alex Keesling: Hi. Very happy to be back.

Konstantinos Karagiannis: We've crossed paths lots of times since your last appearance, at different shows and events. But there's an exciting reason I have for inviting you back. Let's get right to the headline and not make anyone wait in suspense. Forty-eight logical qubits — wow. Tell us about this amazing development.

Alex Keesling: That's even in the title of the paper where this was reported. At the end of last year, we had the chance to talk to the world about some work we've been supporting at Harvard University. This was led by the group of Mikhail Lukin at Harvard, with support from QuEra, both from some of our scientists and from some of the hardware we've been building to integrate into the device that was created there, a device I have very fond memories of from my Ph.D.
This result showed that with a new architecture for quantum computing using neutral atoms, you can do very powerful things with a large number of them — collect them into logical qubits and start a new era of quantum computing. We're moving into running algorithms on logical qubits, and this progress has been incredibly fast. We'll go into more detail about what was done in this demonstration: different types of error correction, error detection and error mitigation, showing that you can start doing very complex things with not just physical but also logical qubits. It's creating a new inflection point in quantum computing where the kind of discussion we can have is at a much higher level of abstraction. And this is something that, at QuEra, we're building on. This is shaping our roadmap. We're very excited about the announcement at the end of the year of this work, led by Harvard, but we're also super excited about the plans we've laid out for the future of the technology we're building here at QuEra and how it will allow users to start playing this year with these concepts and interfacing with devices with logical qubits at a rapidly growing pace, targeting 100 logical qubits by 2026. There's a lot there, and we'll break it down.

Konstantinos Karagiannis: It's a perfect teaser. It literally covers everything I'm going to ask you. Now we get to drill down into all of it. In case the concept is new to some folks, can you give a little detail around what a logical qubit is? I'd be shocked if someone who listens to a few episodes hasn't heard the term, but maybe you could cover what a logical qubit means and why it's so important to the future of quantum computing.

Alex Keesling: There's something we all understand: Quantum computers will have tremendous computational power for several types of applications. But we're looking at this from the point of view of, What will an abstract quantum computer be able to do? To get to these powerful quantum computers, we need to go from where we are today — with tens, hundreds, maybe a thousand qubits that introduce errors at a constant rate every time they do even the most basic operations — to having millions of qubits and much better performance than we have right now. It turns out that you can think about making every single component orders of magnitude better to get to that point, which is a daunting engineering task. Or you can take a different path, building on our understanding of error correction for classical computers and for classical communications, where there's a very simple core idea: By taking many bits — or, in our case, with quantum computers, qubits — of information, you can redundantly encode a smaller number of qubits in them. As the computation happens, if any mistakes are introduced, you can catch those errors, pinpoint where they happened and correct them. The classical example is, if I wanted to share a bit of information with you — it's either a yes or a no, and the yes is a 0, and the no is a 1 — and I try to send it to you, if there's some noise in the channel, you might see my 0 every once in a while flipped into a 1, or my 1 flipped into a 0. That's not very useful for you if the noise takes away your confidence in what I'm sending you. Instead, I can send you three of the same in a row, and then you can do some majority voting. If you see more 0s than 1s, you have very high confidence that the message I was trying to pass on to you was a 0, and you could even find the places where you think an error occurred, flip them, and continue using that to pass on a more complex message to someone else. That is what we do with logical qubits: We take a larger number of physical qubits to encode one or more logical qubits, and we're able to interrogate them throughout a computation, catch any errors and correct them.
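To make the repetition-and-majority-vote idea concrete, here is a minimal Python sketch. It is purely illustrative: the helper names and the 5% flip probability are assumptions for the example, not anything from QuEra's stack, but the scheme is the one Alex describes (redundantly encode, transmit through noise, majority-vote).

```python
import random

def encode(bit: int, distance: int) -> list[int]:
    """Redundantly encode one logical bit as `distance` physical copies."""
    return [bit] * distance

def noisy_channel(bits: list[int], p_flip: float) -> list[int]:
    """Independently flip each physical bit with probability p_flip."""
    return [b ^ (random.random() < p_flip) for b in bits]

def decode(bits: list[int]) -> int:
    """Majority vote: the logical bit is whatever most copies agree on."""
    return int(sum(bits) > len(bits) / 2)

random.seed(1)
trials = 100_000
for d in (1, 3, 5, 7):  # number of redundant copies (the code distance)
    # We always send logical 0, so any decoded 1 is a logical error.
    failures = sum(decode(noisy_channel(encode(0, d), 0.05)) for _ in range(trials))
    print(f"distance {d}: logical error rate ~ {failures / trials:.5f}")
```

Running it shows the logical error rate dropping sharply as the number of copies grows, the same suppression-by-redundancy that quantum codes achieve, though they must manage it without directly copying quantum states.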
Konstantinos Karagiannis: That's a clear explanation. And the catch there is what the ratio is, depending on the technology and everything, and we'll get to that. There's a lot of nuance to achieving logical qubits. Part of it is how good the initial qubit is. Then there's that whole idea of pooling them together to do the work. Can you give us a little bit about what's behind the Planck-scale curtain here — the ratio you're shooting for, and how the error correction works?

Alex Keesling: Going back to this idea of redundant encoding: In principle, the more physical qubits you use to encode the logical qubit, the more confidence you have that you'll be able to find and correct the errors faster than they can affect the computation. That ratio, that overhead, is very important. To hearken back to the beginning of the conversation and the results from last year, one of the things that was very cool about this demonstration is that it showed, with different ways of encoding the logical qubit into the physical qubits, the performance of different quantum error–correction codes and of different code distances within the same quantum error–correction code. One of the results you can see if you go to Nature and pull up this paper is that for the workhorse of quantum error–correction work over the last several decades — the surface code — implementing it the way it was done in this work shows that a larger overhead actually leads to better performance. This is because, again, the more physical qubits you have, the better your ability to find those errors, which are ideally occurring at a low rate, and suppress them. This is the key enabling feature of error correction: You can, in principle, exponentially suppress the effective error by going to a higher encoding. Of course, this also depends on the fidelity of the physical operations. The results from late last year were enabled by a jump in that fidelity. Until last year, for entangling operations between two physical qubits using neutral atoms, the platform we have been developing for many years, the best reported numbers were in the ballpark of 97%, 98%, close to 99% fidelity. But to do efficient error correction, you need to suppress that error and increase the fidelity to the point where you have a chance to catch and correct errors faster than they appear. There was a new way of performing these physical entangling operations that took the fidelity to above 99%. In fact, there was a result of 99.5%, and that's what enabled the error-correction work to happen. There are a few reasons why this is the case, and the pace from the breakthrough in high-fidelity operations to demonstrating operations with logical qubits was just a few months. This speaks to the fact that the architecture that has been developed on the neutral-atom platform is incredibly flexible and very powerful.
This is done by leveraging the fact that it's relatively easy to put more physical qubits together — you can start with hundreds of them — and it's also relatively easy to control them, and to control them with high fidelity, regardless of which pair of qubits you're working with. That makes it so that any little improvement in the number of qubits, and also in the fidelity of the operations, very quickly translates throughout the entire system. This is what gave this platform the ability to show that working with logical qubits is possible, that exploring many types of quantum error–correction codes is possible, and that there is a path to improving logical-operation fidelity by increasing the code distance and, of course, by increasing the physical-operation fidelity. Now, because neutral atoms are a pretty clean qubit that we can model very well, we understand where the dominant sources of imperfection are for that remaining half a percent or so, and that's something that can be improved through better engineering. One of the things we're working on at QuEra is translating these developments from the academic world into products that we'll bring to customers, first by matching the technical capabilities and then by bringing them to end users and helping them understand what kinds of approaches to error correction there are. As I said, there's not just a single way to do quantum error correction. There are many codes. The overhead is something users will be able to play around with, and we see a lot of opportunity in some of the newer ideas for implementing quantum error correction with quantum low-density parity-check codes, or QLDPC codes, which have gathered a lot of attention in the last year or 18 months. We have already started exploring how to implement these on the neutral-atom platform, and we're very excited for users to be able to work with them soon.

Konstantinos Karagiannis: Could you explain code distance? It might not be something people have heard.

Alex Keesling: Code distance is, if I want to send you a 0, I could send it as a single bit or qubit, or I could send you three copies of the same thing, or five copies, or seven copies, and you can always do some kind of majority voting. It's basically, how many errors do you need to introduce in the physical qubits before you would misidentify what the logical qubit encoded?
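As a worked version of that definition (a sketch for the simple repetition picture above, not a formula from the paper): a distance-d repetition code tolerates up to t = ⌊(d − 1)/2⌋ flipped physical qubits before majority voting misidentifies the logical state, so for a physical error rate p the logical error rate is dominated by the smallest uncorrectable event:

$$
t = \left\lfloor \frac{d-1}{2} \right\rfloor, \qquad
p_{\text{logical}} \approx \binom{d}{t+1}\, p^{\,t+1}
$$

For d = 3 and p = 0.005 (roughly the 99.5% fidelity mentioned earlier), that gives about 3 × 0.005² ≈ 7.5 × 10⁻⁵, and the suppression compounds as d grows, which is why raising the code distance pays off once physical fidelity clears the threshold.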
Konstantinos Karagiannis: That's an important number to know. Can we talk about the device that was used in the Harvard project? I got to come and see Aquila in person. That was cool — I loved that. What was used in that project, and is it going to be available on the cloud, or is it completely on the side?

Alex Keesling: It's important to clarify that Aquila was inspired by a system that was first built at Harvard University. This was the work of my Ph.D. and of others over the last several years. Aquila took all the learnings and know-how many of us developed working at the university and translated it into a product that is now available on the cloud for everyone to access. The system that was built at Harvard still resides there; it has been modified over the years, and it is what was used to produce the results you can find in the Nature paper.

Konstantinos Karagiannis: We'll link that in the show notes, of course, so people can find the paper.

Alex Keesling: Great. That system is still at Harvard and is used for research purposes. Here at QuEra, we're building new systems. Aquila will continue to be available on the cloud — we're not going to take it offline to make changes — but we are building new systems to translate those advances that have been happening at the university and to integrate more of the controls and other systems we have been developing here at the company. These are the systems we're going to make available to customers, bringing them these new capabilities of digital operation with neutral atoms in an architecture that can support quantum error correction.

Konstantinos Karagiannis: The second generation you're working on to make available to customers is going to have more than Aquila's 256 qubits and, based on your roadmap, will contain 10 logical qubits that customers will be able to access this year.

Alex Keesling: That's exactly right. We're starting with 10. There's still a lot of work ahead, but we're laying the foundation for customers to be able to build their favorite logical qubits and start running algorithms on them. One of the things we're targeting for this year is to provide software, a logical-qubit simulator, to help users along their journey to understand the advantages of the neutral-atom platform — how the ability to move atoms, and to move blocks of atoms, for example, can simplify a lot of computations by parallelising many of the operations — and to help them develop their own applications in preparation for the hardware becoming available, so they can run those directly on the hardware. This is something we're going to do later this year, and as you say, we're going to be going for up to 10 logical qubits. One of the things we're looking at is that users will have the ability to run NISQ-type applications — the traditional digital quantum computing operation they might have been accessing already over the last few years. We want users to be able to apply the same kinds of algorithms they've already developed for the NISQ era and then start translating some of those to run on logical qubits, so they can see for themselves how performance changes as they move from physical to logical qubits — and continue developing these applications with us over the next few years as we go from 10 to 30 to 100 logical qubits.

Konstantinos Karagiannis: That's exciting. Anytime you emulate qubits, you get logical qubits, because no noise is introduced. But you're creating a simulator that also shows the steps to get you to the logical qubit. Otherwise, what's anyone going to learn?

Alex Keesling: That's exactly right. We also want users to have an understanding of what the physical errors translate to for the types of algorithms they're going to be running.

Konstantinos Karagiannis: Yes, that's an exciting approach. You said it — 10, 30, 100. We'll go through these quick steps. Then the third generation, which would be next year — that would be the 30 logical qubits, based on, let's say, roughly 3,000 physical — about 100 to one. What kind of applications do you see running on that — what you call prototype applications in your timeline?

Alex Keesling: You could build a classical emulator where you say, "What do I do with 30 perfect qubits?" The reality is that 30 perfect qubits I could run on a classical computer. I can run an emulation of that.
What we're looking for is for people to start porting over and developing their applications to be efficient on logical-qubit operation, because quantum error correction is an incredibly powerful tool that, as we were saying, allows you to constantly improve the performance of the device so it truly becomes scalable. But it comes at the cost of more complex operation internally, because there is this concept of constantly checking for errors and correcting them. At 30 logical qubits, what we expect to see is the applications that were developed and tested with 10 logical qubits being scaled up, so we can see how the performance evolves with an ever-increasing number of logical qubits. At this point, again, if you take a classical computer, you'll be able to predict exactly what you should get out. And that is great, because it will give people confidence that what they are getting out is exactly what they should be getting. Today, what we have with quantum computers is a very powerful tool that is hard to predict. In this NISQ era, one of the things we've seen is that there's a lot of optimisation to be done where you get a result out and then have to make an assessment: Is this good or not? Did the quantum computer do the right thing or not? But using logical qubits, it will be a lot easier to say, yes, the computer did what it had to do. Just as with classical computers, where you write your code and have confidence that the computer is going to do the right thing — that's what we're evolving toward with quantum computers. At 30 logical qubits, we expect to see a lot of the early applications that people have been ideating for the last several years translating to logical qubits, seeing that this is performing correctly, and then porting it over to, in the fourth generation, the larger number of qubits — the 100 logical qubits — where you can no longer have a classical computer predict exactly what the quantum computer will do. It's this journey, and the path of having constant confirmation that the device is doing the right thing, so that when you can no longer predict the behavior by classical emulation, you still have confidence that it is doing the right thing — and that it is doing something unique, not achievable with any other device in the world.

Konstantinos Karagiannis: That's a great point. Fourth generation — the question I was going to ask next is, once you pass 50 logical qubits, give or take, you're in uncharted territory. We just can't get that to run on classical hardware.

Alex Keesling: And that's exciting.

Konstantinos Karagiannis: That's beyond exciting. That means, in 2026, we'll have access for the first time, if all goes well, to a number of qubits that represents truly uncharted territory — stuff that even if we tried to simulate it with the best tensor networks, for example, we'd still run into trouble once we try to contract them. This is going to be an amazing time. That's where the first gate-based advantage might start appearing. That's the general idea.

Alex Keesling: And, of course, some of these things will be gradual changes. We continue to increase the number of physical qubits.
That does mean users will have a choice to make, especially in this early stage of the quantum error–correction era. They can choose to use more physical qubits without performance guarantees at the end, without being able to say with confidence that a given run of the algorithm didn't incur an error, but still develop more complex algorithms and look at the average performance. Or they can start working with logical qubits, where you reduce the number of available qubits but gain confidence in what the device is outputting. It's going to be a lot of going back and forth, developing applications on one side that may not port over until a year or two later, but where the know-how users are generating is still very valuable, because they have a direct line of sight to when they'll be able to run them on the logical-qubit processors.

Konstantinos Karagiannis: That's a good point. This is simplifying what might be coming in the future, but would there be some kind of slider where you could say, "I want to go to 500 qubits from the 3,000 and get this half-logical effect," or something like that?

Alex Keesling: That's something where I would love to see how end users want to engage with these capabilities. In the early days, you can do error detection without doing error correction, and there, the encoding might be lower. Or you can try to suppress the error so much, by having a large overhead, that you're left with even fewer logical qubits — but you have that much higher confidence in the results that are coming out.
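The detection-versus-correction trade-off Alex just described can be sketched in the same toy Python setting as before, again an illustration under assumed parameters rather than how any real code runs on hardware. Detection flags runs where the copies disagree and throws them away; correction keeps every run and majority-votes.

```python
import random

def run(distance: int, p: float, mode: str, trials: int = 100_000) -> None:
    """Send logical 0 as `distance` noisy copies; decode by detection or correction."""
    random.seed(0)
    kept = wrong = 0
    for _ in range(trials):
        bits = [int(random.random() < p) for _ in range(distance)]  # flips on a sent 0
        if mode == "detect":
            if len(set(bits)) > 1:   # copies disagree: flag the error, discard the run
                continue
            kept += 1
            wrong += bits[0]         # every copy flipped: an undetected logical error
        else:                        # "correct": keep every run and majority-vote
            kept += 1
            wrong += int(sum(bits) > distance / 2)
    print(f"{mode}: kept {kept / trials:.1%} of runs, logical error rate {wrong / kept:.2e}")

run(3, 0.05, "detect")   # fewer usable runs, but far higher confidence in each
run(3, 0.05, "correct")  # keeps every run at the cost of more residual error
```

With three copies and a 5% flip rate, detection discards roughly 14% of runs but leaves an undetected-error rate near 10⁻⁴, while correction keeps every run at a residual error near 10⁻². That is the slider in miniature: confidence traded against usable output.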
There's no one path to quantum error correction. It's like the beginning of the day — the sun is rising, and we're just starting to get a little bit of visibility. But in the next few years, what we're going to see is a lot of co-development of hardware, of quantum error–correction codes and of algorithms, and using the right combination of algorithm, quantum error–correction code and hardware will allow us to extract more early value from the devices. It might be that for algorithm A, a particular quantum error–correction code is more applicable because of the types of operations that are simple to do in that code, whereas for algorithm B, a different error-correction code will be more powerful. Of course, we're working toward a future where no one needs to think about this. Eventually, you'll just know how many logical qubits you have, everything else will happen in the background, and it will all work great, but there's a lot of learning and development to be done in the next few years. There's going to be a lot of input from end users about what the right applications are, and new ideas coming out of industry but also out of academic research. Supporting this commercial, academic and even government ecosystem is going to be what makes quantum computing advance at an ever-increasing pace.

Konstantinos Karagiannis: I've got to tell you, it's almost heartwarming — which is a nerdy thing to say, but heartwarming — to see a timeline or roadmap that has both logical and physical qubits on the same bar. It's cool. If you had to guess, where would you see 1,000 logical qubits coming in?

Alex Keesling: I'm super optimistic. We'll see 1,000 logical qubits before 2030. The conversation we're having today — of all the people you've had on your podcast, how many of them would have told you that we would be having this discussion today? We're talking about commercial devices with 100 logical qubits within just the next few years. Progress is accelerating, and we're going to be at 1,000 logical qubits before 2030. There's the pure brute-force approach we know can take us there. But there's so much more room for improvement and acceleration coming from new quantum error–correction codes and from new, clever ideas developed side by side with the hardware, so that fewer physical qubits are required for a particular number of logical qubits and the operations happen in a much more efficient way. That is what happened last year. It was a step change for us in the architecture — in how we build and operate the neutral-atom quantum computers — that allowed all of this to happen so quickly, and there are many more of these very clever ideas around the corner. There's a brute-force path that can get us there, and everything else on top just accelerates it.

Konstantinos Karagiannis: I'm excited. I felt it brewing. I knew this was going to be a big year. Thanks for not letting me down. For everyone listening, you can check out the links in the show notes. Like I said, we'll link the paper, and you can go see the roadmap for yourself. And as Alex said, you can check out Aquila now and see where it is today and what's coming in the future. Thank you so much for being a repeat guest, and I'll have you on again when you break another threshold barrier with your machine.

Alex Keesling: Happy to be here, and happy to come back anytime. This has been great.

Konstantinos Karagiannis: Now it's time for Coherence, the quantum executive summary, where I take a moment to highlight some of the business impacts we discussed today in case things got too nerdy at times. Let's recap. QuEra has a production neutral-atom quantum computer, Aquila, which has been available in the cloud for a while now. The machine was initially developed based on Ph.D. work by Alex Keesling and others at Harvard. The neutral-atom system still at Harvard was used to create an astonishing 48 logical qubits recently, much to the industry's surprise. QuEra has been working to implement these techniques in future generations of neutral-atom systems that end users can access. Before this year ends, you'll be able to use up to 10 logical qubits. There are several ways to create logical qubits, but you usually need some combination of minimising noise and using consensus to agree on which results are valid. Let's simplify the concept with an example: If I told you I can suppress errors enough to be 99% certain a 0 or 1 that I transmit will be correct, and I then send you five copies of, say, a 1, you can look at the results and agree that most of them coming through as a 1 means your answer is 1. An occasional 0 can be ignored by consensus. Quantum emulators or simulators don't have any errors unless you introduce a noise profile to match specific NISQ systems. In the past, emulating qubits on a classical machine meant you could use up to 50 or so perfect qubits, depending on hardware. But that's been roughly the barrier: Every simulated qubit doubles the required system resources, so if a supercomputer can handle 50-ish qubits, you'd need two supercomputers working in perfect tandem to add just one more qubit.
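The doubling is easy to check with back-of-the-envelope arithmetic: a full statevector of n qubits holds 2^n complex amplitudes. A minimal sketch, assuming 16 bytes per amplitude (double-precision complex, a common but not universal choice):

```python
# Memory needed to hold a full statevector: 2**n amplitudes at 16 bytes each.
for n in (30, 40, 50, 51):
    gib = 2**n * 16 / 2**30  # convert bytes to GiB
    print(f"{n} qubits: {gib:,.0f} GiB")
```

Thirty perfect qubits fit in about 16 GiB, laptop territory; 50 qubits need roughly 16 million GiB (about 16 PiB), and qubit 51 doubles that again. Hence the barrier.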
Because of this barrier, things will get even more interesting as we approach 2026. If Alex and the team are correct, we'll soar into uncharted territory with 100 logical qubits. As a result, quantum advantage will likely appear in many types of gate-based use cases. It's time to give QuEra's systems a try so you're ready to shift real-world use cases to the platform. That does it for this episode. Thanks to Alex Keesling for joining to discuss QuEra, and thank you for listening. If you enjoyed the show, please subscribe to Protiviti's The Post-Quantum World, and leave a review to help others find us. Be sure to follow me on all socials @KonstantHacker. You'll find links there to what we're doing in Quantum Computing Services at Protiviti. You can also DM me questions or suggestions for what you'd like to hear on the show. I hope to gather those and maybe do an AMA episode soon. For more information on our quantum services, check out Protiviti.com, or follow Protiviti Tech on Twitter and LinkedIn. Until next time, be kind, and stay quantum-curious.