Science
Why Today’s Quantum Computers Are Still Too Analog to Matter
Dr. Shu-Jen Han is the Chief Technical Officer of SEEQC, where he leads the company’s global, multidisciplinary research and development effort to build a chip-based digital quantum computing system. Since joining SEEQC in 2021—initially as Vice President of Engineering and now as CTO—Han has overseen both the long- and short-term technology roadmaps, guiding the integration of superconducting electronics, cryogenic control, and scalable system architecture. His role places him at the center of one of quantum computing’s most difficult challenges: translating laboratory breakthroughs into machines that can operate reliably at scale.
Before SEEQC, Han spent decades at the frontier of semiconductor and nanoscale device research. He began his career at IBM’s semiconductor research division, working on advanced CMOS technologies, and later rose to manage the nanoscale device and technology group at IBM’s T.J. Watson Research Center, where his work focused on post-silicon transistor research and nanoelectronics. He subsequently joined HFC Semiconductor, serving as Senior Director and eventually Associate Vice President, where he led multiple generations of MRAM development from early research through product qualification—experience that now informs his emphasis on manufacturability and systems engineering in quantum hardware. Han earned his PhD in Materials Science and Engineering, with a minor in Electrical Engineering, from Stanford University. He has authored more than 100 scientific publications and holds over 200 issued U.S. patents.
In this interview, I spoke with Dr. Han about SEEQC’s digital approach to superconducting quantum systems and why he believes it represents a necessary break from today’s dominant control paradigm. He contrasts conventional microwave, room-temperature control—which is effectively analog, wiring-intensive, and increasingly unmanageable at scale—with SEEQC’s model of superconducting digital pulses generated near the qubits themselves at millikelvin temperatures. By localizing control and readout, Han argues, quantum systems can reduce latency, cut cabling and power demands, and make real-time quantum error correction feasible as errors compound across thousands of gate operations.
Han is blunt about the scale of the problem ahead. Fault-tolerant quantum machines, he suggests, may require between 100,000 and one million qubits, a threshold at which today’s approaches to bandwidth, wiring density, and power consumption become untenable without chip-level integration. He discusses why conventional systems could demand tens to hundreds of megawatts and points to recent results, such as Google’s Willow experiments, as signals that scaling pressures are no longer theoretical. The conversation also touches on SEEQC’s partner-driven business model, its growing collaborations in Taiwan, and the broader question of how quantum computing can move from scientific promise to industrial reality.

Scott Douglas Jacobsen: When discussing digital quantum computing systems, how do you distinguish between quantum computing as a domain of mathematical models and algorithms, and quantum computing as a physical hardware system? Where does that boundary lie, and how do these two layers—abstract computation and physical implementation—ultimately converge in frontier electronic or digital quantum architectures?
Dr. Shu-Jen Han: Today’s quantum computing is more or less analog computing. In classical computing, we are familiar with CPU- and GPU-based digital systems, with deterministic 1s and 0s. In quantum computing, you have superposition, which is a probability between 1 and 0, so it is closer to analog. The hardware of today’s quantum computing uses microwave pulses to manipulate qubits, and many existing systems are based on this analog scheme. The problem, from our point of view, is that this approach is not scalable, because microwave pulses must travel from room temperature down to the millikelvin temperatures where the qubits must remain extremely cold. There is another way, which we call digital quantum computing. Instead of relying on analog microwave signals, we use superconducting digital electronics to generate coherent digital pulses that manipulate the qubits. That is the key distinction between our technology and conventional analog approaches.
Jacobsen: When you bring together fundamentally different approaches to computation—not merely different algorithms within classical linear or parallel models, but distinct computational paradigms altogether—how do you integrate them coherently within a digital quantum computing architecture? Given that some computational methods are more efficient or appropriate for specific problems than others, how do you determine which form of computation belongs where, and how do you optimize that integration without overengineering simple tasks?
Han: One thing I want to emphasize is that our company is not trying to replace existing quantum computing; those systems still run the algorithms. We are trying to make the system more scalable. Of course, our technology also brings an additional advantage: a digital approach to controlling and reading out the qubits. Going back to your question, we can use that advantage to enhance specific algorithms. One good example is our unique on-chip digital control and readout, located directly next to the qubits.
As I mentioned in answering your first question, with the current approach you need to send the signal down from room temperature and read it back up to room temperature. There is a huge delay between when you send signals to the qubits and when you read signals back from them. But if we can do everything next to the qubits, we do not have that delay. Because of that, we can use our approach to enhance many so-called error-correction methods. You may have heard about quantum error correction: qubits have a lot of errors, and if you do not correct them, the computation is not useful. That is one of the reasons you need to keep the qubits at 10 millikelvin; at even slightly higher temperatures, thermal noise will corrupt the qubit information.
But even at 10 millikelvin, the qubits are still very noisy. So people are pushing so-called fidelity, which measures how often a gate operation runs without error, and they are already reaching 99.99%. But even that tiny error rate becomes a problem once it starts to accumulate. A computation involves thousands of operations we call quantum gates; each quantum circuit consists of thousands of them. If every gate has a tiny error and you multiply that by 1,000 or 10,000 gates, the result will not be correct. That is why we need to do quantum error correction along the way, continuously correcting errors as the computation runs.
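To put numbers on that accumulation (the 99.99% fidelity and the 10,000-gate circuit are the figures Han cites; the simplifying assumption that gate errors compound independently is added here for illustration), the probability that such a circuit runs with no error at all is roughly

$$(0.9999)^{10{,}000} = e^{10{,}000\,\ln(0.9999)} \approx e^{-1.0} \approx 0.37,$$

so even at four-nines fidelity the computation would finish error-free only about a third of the time, which is why errors have to be corrected during the computation rather than only at the end.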
But you can imagine that if you have to do this error correction by sending the signal out, using room-temperature electronics to compute the correction, and sending the corrected data back in, it is very wasteful of resources. Also, sometimes you cannot correct an error promptly, because that round trip introduces latency. So if we can do all of the control and readout next to the qubits, which is what our technology can potentially do, we can do a kind of real-time error correction: when an error forms, we detect it and correct it immediately. With our digital approach, we do not even need to send the signal out. That enables a new type of quantum error correction and significantly improves the robustness of the quantum computer. That is one example of how our technology can enhance an algorithm.
Jacobsen: Is there an upper bound—practical or theoretical—on how much quantum error correction can be performed while computations are actively running? At what point do the demands of live error correction begin to constrain the system itself?
Han: Quantum error correction is very powerful. At a high level, the concept is that you use redundant qubits. Once you measure enough qubits, you can think of it as something similar to a parity check. Put simply, if the majority outcome is one, you say the data qubit is one; if the majority is zero, you say the data qubit is zero. As long as you measure enough qubits and they are all entangled together, this works. For example, you might have 100 physical qubits representing a single data qubit. If they are all entangled and supposed to be one, some will flip to zero because of errors. But if you measure enough of them, from a probability point of view you can say there is a high likelihood that the data qubit should be one, or zero in the opposite case. That is the basic idea of quantum error correction.
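As a rough classical analogy to that majority-vote intuition (a sketch of the redundancy idea only, not SEEQC’s or anyone’s actual error-correction scheme; the 5% flip probability and the copy counts are assumptions chosen purely for illustration), a short Python simulation shows how quickly a wrong majority becomes unlikely as redundancy grows:

```python
import random

def majority_recovers_one(n_copies: int, p_flip: float) -> bool:
    """One trial: the intended value is 1; each redundant copy independently
    flips to 0 with probability p_flip. Return True if the majority vote
    still reads 1."""
    flips = sum(random.random() < p_flip for _ in range(n_copies))
    return flips <= n_copies // 2

def wrong_majority_rate(n_copies: int, p_flip: float, trials: int = 200_000) -> float:
    """Estimate the probability that the majority vote returns the wrong value."""
    failures = sum(not majority_recovers_one(n_copies, p_flip) for _ in range(trials))
    return failures / trials

if __name__ == "__main__":
    p = 0.05  # assumed per-copy error probability, for illustration only
    for n in (1, 5, 21, 101):
        print(f"{n:>4} copies -> wrong-majority rate ~ {wrong_majority_rate(n, p):.6f}")
```

With a single copy the readout is wrong 5% of the time; with 101 copies a wrong majority essentially never occurs in the simulation, which is the classical shadow of why adding physical qubits per logical qubit drives the logical error rate down.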
In principle, there is no fundamental limit to how accurate you can be. It is a resource issue. If you could use an unlimited number of physical qubits to represent one data qubit, you could achieve remarkably high accuracy. But that is the ideal case. In practice, that is why, when people talk about practical or utility-scale quantum computers, they often say they need on the order of 100,000 to even 1 million qubits. It is not that all of those qubits are doing computation. The majority of them are doing quantum error correction.
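For readers who want the standard quantitative version of that resource trade-off (this is the textbook scaling relation for surface-code-style error correction, not a number from the interview), below the error threshold the logical error rate is expected to fall exponentially with the code distance $d$ while the physical-qubit cost grows only quadratically:

$$p_{\text{logical}} \approx A\left(\frac{p_{\text{physical}}}{p_{\text{threshold}}}\right)^{\left\lfloor (d+1)/2 \right\rfloor}, \qquad n_{\text{physical}} \sim 2d^{2}.$$

Each step up in distance increases the qubit count modestly but suppresses the logical error rate by another constant factor, which is how estimates of hundreds to thousands of physical qubits per logical qubit, and 100,000 to one million qubits overall, arise.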
Even though, in theory, you can keep increasing accuracy by adding more physical qubits, implementing this in practice is extremely difficult. That is one of the reasons we formed SEEQC: to resolve this scalability issue. SEEQC stands for Scalable Energy Efficient Quantum Computing, and scalability is our first mission. If you want to use so many physical qubits, the first problem is how to connect them. In the conventional approach, you have to send microwave signals from room temperature down to millikelvin temperatures and read the qubits back up to room temperature. That requires long cables running from room temperature to millikelvin. If you are talking about 100,000 to one million qubits, there is no way to put millions of cables into a dilution refrigerator. There is simply no space, and the heat generated by all of those cables is unacceptable.
Another major concern is bandwidth. You send data in and read data out, and the bandwidth requirements can be on the order of tens or even hundreds of terabits per second. There is no interface today that can accommodate that kind of bandwidth. Even companies like NVIDIA do not have interfaces designed for that scale. These are engineering problems, fundamental ones even, that block our ability to build utility-scale quantum computers using conventional approaches.
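A back-of-the-envelope check of that bandwidth figure, using illustrative assumptions that are not from the interview (a million qubits, one readout of each qubit per roughly microsecond-long error-correction cycle, and tens of bits of raw digitized data per readout):

$$10^{6}\ \text{qubits} \times 10^{6}\ \tfrac{\text{readouts}}{\text{s}} \times (10\text{ to }100)\ \tfrac{\text{bits}}{\text{readout}} \approx 10\text{ to }100\ \tfrac{\text{Tb}}{\text{s}},$$

which lands in the tens-to-hundreds-of-terabits-per-second range Han describes, and in a conventional architecture all of it would have to cross the boundary between the cryostat and room temperature.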
That is where SEEQC comes in. As I mentioned earlier, we do not send all signals out to room temperature. Many signals are generated locally, next to the qubits, using our digital approach. We do qubit processing locally, including control, readout, and error correction. By doing that, we eliminate many of these fundamental constraints.

Jacobsen: Energy consumption increasingly defines the limits of modern computation. How do the energy curves for different forms of computation compare as workloads scale—from simple calculations to large-scale AI inference or prolonged, intensive computation? In that context, how does SEEQC-style quantum computing compare, in terms of wattage and efficiency, to conventional quantum architectures and classical high-performance systems?
Han: Energy efficiency is a critical issue and central to our company’s mission. As I mentioned earlier, SEEQC focuses on two core challenges: scalability and energy efficiency. Scalability is what I explained in the previous question, and energy efficiency is closely related to it. In the current approach, most of the electronics are built at room temperature, mainly using high-performance FPGA-based electronics, along with dilution refrigerators. These are incredibly high-power-consumption systems. Based on our estimates, if you consider a medium-scale qubit system—which is generally what is required for fault-tolerant quantum computing—you are talking about tens of megawatts up to 100 megawatts per system, assuming you can even build it. That level of power consumption is comparable to a modern AI data center. Today’s AI data centers can consume hundreds of megawatts, even approaching gigawatts, so a single quantum computer consuming around 100 megawatts is not far off. From an energy perspective alone, that approach is not scalable.
Our technology is very different. We reduce energy consumption by roughly four to five orders of magnitude. We still require some room-temperature electronics to control our digital chips, but we drastically reduce their number. We also reduce the number of dilution refrigerators needed, because our solution is chip-based and integrates much of the functionality directly on the chip. Instead of needing many refrigerators to support extremely large numbers of qubits, integration allows us to reduce that infrastructure significantly. This lower overall energy consumption makes large-scale quantum computing more realistic and approachable.
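Taking the figures in this answer at face value (the roughly 100 MW baseline and the four-to-five-order-of-magnitude reduction are Han’s numbers; combining them is simple arithmetic):

$$\frac{100\ \text{MW}}{10^{4}} = 10\ \text{kW}, \qquad \frac{100\ \text{MW}}{10^{5}} = 1\ \text{kW},$$

which would put the control and readout electronics of a fault-tolerant machine on roughly the scale of a single server rack rather than a dedicated power plant.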
Our approach is strongly inspired by microelectronics and semiconductor engineering, which is my background. In classical microelectronics, you do not connect every transistor with a separate physical cable. If you tried to build a processor that way, it would be impossible. That is essentially what many current quantum computing approaches resemble. What we are doing is making something analogous to an integrated circuit for quantum computing. Instead of using physical cables to connect each qubit, we integrate qubits directly with control and readout electronics on the same chip. In our case, this integrated circuit is not based on CMOS silicon technology but on superconducting single-flux-quantum digital electronics. You can think of it as a digital circuit with extremely low power consumption. By integrating qubits with local control, readout, and processing, we remove fundamental barriers related to energy, wiring, and scale. Based on the history and lessons of microelectronics, this kind of integrated-circuit approach is the only realistic way to scale quantum computing.
Jacobsen: Looking ahead, where do you see this technology by 2026, and how do you expect it to evolve over the rest of the decade? Feel free to answer either in terms of the trajectory of quantum computing as a whole, or specifically in terms of SEEQC’s approach to hardware–software integration and digital quantum control.
Han: That is a good question. Our business model is different from that of most quantum computing companies. Many companies are trying to build their own large quantum systems to sell, or to place in data centers and offer as cloud services. They may have a unique qubit technology or a software advantage, but the goal is to deliver a full system to end users. We take a different path: we focus on developing unique chip technologies—qubit control, qubit readout, and error correction chips—and integrating them into the systems of large quantum computing companies. Those companies are our customers; we do not sell directly to end users. The reason is that, internally, they know their current approaches may not be scalable, even if they do not say that publicly. Our vision is to integrate our technology into their large systems so that, when they deliver fault-tolerant systems, our technology is at the core.
Jacobsen: Public discussions of quantum computing often focus narrowly on cryptography—particularly the idea of breaking classical encryption—but pay far less attention to constraints such as energy use, system stability, or scalability. Beyond cryptography, what do you see as the most significant general and technical limitations facing quantum computation today?
Han: There are two main questions. The first is whether we can build a large-scale quantum computer at all. The second is whether, once we build it, it will be useful. The first question is easier to answer. So far, we cannot build a truly large-scale quantum computer, but there are approaches to get there, including ours. We provide a more scalable approach from the control, readout, and integration perspective. However, the field still needs to address other challenges.
For example, qubit quality is still not where it needs to be. We do not specialize in making qubits; we specialize in making control and readout electronics that integrate with qubits. Other companies focus on qubit fabrication, but overall, qubit quality still needs significant improvement.
When you build a very large-scale qubit array, system performance is not determined by the average or best-performing qubits. The worst qubits, the tail of the distribution, determine it. That is how large systems behave. The field still needs to improve qubit quality and tighten the performance distribution by eliminating the worst-performing qubits. Once that happens, scaling becomes much more realistic. There has been significant progress in recent years. For example, there has been a lot of discussion around Google’s Willow chip, which reflects meaningful advances, even if many people have not yet examined it in depth.
My personal view is that Google’s Willow chip really sparked the recent surge in interest in quantum computing. If you look at the market, many quantum computing companies began to receive much more attention, even reflected in stock prices starting around 2025. One of the most significant trigger points was Willow. This is not hype; it was an important breakthrough demonstration. What Google showed is that when you scale up the number of qubits, the error rate can actually start to decrease. As I mentioned earlier, traditionally, when you scale up qubits, quantum error correction does not work well because qubit quality is non-uniform. There are always bad qubits—the tail of the distribution—and those worst qubits determine overall system performance. When you scale up with those bad qubits, error correction fails, and logical qubit error rates remain high.
What Willow demonstrated, for the first time, is that as the number of qubits increases, the logical qubit error rate for real data qubits starts to drop significantly. That suggests their qubit quality has reached a level where scaling becomes feasible. They still have only a few hundred qubits, so it is not yet a large-scale system, but it is a very strong proof point. It also gives our work purpose. If the industry now has qubits ready to scale, SEEQC’s technology can play a significant role in enabling that scalability.
Jacobsen: As a final question, are there any broader reflections you would like to leave readers with based on today’s discussion—particularly about where the field is headed or what policymakers, industry leaders, or the public may be overlooking?
Han: One additional point relates to collaboration with Taiwan. I am not sure whether Davis mentioned this to you, but Taiwan has become extremely important in this space. Taiwan entered the quantum technology field a bit later, but it arguably has the world’s strongest semiconductor ecosystem. I did my undergraduate studies in Taiwan before starting my PhD, and I spent much of my early life there. Because we are doing chip-based quantum computing, we want to leverage Taiwan’s semiconductor expertise. Even though our technology is not CMOS, many CMOS semiconductor techniques can still be applied to our platform.
That is why we are actively leveraging chip resources from Taiwan. We now have multiple collaborations there, including recent work with E3 in Taiwan. We have our own foundry, but we also want a second foundry, and we are working on CMOS design and room-temperature electronics collaborations with Taiwanese companies. There are many active engagements with Taiwanese companies and organizations right now.
Jacobsen: Thank you for the opportunity and your time, Shu.