Our work shows that there is a need for quantum communication theory and computational complexity to converge more fully, with many more exciting questions ahead!
First, we show that there exists a nearly maximal separation between the computational and unbounded two-way quantum capacity. Second, we show a transition in computational capacity from nearly maximal to zero when the complexity (in terms of Choi rank) goes from polynomial to super-polynomial.
We give a first definition of a computational capacity measure and show that it relates to the computational distillable entanglement for "efficiently stretchable" channels. We give a channel model for which we can tightly bound these quantities to establish two key results:
Last year, I became interested in the question of what happens if we impose computational efficiency on information theory. In our latest paper, we analyze how this changes the amount of information that can be transmitted over a channel -- spoiler alert: a lot! scirate.com/arxiv/2601.1...
I also wanted to add a shout-out to Álvaro, Thomas and Jan, who put out a work in a similar spirit today: arxiv.org/abs/2509.21308. It turns out that our results mostly complement each other quite nicely, so please have a look there as well!
A big thanks to my coauthors Asad, Jacopo, Lorenzo, Sofiene and Jens.
This is in stark contrast to how computational information theory has been approached to date, with a focus on single-shot statements. These are of course more exact, but also much harder to manipulate and build intuition for.
We believe that our definition has a nice level of abstraction that captures the essence of information theory under computational constraints while at the same time having the look and feel of unbounded information theory.
I invite you to discover many more exciting things in the paper, like computationally measured divergences, a computational Stein's lemma, and a computational Rains bound on efficiently distillable entanglement.
Another highlight is that we obtain separations in hypothesis testing, both between computationally bounded and unbounded observers and between classical and quantum ones. This means there exist pairs of classical distributions that can be distinguished with access to a quantum computer but not classically.
We can use the computational relative entropy as a mother quantity. We exemplify this by defining a computational entropy S̲(ρₙ) := −D̲(ρₙ‖𝟙) and showing that it quantifies the best rate at which we can compress a state under complexity restrictions -- just like the entropy in the unbounded setting.
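For comparison, the unbounded analogue of both statements is textbook material:

S(ρ) = −D(ρ‖𝟙) = −Tr[ρ log ρ],

and Schumacher compression identifies S(ρ) as the optimal number of qubits per copy for compressing a memoryless source ρ.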
The computational version of Pinsker's inequality.
For example, we obtain a computational version of Pinsker's inequality, which looks exactly like the unbounded relation, except that the involved quantities carry underscores marking them as computational.
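To make that concrete: in the unbounded setting, Pinsker's inequality reads D(ρ‖σ) ≥ ‖ρ − σ‖₁² / (2 ln 2). My transcription of the computational version, with T̲ as a placeholder for the computationally measured trace distance (see the paper for the exact constants):

D̲(ρ‖σ) ≥ T̲(ρ, σ)² / (2 ln 2).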
One of our main contributions is to turn this process of "polynomial regularization" into a rigorous notion. It allows us to reap the benefits of regularizing to obtain simpler expressions that closely resemble their counterparts from unbounded information theory.
Of course, talking about polynomial scaling only makes sense when there is a scaling parameter. Therefore, all our results pertain to families of quantum states indexed by some parameter n. This is often the number of qubits, but need not be.
We propose the following remedy: we define the computational relative entropy D̲(ρₙ‖σₙ) as the best error exponent in asymmetric hypothesis testing when restricted to polynomially many copies and polynomial-time tests and take the polynomials to be arbitrarily large.
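In rough symbols, and with the caveat that this is my paraphrase (the paper pins down the exact quantifiers), the definition reads

D̲(ρ‖σ) := sup over polynomials p, q of liminf_{n→∞} −(1/p(n)) log β_{p(n),q(n)}(ρₙ, σₙ),

where β_{k,s} is the smallest type-II error over tests on k copies that are implementable by circuits of size at most s, at a fixed type-I error.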
But there is a catch! The connection between hypothesis testing and the relative entropy only holds if arbitrary tests can be performed. The optimal tests, however, could be exponentially complex to implement. We also need a large number of copies of the state to get sufficient statistics.
The reason the relative entropy and its derived quantities appear so often in information theory is that it quantifies the best achievable error exponent in asymmetric hypothesis testing, a primitive to which many important information-processing tasks can be reduced.
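In formulas, this is the quantum Stein's lemma (stated here in its usual unbounded form): if β_k(ε) denotes the smallest type-II error achievable by any test on k copies while keeping the type-I error below ε, then for every ε ∈ (0,1),

lim_{k→∞} −(1/k) log β_k(ε) = D(ρ‖σ).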
The definition of the Umegaki relative entropy.
The relative entropy D(ΟβΟ) is a fundamental quantity in classical and quantum information theory. It appears in many important theorems and serves as a mother quantity to derive other important measures from -- for example mutual information or entropy.
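For reference, here is the definition shown in the image (the standard Umegaki form):

D(ρ‖σ) := Tr[ρ (log ρ − log σ)] if supp(ρ) ⊆ supp(σ), and +∞ otherwise.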
The distracted boyfriend meme, with the "girlfriend" being the relative entropy and the distraction being the computational relative entropy.
You think we already have enough entropies and divergence measures? I don't think so!
In our latest work (arxiv.org/abs/2509.20472), we introduce yet another information theoretic measure -- the computational relative entropy.
Let's have a quick rundown 👇
I think the best bit is that they have a guy named "Clifford Mapp" on their team!
Very nice! Michael Wolf also has some very good newer ones: mediatum.ub.tum.de/download/170...
Registrations are now open for #QCTiP2025: qctip2025.com/registration....
Don't wait too long: spots for participants without a talk are first-come, first-served!
Context please
I guess that particular myth was formulated so that you can debunk it. I have never heard anyone claim that error mitigation cannot be used, just that it is not a scalable solution to the problem.
We should all recall once in a while that we as a community are in a hamster wheel of our own making. This means we can also just choose to be nicer and more helpful to each other.
That said, one-line rejects should not happen. At the end of the day, we've all been on the other side and been justifiably annoyed by bad reviews.
Therefore, it's more of an answer to the question: "how likely is this to cross the bar relative to the other submissions in your batch?". In that way, "weak reject" doesn't mean the submission is not good science, but just that there is so much other good stuff out there.
Something that is important for context, and that I only realized after seeing the process from the inside, is that the scores are not meant to judge the quality of the submission. The task of the PC is to make a program.
Submissions for QCTiP 2025 are now open.