5 Conjectures

The items below are conjectural research problems, not established results.

They are designed to be:

  • independent: each can be attacked without assuming any of the others,
  • self-contained: each comes with its own modeling assumptions,
  • geometry-agnostic: hardware enters only through explicit graph/function parameters,
  • compatible with non-Markovian, spatially correlated noise.

The current literature already establishes several nearby anchor points:

  • geometrically local circuits under sufficiently weak nonunital noise can support arbitrary-depth computation with polylogarithmic overhead in the Markovian/local setting;
  • random-circuit sampling under constant-rate nonunital noise can fail anticoncentration at all depths;
  • universal measurement-free fault-tolerant computation has been demonstrated in a specific Bacon-Shor-based framework;
  • local non-Markovian threshold theorems exist under bounded-coupling/locality assumptions;
  • process tensors / quantum combs and forgetful memory channels are standard general languages for non-Markovian noise;
  • superactivation of quantum capacity is known.

The formulations below are intended to unify and extend these ingredients into a single research program without collapsing them into one dependent chain.

Standing framework and notation

Let the physical system consist of \(n\) local \(q\)-dimensional subsystems.

1. Dynamic hardware geometry

Let \(\mathfrak G = \{G_t=(V,E_t)\}_{t\ge 0}\) be the time-dependent interaction graph, with \(|V|=n\). A static architecture is the special case \(G_t\equiv G\).

Use the following geometry parameters.

  • Maximum degree:

    \[ \Delta(\mathfrak G):=\sup_t \max_{v\in V}\deg_{G_t}(v). \]
  • Growth dimension:

    \[ d_{\mathrm{grow}}(\mathfrak G):=\inf\Bigl\{D>0:\exists C>0\ \text{s.t.}\ |B_{G_t}(v,r)|\le C(1+r)^D,\ \forall t,v,r\Bigr\}. \]
  • Worst-case edge expansion over static snapshots:

    \[ h_{\min}(\mathfrak G):=\inf_t \min_{1\le |S|\le n/2}\frac{|\partial_{G_t} S|}{|S|}. \]
  • Interaction-emulation depth functional \(D_{\mathrm{emu}}(H\to \mathfrak G)\): for any target interaction hypergraph \(H\) on logical qubits, \(D_{\mathrm{emu}}(H\to \mathfrak G)\) is the minimum physical depth required to realize one full layer of the interactions in \(H\) using only allowed local operations on \(\mathfrak G\), including SWAPs, teleportation primitives if available, and time-dependent edge reconfiguration.

This \(D_{\mathrm{emu}}\) is the key abstraction. It subsumes routing number, congestion, dilation, token-swapping depth, and reconfigurability into a single operational quantity.
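
As a toy illustration of the routing component of \(D_{\mathrm{emu}}\) (a minimal sketch, not part of the formal framework): on a static path graph, odd-even transposition routing realizes any permutation of \(n\) qubits in at most \(n\) nearest-neighbor SWAP layers, so one full layer of any matching \(H\) on a path costs \(D_{\mathrm{emu}}(H\to\text{path})=O(n)\).

```python
def odd_even_swap_layers(targets):
    """Route the qubit at position i to position targets[i] on a path by
    odd-even transposition: alternately compare-and-swap even-indexed and
    odd-indexed adjacent pairs.  Returns the number of nearest-neighbor
    SWAP layers used, which is at most n for any permutation."""
    tokens = list(targets)          # tokens[pos] = desired final position
    n = len(tokens)
    layers = 0
    while tokens != sorted(tokens):
        for start in (0, 1):        # one even layer, then one odd layer
            for i in range(start, n - 1, 2):
                if tokens[i] > tokens[i + 1]:    # swap out-of-order pair
                    tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]
            layers += 1
            if tokens == sorted(tokens):
                break
    return layers

# worst case: a full reversal of 8 qubits routes in at most 8 layers
assert odd_even_swap_layers(list(reversed(range(8)))) <= 8
```

The same quantity on richer geometries (with teleportation or edge reconfiguration) is exactly what \(D_{\mathrm{emu}}\) abstracts.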

2. Non-Markovian nonunital noise

The noise over \(T\) rounds is modeled by a causal process tensor

\[ \Upsilon_{0:T}, \]

equivalently a quantum comb / memory channel, acting between inserted control operations. This is a fully general language for temporally correlated open-system dynamics; forgetful memory channels and quantum combs supply the standard compositional framework for such processes.

Assume the following explicit regularity parameters.

  • Spatial locality radius \(r_0\): in each round, the environment can correlate only subsystems contained in graph balls of radius at most \(r_0\).

  • Temporal memory scale \(\tau\): the influence of control history applied more than \(\ell\) rounds in the past decays as

    \[ \sup_{\mathcal A,\mathcal B} \left\| \Phi^{\mathcal A}_{t+\ell:t} - \Phi^{\mathcal B}_{t+\ell:t} \right\|_\diamond \le C_{\mathrm{mem}} e^{-\ell/\tau}, \]

    whenever the two control histories \(\mathcal A,\mathcal B\) differ only before time \(t\); here \(\Phi^{\mathcal A}_{t+\ell:t}\) denotes the conditional round-\((t+\ell)\) map given the control history \(\mathcal A\) up to time \(t\).

  • Primitive nonunital marginal: each one-site reduced channel \(\Phi_{v,t}\) admits a decomposition

    \[ \Phi_{v,t} = \kappa_{v,t}\,\mathcal R_{\sigma^\star_{v,t}} + (1-\kappa_{v,t})\Psi_{v,t}, \]

    where \(\mathcal R_{\sigma^\star_{v,t}}\) is the reset channel

    \[ \mathcal R_{\sigma^\star_{v,t}}(\rho)=\operatorname{Tr}(\rho)\,\sigma^\star_{v,t}, \]

    with \(\sigma^\star_{v,t}\neq I/q\) and \(\Psi_{v,t}\) CPTP.

  • Nonunitality floor:

    \[ \eta_{\min}:=\inf_{v,t}\left\|\sigma^\star_{v,t}-\frac{I}{q}\right\|_1 >0. \]
  • Primitive contraction on traceless operators:

    \[ \alpha_{\max}:= \sup_{v,t}\sup_{\operatorname{Tr}X=0,\ X\neq 0} \frac{\|\Phi_{v,t}(X)\|_1}{\|X\|_1} < 1. \]
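
The forgetfulness parameter \(\tau\) has a minimal classical caricature (an illustrative sketch, not part of the model): a hidden two-state Markov environment with flip probabilities \(a\) (state \(0\to1\)) and \(b\) (state \(1\to0\)) forgets its initial condition at rate \(|1-a-b|\) per round, i.e. \(\tau = -1/\ln|1-a-b|\).

```python
def influence_of_history(ell, a, b):
    """Total-variation distance between the environment's state distribution
    after ell rounds, starting from the two conflicting histories 'state 0'
    vs 'state 1', for a two-state Markov chain with flip probabilities
    a (0 -> 1) and b (1 -> 0).  Decays exactly as |1 - a - b|**ell."""
    p, q = [1.0, 0.0], [0.0, 1.0]          # the two conflicting histories
    for _ in range(ell):
        p = [p[0] * (1 - a) + p[1] * b, p[0] * a + p[1] * (1 - b)]
        q = [q[0] * (1 - a) + q[1] * b, q[0] * a + q[1] * (1 - b)]
    return 0.5 * (abs(p[0] - q[0]) + abs(p[1] - q[1]))

# with a = 0.3, b = 0.2 the second eigenvalue is 0.5, so the influence
# halves every round: tau = 1 / ln 2
assert abs(influence_of_history(3, 0.3, 0.2) - 0.5 ** 3) < 1e-12
```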

This model contains idealized reset-to-\(\sigma^\star\) noise, amplitude-damping-type behavior whenever the marginal carries a strictly positive reset component (pure amplitude damping with \(\gamma<1\) transfers coherences at rate \(\sqrt{1-\gamma}\) and admits no such component, but its sufficiently dephased variants do), and spatially/temporally correlated variants thereof.
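
A minimal numeric check of the primitive decomposition, with hypothetical parameters \(p=0.1\) and \(\sigma^\star=\operatorname{diag}(0.8,0.2)\): for the qubit reset mixture \(\Phi=(1-p)\,\mathrm{id}+p\,\mathcal R_{\sigma^\star}\) one gets \(\kappa=p\), \(\eta_{\min}=\|\sigma^\star-I/2\|_1=|2a-1|\), and exactly \(\alpha_{\max}=1-p\), because the reset term annihilates traceless inputs.

```python
import math

def reset_mix(rho, p, a):
    """Phi(rho) = (1-p) rho + p Tr(rho) sigma_star with sigma_star =
    diag(a, 1-a); rho is a 2x2 complex matrix given as nested lists."""
    tr = rho[0][0] + rho[1][1]
    return [[(1 - p) * rho[0][0] + p * tr * a, (1 - p) * rho[0][1]],
            [(1 - p) * rho[1][0], (1 - p) * rho[1][1] + p * tr * (1 - a)]]

def trace_norm_2x2(m):
    """Sum of singular values of a 2x2 complex matrix, computed from
    s1^2 + s2^2 = Tr(M^dag M) and s1 * s2 = |det M|."""
    frob2 = sum(abs(m[i][j]) ** 2 for i in range(2) for j in range(2))
    det = abs(m[0][0] * m[1][1] - m[0][1] * m[1][0])
    disc = max(frob2 * frob2 - 4 * det * det, 0.0)
    s1 = math.sqrt((frob2 + math.sqrt(disc)) / 2)
    s2 = math.sqrt(max((frob2 - math.sqrt(disc)) / 2, 0.0))
    return s1 + s2

p, a = 0.1, 0.8
X = [[0.3, 0.2 - 0.5j], [0.2 + 0.5j, -0.3]]   # traceless Hermitian test operator
alpha = trace_norm_2x2(reset_mix(X, p, a)) / trace_norm_2x2(X)
assert abs(alpha - (1 - p)) < 1e-12           # contraction factor is exactly 1 - p
```

Here \(\eta_{\min}=|2a-1|=0.6\), so both the nonunitality floor and the contraction bound of the standing framework are met.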

3. Random-circuit and coding primitives

  • A local random-circuit ensemble \(\mathcal E_{n,D}\) is any ensemble in which each depth layer is drawn from a gate set forming an approximate local unitary \(2\)-design on the allowed neighborhoods.

  • A QLDPC family \(\{\mathcal C_n\}\) has Tanner graphs \(T_n\) of bounded degree.

  • For a logical simulation protocol, let \(\mathcal L_{n,L}\) denote the induced encoded logical channel after \(L\) logical steps, including all measurement-free correction operations and all physical noise.

Conjecture 1 - Nonunital anticoncentration obstruction under finite-memory correlated noise

Statement

Let \(U\sim \mathcal E_{n,D}\) be a local random circuit of depth \(D\) on \(\mathfrak G\), and let \(\Upsilon_{0:D}\) be a causal noise process satisfying the locality, forgetfulness, primitive nonunitality, and contraction assumptions above.

Let \(p^{(D)}_{U,\Upsilon}(x)\) be the computational-basis output distribution. Then there exists a constant

\[ c_{\mathrm{ac}} = c_{\mathrm{ac}}\bigl(q,\eta_{\min},\alpha_{\max},r_0,\tau,C_{\mathrm{mem}},\Delta(\mathfrak G)\bigr) > 0 \]

such that for all sufficiently large \(n\), and for all depths \(D\),

\[ \mathbb E_{U\sim \mathcal E_{n,D}} \left[ q^n \sum_x \bigl(p^{(D)}_{U,\Upsilon}(x)\bigr)^2 \right] \ge 1+c_{\mathrm{ac}}. \]

Equivalently, the noisy output ensemble fails strong anticoncentration uniformly in depth. A stronger version asserts that there exists \(c'_{\mathrm{ac}}>0\) such that the law of the random probabilities

\[ \left\{ q^n p^{(D)}_{U,\Upsilon}(x)\right\}_x \]

stays at total-variation distance at least \(c'_{\mathrm{ac}}\) from the Porter-Thomas law, uniformly in \(D\).
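
A heuristic caricature of the second-moment statement (illustrative only: the residual bias \(b\) is a hypothetical parameter, and the product form ignores all circuit structure): if a nonunitality floor leaves each output bit with bias \(b\), the normalized collision probability of the product distribution is \((1+4b^2)^n\), bounded away from the uniform value \(1\) uniformly in depth.

```python
import math
from itertools import product

def normalized_collision(probs):
    """q^n * sum_x p(x)^2 for an output distribution given as a flat list."""
    return len(probs) * sum(p * p for p in probs)

def biased_product(n, b):
    """Distribution of n independent bits, each 0 with probability 1/2 + b."""
    p0, p1 = 0.5 + b, 0.5 - b
    return [math.prod(p0 if bit == 0 else p1 for bit in bits)
            for bits in product((0, 1), repeat=n)]

# each biased bit inflates the collision probability by a factor 1 + 4 b^2
n, b = 4, 0.1
assert abs(normalized_collision(biased_product(n, b)) - (1 + 4 * b ** 2) ** n) < 1e-12
```

The conjecture asserts that chaotic gate dynamics under the stated noise model cannot wash such a bias out.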

Why this is a high-value topic

The Markovian constant-rate version is already known for realistic nonunital noise. The open extension is the physically relevant regime with temporal memory and spatial correlations. Proving this would largely close the loophole that near-term random-circuit complexity arguments can hide inside "realistic non-Markovianity." Refuting it would be equally significant, because it would exhibit a regime where chaotic spreading defeats nonunital bias even with finite memory.

Minimal theorem-sized targets

  1. Prove the conjecture for a depth-independent process tensor of bounded Markov order \(\tau\).
  2. Prove it for product amplitude-damping noise dressed by a finite-range classical hidden Markov process.
  3. Replace the full Porter-Thomas separation by only the second-moment inequality above.

Technical tools most likely to matter

  • Process-tensor / comb tensor-network representations.
  • Weingarten calculus and moment-operator methods.
  • Open-system Lieb-Robinson bounds.
  • Dobrushin-type contraction coefficients for CPTP maps.
  • Noncommutative hypercontractivity and log-Sobolev inequalities.
  • Matrix-product-operator approximations for finite-memory channels.
  • Approximate design theory for local random circuits.
  • Concentration of measure for correlated quantum stochastic processes.

Conjecture 2 - Measurement-free fault-tolerance threshold is governed by a cooling-capacity inequality

Definitions

Define the cooling capacity of a noise process \(\Upsilon\), denoted \(C_{\mathrm{cool}}(\Upsilon)\), as the supremum of rates \(R\) such that there exists a family of protocols using only:

  • local unitaries,
  • waiting under the ambient process \(\Upsilon\),
  • discarding designated "hot" subsystems,

which, from \(m\) physical subsystems, produces \(Rm\) qudits whose joint state is \(o(1)\)-close in trace norm to \(|0\rangle\langle 0|^{\otimes Rm}\) as \(m\to\infty\).

In the i.i.d. ideal reset model \(\mathcal R_{\sigma^\star}^{\otimes m}\), this reduces heuristically to the entropy-compression yield

\[ C_{\mathrm{cool}} = 1-\frac{S(\sigma^\star)}{\log q}, \]

but the conjecture is stated for the fully correlated non-Markovian setting.
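
For orientation, the heuristic i.i.d. yield can be evaluated directly (a sketch; the spectrum \(\operatorname{diag}(0.9,0.1)\) is a hypothetical example, and the entropy is taken base \(q\) so that \(C_{\mathrm{cool}}\in[0,1]\)):

```python
import math

def cooling_capacity_iid(spectrum, q):
    """Heuristic entropy-compression yield 1 - S(sigma*)/log q for the
    i.i.d. ideal reset model, with S the von Neumann entropy of sigma*
    evaluated on its eigenvalue spectrum (base-q logarithm)."""
    S = -sum(x * math.log(x, q) for x in spectrum if x > 0)
    return 1.0 - S

# qubit example: sigma* = diag(0.9, 0.1) yields about 0.531 clean qubits
# per physical qubit consumed
yield_ = cooling_capacity_iid([0.9, 0.1], q=2)
assert 0.52 < yield_ < 0.54
```

The conjectural content is what replaces this formula when the ambient process is correlated and non-Markovian.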

Define the required refresh rate \(C_{\mathrm{req}}(\mathfrak G,\mathcal F)\) of a measurement-free FT architecture \(\mathcal F\) compiled onto hardware family \(\mathfrak G\) to be the infimum asymptotic rate of near-pure ancillas per logical time step required to keep the logical error bounded below threshold.

Statement

For every family \(\mathcal F\) of fully unitary fault-tolerant constructions compiled onto a dynamic hardware family \(\mathfrak G\), there exists a threshold criterion of the form

\[ C_{\mathrm{cool}}(\Upsilon) > C_{\mathrm{req}}(\mathfrak G,\mathcal F) \]

that is both:

  • sufficient for arbitrarily long measurement-free fault-tolerant computation with polylogarithmic overhead, and
  • essentially necessary, in the sense that if

    \[ C_{\mathrm{cool}}(\Upsilon) < C_{\mathrm{req}}(\mathfrak G,\mathcal F), \]

    then no measurement-free protocol in the architecture class \(\mathcal F\) can maintain a nonzero asymptotic logical rate.

Why this is a high-value topic

This isolates the real thermodynamic content of the "quantum refrigerator" paradigm. The Markovian geometrically local case already shows that sufficiently weak nonunital noise can support arbitrary-depth computation without mid-circuit measurement. The research-level generalization is to identify the correct invariant quantity under correlated noise and arbitrary hardware geometry. Likewise, measurement-free universal FT exists in a specific Bacon-Shor-based setting, but not yet as a general cooling-capacity theorem.

Minimal theorem-sized targets

  1. Compute \(C_{\mathrm{cool}}\) for finite-memory primitive reset processes with commuting fixed points.
  2. Upper-bound \(C_{\mathrm{req}}\) for one concrete measurement-free architecture on a \(2\)D grid and on a reconfigurable array.
  3. Prove only the sufficient direction:

    \[ C_{\mathrm{cool}} > C_{\mathrm{req}} \implies \text{fault tolerance}. \]

Technical tools most likely to matter

  • Heat-bath algorithmic cooling and majorization theory.
  • Entropy rate of hidden Markov / quantum memory processes.
  • Asymptotic equipartition and one-shot smooth-entropy methods.
  • Distillable purity / purity concentration under locality constraints.
  • Approximate QEC criteria and subsystem-code threshold machinery.
  • Open-system decoupling methods.
  • Non-Markovian threshold techniques in the style of bounded local memory.
  • Resource theories of athermality / nonunitality.
  • Quantum Shannon theory for purity distillation under correlated resources.

Conjecture 3 - Congestion-dilation barrier for expander-style QLDPC on constrained hardware

Definitions

Let \(T_n\) be the Tanner graph of a QLDPC code family. For an embedding of \(T_n\) into the space-time hardware graph generated by \(\mathfrak G\), define:

  • \(\operatorname{dil}(f)\): maximum path length used to realize any Tanner edge,
  • \(\operatorname{cong}(f)\): maximum number of routed Tanner edges simultaneously using any physical edge-time resource.

Define the embedding congestion-dilation complexity

\[ \mathrm{CD}(T_n,\mathfrak G) := \inf_f \operatorname{cong}(f)\,\operatorname{dil}(f), \]

where the infimum is over all valid locality-respecting compilation maps \(f\).

Let \(D_{\mathrm{emu}}(T_n\to\mathfrak G)\) be the minimum physical depth needed to realize one full coherent syndrome-extraction round.

Statement

There exist constants \(c_1,c_2>0\), independent of \(n\), such that for any bounded-degree expanding Tanner family \(\{T_n\}\),

\[ c_1\,\mathrm{CD}(T_n,\mathfrak G) \;\le\; D_{\mathrm{emu}}(T_n\to\mathfrak G) \;\le\; c_2\,\mathrm{CD}(T_n,\mathfrak G)\,\operatorname{polylog}(n). \]

Consequently, under local circuit-level noise with effective amplitude \(\delta_{\mathrm{eff}}\) after incorporating finite memory, a necessary condition for any nonvanishing compiled fault-tolerance threshold is

\[ \limsup_{n\to\infty} \delta_{\mathrm{eff}}(n)\,\mathrm{CD}(T_n,\mathfrak G) < \delta_\star \]

for some architecture-dependent constant \(\delta_\star>0\).
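
A minimal cut-counting sketch of where the lower bound comes from (toy layout and toy edge set, both hypothetical): every Tanner edge whose endpoints land on opposite halves of an \(L\times L\) grid must cross the middle cut, which contains only \(L\) physical edges, so in any single routing round \(\operatorname{cong}\ge \text{crossing}/L\); for an expanding \(T_n\) the crossing number is \(\Theta(n)\) under any balanced layout, forcing the congestion alone to grow like \(\sqrt n\).

```python
def congestion_lower_bound(edges, placement, L):
    """Lower-bound the routing congestion on an L x L grid for one fixed
    layout: every Tanner edge whose endpoints are placed in opposite halves
    (rows < L/2 vs rows >= L/2) must use one of the L physical edges of the
    middle cut."""
    crossing = sum(1 for u, v in edges
                   if (placement[u][0] < L // 2) != (placement[v][0] < L // 2))
    return crossing / L

L = 16
n = L * L
placement = {u: (u // L, u % L) for u in range(n)}   # row-major layout
edges = [(i, i + n // 2) for i in range(n // 2)]     # a long-range matching
assert congestion_lower_bound(edges, placement, L) == L / 2
```

A genuine proof must take the minimum over all layouts and schedules; expansion of \(T_n\) is what keeps the crossing number large for every layout.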

Why this is a high-value topic

This would turn a broad architectural intuition into a precise theorem: expander-style QLDPC codes are not obstructed merely by "2D locality," but by a quantified embedding complexity. If true, it gives an architecture-independent criterion separating static lattices from genuinely reconfigurable platforms. If false, it implies that some compiler or gadget family beats the expected routing barrier, which would itself be a major breakthrough.

Minimal theorem-sized targets

  1. Prove the lower bound only for static \(2\)D grids.
  2. Replace full QLDPC syndrome extraction by one check layer or one expander block.
  3. Prove the statement for SWAP-only compilers before allowing teleportation or code switching.

Technical tools most likely to matter

  • Leighton-type congestion-dilation lower bounds.
  • Bisection width, expansion mismatch, graph-minor obstructions.
  • Token swapping, routing-number theory, sorting networks.
  • Hypergraph embedding and edge-coloring for syndrome schedules.
  • Space-time circuit lower bounds.
  • Malignant-set counting under correlated noise.
  • Diamond-norm accumulation bounds with finite memory.
  • Tanner-graph expansion and local testability.
  • Single-shot QEC scheduling analysis.

Conjecture 4 - Information-theoretic logical survival can strictly separate from efficient decodability

Statement

There exist code families \(\{\mathcal C_n\}\), dynamic hardware families \(\{\mathfrak G_n\}\), and correlated nonunital noise processes \(\{\Upsilon^{(n)}\}\) such that the induced encoded logical channels \(\mathcal L_{n,L}\) satisfy:

  1. Information-theoretic survival: there exist \(c_0>0\) and \(\beta>0\) such that for all sufficiently large \(n\) and all

    \[ L \le \exp(n^\beta), \]

    one has

    \[ \sup_{\mathcal R} F_e\!\left(\mathcal R\circ \mathcal L_{n,L}\right)\ge c_0, \]

    where the supremum is over all physically allowed recovery maps \(\mathcal R\).

  2. Algorithmic intractability of decoding: for the same family, any decoder achieving entanglement fidelity at least \(c_0/2\) on a typical instance requires superpolynomial classical time, or equivalently, approximating the optimal recovery value

    \[ \sup_{\mathcal R} F_e(\mathcal R\circ\mathcal L_{n,L}) \]

    within inverse-polynomial additive error is average-case computationally intractable.
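
For concreteness, \(F_e\) above is the standard entanglement fidelity of a channel with respect to a maximally entangled input:

\[ F_e(\Lambda) := \langle \Phi |\, (\mathrm{id}_R \otimes \Lambda)\bigl(|\Phi\rangle\langle\Phi|\bigr)\, | \Phi \rangle, \]

where \(|\Phi\rangle\) is maximally entangled between a reference \(R\) and the logical space; \(F_e\) close to \(1\) certifies recovery of the entire logical subspace rather than of any single state.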

Interpretation

This conjecture formalizes a clean separation between:

  • physical persistence of logical quantum information, and
  • efficient recoverability by an algorithmically implementable decoder.

It is the rigorous version of the "decoder-independence" intuition: the physics may preserve logical information even when extracting it efficiently is computationally hard.

Why this is a high-value topic

If true, it would establish that "fault tolerance" has two inequivalent thresholds: an information-theoretic threshold and an algorithmic threshold. If false, it would indicate a deep equivalence between preservation and efficient exploitation in this class of open-system architectures.

Minimal theorem-sized targets

  1. Prove the separation for exact decoding before approximate decoding.
  2. Replace average-case hardness by worst-case hardness.
  3. Prove a weaker statement where survival is certified by coherent information rather than entanglement fidelity.

Technical tools most likely to matter

  • Approximate Knill-Laflamme conditions.
  • Coherent information, decoupling, and quantum-capacity lower bounds.
  • Complexity of LDPC decoding and approximate inference.
  • Tensor-network contraction hardness.
  • Reductions from classical CSP / syndrome-decoding problems.
  • Average-case hardness frameworks for noisy inference.
  • Quantum PCP / local-Hamiltonian-inspired decoding reductions.
  • Interactive-proof formulations of recovery certification.
  • One-shot recovery bounds for correlated channels.

Conjecture 5 - Superactivation of measurement-free modular capacity

Definitions

Let \(A\) and \(B\) be two modules, each with its own local nonunital correlated ambient process \(\Upsilon_A\) and \(\Upsilon_B\). Let \(\mathcal N\) be a noisy inter-module link channel, possibly with memory.

Define the measurement-free modular capacity

\[ Q_{\mathrm{mf}}(\mathcal N \mid \Upsilon_A,\Upsilon_B) \]

to be the supremum asymptotic rate at which logical qubits can be transmitted from \(A\) to \(B\) by a protocol that uses only:

  • local unitaries inside the modules,
  • the ambient nonunital processes \(\Upsilon_A,\Upsilon_B\),
  • uses of \(\mathcal N\),
  • discarding of designated subsystems,

with no mid-circuit measurements and vanishing asymptotic logical error.

Statement

There exist two link channels \(\mathcal N_1,\mathcal N_2\), each individually satisfying

\[ Q_{\mathrm{mf}}(\mathcal N_i \mid \Upsilon_A,\Upsilon_B)=0, \qquad Q(\mathcal N_i)=0, \]

but whose joint use satisfies

\[ Q_{\mathrm{mf}}(\mathcal N_1\otimes \mathcal N_2 \mid \Upsilon_A,\Upsilon_B) > 0. \]

Equivalently: there exists a genuine superactivation phenomenon for measurement-free modular FT capacity, not reducible to ordinary channel quantum capacity alone.
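
For calibration (a numeric reminder; the \(50\%\) erasure channel is one standard zero-capacity ingredient in known superactivation constructions): the \(d\)-dimensional erasure channel with erasure probability \(p\) has quantum capacity \((1-2p)\log_2 d\) for \(p<1/2\) and \(0\) for \(p\ge 1/2\), where it becomes antidegradable.

```python
import math

def q_erasure(p, d=2):
    """Quantum capacity of the d-dimensional erasure channel with erasure
    probability p: (1 - 2p) log2(d) below p = 1/2, and zero at and above
    p = 1/2 (the channel is antidegradable there)."""
    return max(0.0, (1.0 - 2.0 * p) * math.log2(d))

assert q_erasure(0.5) == 0.0     # an individually useless link
assert q_erasure(0.25) == 0.5    # half a logical qubit per use below p = 1/2
```

The conjecture asks whether such individually dead links, combined with ambient nonunital refresh resources, can jointly cross the measurement-free modular threshold.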

Why this is a high-value topic

Ordinary superactivation of quantum capacity is known. What is not known is whether the combination of local nonunital refresh resources and multiple individually useless interconnects can jointly cross a measurement-free modular FT threshold. If true, it would create a principled foundation for heterogeneous modular architectures with apparently "dead" links. If false, it would show that refrigeration-assisted FT throughput is fundamentally more rigid than ordinary communication capacity.

Minimal theorem-sized targets

  1. Prove the conjecture for one module-qubit pair with memoryless links first.
  2. Replace positive asymptotic rate by a one-shot or finite-blocklength advantage.
  3. Restrict to one specific zero-capacity family, such as symmetric / erasure-assisted or amplitude-damping-derived constructions.

Technical tools most likely to matter

  • Smith-Yard superactivation methods.
  • Coherent-information optimization and nonadditivity.
  • Memory-channel coding theorems for forgetful channels.
  • Entanglement distillation over correlated links.
  • Resource theories of purity and athermality under locality constraints.
  • One-shot decoupling and smooth entropies.
  • Network coding and multipath coherent routing.
  • Degradability / antidegradability analysis.
  • Strong-converse and finite-blocklength bounds for correlated channels.

Why these five are not redundant

These five topics divide the landscape into logically distinct questions:

  1. Conjecture 1 asks about the output statistics of noisy random circuits.
  2. Conjecture 2 asks for the correct thermodynamic threshold criterion for measurement-free FTQC.
  3. Conjecture 3 asks for a geometry/compiler lower bound for realizing high-expansion code structure on constrained hardware.
  4. Conjecture 4 asks whether physical preservation and efficient decodability are genuinely different thresholds.
  5. Conjecture 5 asks whether modular measurement-free throughput can exhibit a superactivation phenomenon.

A proof or disproof of any one of them would be significant even if the other four remain open.

Best first-paper targets

From a theorem-production standpoint, the most tractable entry points are:

  • Conjecture 1, restricted to bounded Markov order and second moments only.
  • Conjecture 3, restricted to static \(2\)D grids and SWAP-only compilation.
  • Conjecture 2, proving only the sufficient direction

    \[ C_{\mathrm{cool}} > C_{\mathrm{req}} \implies \text{measurement-free FT possible}. \]